Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory | How To Mfa Server Migration Utility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md | + + Title: How to use the MFA Server Migration Utility to migrate to Azure AD MFA - Azure Active Directory +description: Step-by-step guidance to migrate MFA server settings to Azure AD using the MFA Server Migration Utility. +++++ Last updated : 08/30/2022+++++++++# MFA Server migration ++This topic covers how to migrate MFA settings for Azure Active Directory (Azure AD) users from on-premises Azure MFA Server to Azure AD Multi-Factor Authentication. ++## Solution overview ++The MFA Server Migration Utility helps synchronize multifactor authentication data stored in the on-premises Azure MFA Server directly to Azure AD MFA. +After the authentication data is migrated to Azure AD, users can perform cloud-based MFA seamlessly without having to register again or confirm authentication methods. +Admins can use the MFA Server Migration Utility to target single users or groups of users for testing and controlled rollout without having to make any tenant-wide changes. ++## Limitations and requirements ++- The MFA Server Migration Utility is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). +- The MFA Server Migration Utility requires a new preview build of the MFA Server solution to be installed on your Primary MFA Server. The build makes updates to the MFA Server data file, and includes the new MFA Server Migration Utility. You don't have to update the WebSDK or User portal. Installing the update _doesn't_ start the migration automatically. +- The MFA Server Migration Utility copies the data from the database file onto the user objects in Azure AD. 
During migration, users can be targeted for Azure AD MFA for testing purposes using [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md). Staged migration lets you test without making any changes to your domain federation settings. Once migrations are complete, you must finalize your migration by making changes to your domain federation settings. +- AD FS running Windows Server 2016 or higher is required to provide MFA authentication on any AD FS relying parties, not including Azure AD and Office 365. +- Review your AD FS claims rules and make sure none requires MFA to be performed on-premises as part of the authentication process. +- Staged rollout can target a maximum of 500,000 users (10 groups containing a maximum of 50,000 users each). ++## Migration guide ++|Phase|Steps| +|:---|:---| +|Preparations |[Identify Azure AD MFA Server dependencies](#identify-azure-ad-mfa-server-dependencies) | +||[Backup Azure AD MFA Server datafile](#backup-azure-ad-mfa-server-datafile) | +||[Install MFA Server update](#install-mfa-server-update) | +||[Configure MFA Server Migration Utility](#configure-the-mfa-server-migration-utility) | +|Migrations |[Migrate user data](#migrate-user-data)| +||[Validate and test](#validate-and-test)| +||[Staged Rollout](#enable-staged-rollout-using-azure-portal) | +||[Educate users](#educate-users)| +||[Complete user migration](#complete-user-migration)| +|Finalize |[Migrate MFA Server dependencies](#migrate-mfa-server-dependencies)| +||[Update domain federation settings](#update-domain-federation-settings)| +||[Disable MFA Server User portal](#optional-disable-mfa-server-user-portal)| +||[Decommission MFA server](#decommission-mfa-server)| ++An MFA Server migration generally includes the steps in the following process: +++A few important points: ++**Phase 1** should be repeated as you add test users. 
+ - The migration tool uses Azure AD groups for determining the users for which authentication data should be synced between MFA Server and Azure AD MFA. After user data has been synced, that user is then ready to use Azure AD MFA. + - Staged Rollout allows you to reroute users to Azure AD MFA, also using Azure AD groups. + While you certainly could use the same groups for both tools, we recommend against it as users could potentially be redirected to Azure AD MFA before the tool has synced their data. We recommend setting up one set of Azure AD groups for syncing authentication data by the MFA Server Migration Utility, and another set of groups for Staged Rollout to direct targeted users to Azure AD MFA rather than on-premises. ++**Phase 2** should be repeated as you migrate your user base. By the end of Phase 2, your entire user base should be using Azure AD MFA for all workloads federated against Azure AD. ++During the previous phases, you can remove users from the Staged Rollout groups to take them out of scope of Azure AD MFA and route them back to your on-premises Azure MFA server for all MFA requests originating from Azure AD. ++**Phase 3** requires moving all clients that authenticate to the on-premises MFA Server (VPNs, password managers, and so on) to Azure AD federation via SAML/OAUTH. If modern authentication standards aren't supported, you're required to stand up NPS server(s) with the Azure AD MFA extension installed. Once dependencies are migrated, users should no longer use the User portal on the MFA Server, but rather should manage their authentication methods in Azure AD ([aka.ms/mfasetup](https://aka.ms/mfasetup)). Once users begin managing their authentication data in Azure AD, those methods won't be synced back to MFA Server. If you roll back to the on-premises MFA Server after users have made changes to their Authentication Methods in Azure AD, those changes will be lost. 
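The recommendation above, keeping the sync groups separate from the Staged Rollout groups, can be sanity-checked before each rollout wave. The following sketch is illustrative only: plain Python over in-memory member sets with hypothetical user names, not part of the migration tooling. In practice the memberships would come from Microsoft Graph group-membership queries.

```python
# Illustrative sketch: flag users who are targeted by Staged Rollout before the
# MFA Server Migration Utility has synced their authentication data. Member
# sets are hypothetical; in a real tenant they'd come from Microsoft Graph.
from typing import Set

def users_at_risk(sync_members: Set[str], rollout_members: Set[str]) -> Set[str]:
    """Return users routed to Azure AD MFA whose methods may not be synced yet."""
    return rollout_members - sync_members

synced = {"alice@contoso.com", "bob@contoso.com"}
rollout = {"bob@contoso.com", "carol@contoso.com"}
print(sorted(users_at_risk(synced, rollout)))  # ['carol@contoso.com']
```

Running a check like this before adding a group to Staged Rollout yields the users who should be added to the sync group (or removed from the rollout group) first.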
After user migrations are complete, change the [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) domain federation setting. The change tells Azure AD to no longer perform MFA on-premises and to perform _all_ MFA requests with Azure AD MFA, regardless of group membership. ++The following sections explain the migration steps in more detail. ++### Identify Azure AD MFA Server dependencies ++We've worked hard to ensure that moving onto our cloud-based Azure AD MFA solution will maintain and even improve your security posture. There are three broad categories that should be used to group dependencies: ++- [MFA methods](#mfa-methods) +- [User portal](#user-portal) +- [Authentication services](#authentication-services) ++To help your migration, we've matched widely used MFA Server features with the functional equivalent in Azure AD MFA for each category. ++#### MFA methods ++Open MFA Server, click **Company Settings**: ++++|MFA Server|Azure AD MFA| +|:---|:---| +|**General Tab**|| +|**User Defaults section**|| +|Phone call (Standard)|No action needed| +|Text message (OTP)<sup>*</sup>|No action needed| +|Mobile app (Standard)|No action needed| +|Phone Call (PIN)<sup>*</sup>|Enable Voice OTP | +|Text message (OTP + PIN)<sup>**</sup>|No action needed| +|Mobile app (PIN)<sup>*</sup>|Enable [number matching](how-to-mfa-number-match.md) | +|Phone call/text message/mobile app/OATH token language|Language settings will be automatically applied to a user based on the locale settings in their browser| +|**Default PIN rules section**|Not applicable; see updated methods in the preceding screenshot| +|**Username Resolution tab**|Not applicable; username resolution isn't required for Azure AD MFA| +|**Text Message tab**|Not applicable; Azure AD MFA uses a default message for text messages| +|**OATH Token tab**|Not applicable; Azure AD MFA uses a default message for OATH tokens| +|Reports|[Azure AD 
Authentication Methods Activity reports](howto-authentication-methods-activity.md)| ++<sup>*</sup>When a PIN is used to provide proof-of-presence functionality, the functional equivalent is provided above. PINs that aren't cryptographically tied to a device don't sufficiently protect against scenarios where a device has been compromised. To protect against these scenarios, including [SIM swap attacks](https://wikipedia.org/wiki/SIM_swap_scam), move users to more secure methods according to Microsoft authentication methods [best practices](concept-authentication-methods.md). ++<sup>**</sup>The default SMS MFA experience in Azure AD MFA sends users a code, which they're required to enter in the login window as part of authentication. The requirement to roundtrip the SMS code provides proof-of-presence functionality. ++#### User portal ++Open MFA Server, click **User Portal**: +++|MFA Server|Azure AD MFA| +|:---|:---| +|**Settings Tab**|| +|User portal URL|[aka.ms/mfasetup](https://aka.ms/mfasetup)| +|Allow user enrollment|See [Combined security information registration](concept-registration-mfa-sspr-combined.md)| +|- Prompt for backup phone|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)| +|- Prompt for third-party OATH token|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)| +|Allow users to initiate a One-Time Bypass|See [Azure AD TAP functionality](howto-authentication-temporary-access-pass.md)| +|Allow users to select method|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)| +|- Phone call|See [Phone call documentation](howto-mfa-mfasettings.md#phone-call-settings)| +|- Text message|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)| +|- Mobile app|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)| +|- OATH token|See [OATH token documentation](howto-mfa-mfasettings.md#oath-tokens)| +|Allow users to select language|Language settings will be 
automatically applied to a user based on the locale settings in their browser| +|Allow users to activate mobile app|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)| +|- Device limit|Azure AD limits users to 5 cumulative devices (mobile app instances + hardware OATH token + software OATH token) per user| +|Use security questions for fallback|Azure AD allows users to choose a fallback method at authentication time should the chosen authentication method fail| +|- Questions to answer|Security Questions in Azure AD can only be used for SSPR. See more details for [Azure AD Custom Security Questions](concept-authentication-security-questions.md#custom-security-questions)| +|Allow users to associate third-party OATH token|See [OATH token documentation](howto-mfa-mfasettings.md#oath-tokens)| +|Use OATH token for fallback|See [OATH token documentation](howto-mfa-mfasettings.md#oath-tokens)| +|Session Timeout|| +|**Security Questions tab** |Security questions in MFA Server were used to gain access to the User portal. Azure AD MFA only supports security questions for self-service password reset. See [security questions documentation](concept-authentication-security-questions.md).| +|**Passed Sessions tab**|All authentication method registration flows are managed by Azure AD and don't require configuration| +|**Trusted IPs**|[Azure AD trusted IPs](howto-mfa-mfasettings.md#trusted-ips)| ++Any MFA methods available in MFA Server must be enabled in Azure AD MFA by using [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings). +Users can't try their newly migrated MFA methods unless they're enabled. ++#### Authentication services +Azure MFA Server can provide MFA functionality for third-party solutions that use RADIUS or LDAP by acting as an authentication proxy. To discover RADIUS or LDAP dependencies, click **RADIUS Authentication** and **LDAP Authentication** options in MFA Server. 
For each of these dependencies, determine if these third parties support modern authentication. If so, consider federation directly with Azure AD. ++For RADIUS deployments that can't be upgraded, you'll need to deploy an NPS Server and install the [Azure AD MFA NPS extension](howto-mfa-nps-extension.md). ++For LDAP deployments that can't be upgraded or moved to RADIUS, [determine if Azure Active Directory Domain Services can be used](/azure/active-directory/fundamentals/auth-ldap). In most cases, LDAP was deployed to support in-line password changes for end users. Once migrated, end users can manage their passwords by using [self-service password reset in Azure AD](tutorial-enable-sspr.md). ++If you enabled the [MFA Server Authentication provider in AD FS 2.0](/azure/active-directory/authentication/howto-mfaserver-adfs-windows-server#secure-windows-server-ad-fs-with-azure-multi-factor-authentication-server) on any relying party trusts except for the Office 365 relying party trust, you'll need to upgrade to [AD FS 3.0](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server) or federate those relying parties directly to Azure AD if they support modern authentication methods. Determine the best plan of action for each of the dependencies. ++### Backup Azure AD MFA Server datafile +Make a backup of the MFA Server data file located at %programfiles%\Multi-Factor Authentication Server\Data\PhoneFactor.pfdata (default location) on your primary MFA Server. Make sure you have a copy of the installer for your currently installed version in case you need to roll back. If you no longer have a copy, contact Customer Support Services. ++Depending on user activity, the data file can become outdated quickly. Any changes made to MFA Server, or any end-user changes made through the portal after the backup won't be captured. If you roll back, any changes made after this point won't be restored. 
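Because any changes made after the backup are lost on rollback, it's worth taking a fresh, timestamped copy immediately before the upgrade. The sketch below illustrates that idea in Python; the paths are demo placeholders, not the server's real locations, and on the server itself you'd copy PhoneFactor.pfdata from the path above with whatever tooling you normally use.

```python
# Minimal sketch: take a timestamped backup of a data file before an upgrade.
# Paths are demo placeholders; on a real MFA Server the source is
# %programfiles%\Multi-Factor Authentication Server\Data\PhoneFactor.pfdata.
import shutil
import time
from pathlib import Path

def backup_datafile(src: Path, backup_dir: Path) -> Path:
    """Copy src into backup_dir with a timestamp in the file name."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_dir / f"{src.stem}-{stamp}{src.suffix}"
    shutil.copy2(src, dest)  # copy2 also preserves file timestamps
    return dest
```

Keeping each backup uniquely named avoids overwriting an earlier good copy if you need to retry the upgrade.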
++### Install MFA Server update +Run the new installer on the Primary MFA Server. Before you upgrade a server, remove it from load balancing or traffic sharing with other MFA Servers. You don't need to uninstall your current MFA Server before running the installer. The installer performs an in-place upgrade using the current installation path (for example, C:\Program Files\Multi-Factor Authentication Server). If you're prompted to install a Microsoft Visual C++ 2015 Redistributable update package, accept the prompt. Both the x86 and x64 versions of the package are installed. You don't need to install updates for the User portal, Web SDK, or AD FS Adapter. ++After the installation is complete, it can take several minutes for the datafile to be upgraded. During this time, the User portal may have issues connecting to the MFA Service. **Don't restart the MFA Service or the MFA Server during this time.** This behavior is normal. Once the upgrade is complete, the primary server's main service will again be functional. ++You can check \Program Files\Multi-Factor Authentication Server\Logs\MultiFactorAuthSvc.log to make sure the upgrade is complete. You should see **Completed performing tasks to upgrade from 23 to 24**. ++>[!NOTE] +>After you run the installer on your primary server, secondary servers may begin to log **Unhandled SB** entries. This is due to schema changes made on the primary server that aren't recognized by secondary servers. These errors are expected. In environments with 10,000 users or more, the amount of log entries can increase significantly. To mitigate this issue, you can increase the file size of your MFA Server logs, or upgrade your secondary servers. ++### Configure the MFA Server Migration Utility +After installing the MFA Server update, open an elevated PowerShell command prompt: hover over the PowerShell icon, right-click, and click **Run as Administrator**. 
Run the .\Configure-MultiFactorAuthMigrationUtility.ps1 script found in your MFA Server installation directory (C:\Program Files\Multi-factor Authentication Server by default). ++This script will require you to provide credentials for an Application Administrator in your Azure AD tenant. The script will then create a new MFA Server Migration Utility application within Azure AD, which will be used to write user authentication methods to each Azure AD user object. ++For government cloud customers who wish to carry out migrations, replace ".com" entries in the script with ".us". This script will then write the HKLM:\SOFTWARE\WOW6432Node\Positive Networks\PhoneFactor\ StsUrl and GraphUrl registry entries and instruct the Migration Utility to use the appropriate Graph endpoints. ++You'll also need access to the following URLs: ++- `https://graph.microsoft.com/*` (or `https://graph.microsoft.us/*` for government cloud customers) +- `https://login.microsoftonline.com/*` (or `https://login.microsoftonline.us/*` for government cloud customers) ++The script will instruct you to grant admin consent to the newly created application. Navigate to the URL provided, or within the Azure AD portal, click **Application Registrations**, find and select the **MFA Server Migration Utility** app, click **API permissions**, and then grant the appropriate permissions. +++Once complete, navigate to the Multi-factor Authentication Server folder, and open the **MultiFactorAuthMigrationUtilityUI** application. You should see the following screen: +++You've successfully installed the Migration Utility. ++### Migrate user data +Migrating user data doesn't remove or alter any data in the Multi-Factor Authentication Server database. Likewise, this process won't change where a user performs MFA. This process is a one-way copy of data from the on-premises server to the corresponding user object in Azure AD. 
++The MFA Server Migration utility targets a single Azure AD group for all migration activities. You can add users directly to this group, or add other groups. You can also add them in stages during the migration. ++To begin the migration process, enter the name or GUID of the Azure AD group you want to migrate. Once complete, press Tab or click outside the window and the utility will begin searching for the appropriate group. The window will populate all users in the group. A large group can take several minutes to finish. ++To view user attribute data for a user, highlight the user, and select **View**: +++This window displays the attributes for the selected user in both Azure AD and the on-premises MFA Server. You can use this window to view how data was written to a user after they've been migrated. ++The settings option allows you to change the settings for the migration process: +++- Migrate – This setting allows you to specify which method(s) should be migrated for the selection of users +- User Match – Allows you to specify a different attribute for matching users instead of the default UPN-matching +- Automatic synchronization – Starts a background service that will continually monitor any authentication method changes to users in the on-premises MFA Server, and write them to Azure AD at the specified time interval ++The migration process can be an automatic process, or a manual process. ++The manual process steps are: ++1. To begin the migration process for a user or selection of multiple users, press and hold the Ctrl key while selecting each of the user(s) you wish to migrate. +1. After you select the desired users, click **Migrate Users** > **Selected users** > **OK**. +1. To migrate all users in the group, click **Migrate Users** > **All users in AAD group** > **OK**. 
++For the automatic process, click **Automatic synchronization** in the settings dialog, and then select whether you want all users to be synced, or only members of a given Azure AD group. ++The following table lists the sync logic for the various methods. ++| Method | Logic | +|---|---| +|**Phone** |If there's no extension, update MFA phone.<br>If there's an extension, update Office phone.<br> Exception: If the default method is Text Message, drop extension and update MFA phone.| +|**Backup Phone**|If there's no extension, update Alternate phone.<br>If there's an extension, update Office phone.<br>Exception: If both Phone and Backup Phone have an extension, skip Backup Phone.| +|**Mobile App**|Maximum of five devices will be migrated or only four if the user also has a hardware OATH token.<br>If there are multiple devices with the same name, only migrate the most recent one.<br>Devices will be ordered from newest to oldest.<br>If devices already exist in Azure AD, match on OATH Token Secret Key and update.<br>- If there's no match on OATH Token Secret Key, match on Device Token<br>-- If found, create a Software OATH Token for the MFA Server device to allow OATH Token method to work. Notifications will still work using the existing Azure AD MFA device.<br>-- If not found, create a new device.<br>If adding a new device will exceed the five-device limit, the device will be skipped. | +|**OATH Token**|If devices already exist in Azure AD, match on OATH Token Secret Key and update.<br>- If not found, add a new Hardware OATH Token device.<br>If adding a new device will exceed the five-device limit, the OATH token will be skipped.| ++MFA Methods will be updated based on what was migrated and the default method will be set. MFA Server will track the last migration timestamp and only migrate the user again if the user's MFA settings change or an admin modifies what to migrate in the **Settings** dialog. 
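The **Phone** row of the sync-logic table can be restated as a small decision function. The sketch below is a plain-Python paraphrase of the documented rules for illustration only; the field names `mfaPhone` and `officePhone` and the `x`-suffix extension format are hypothetical stand-ins, not the Migration Utility's actual attribute names or code.

```python
# Paraphrase of the documented Phone sync rules (illustrative only; field
# names and the extension format are hypothetical stand-ins):
#   - no extension      -> write the number to the user's MFA phone
#   - extension present -> write it to the Office phone instead
#   - exception: if the default method is Text Message, drop the extension
#     and write the MFA phone anyway.
from typing import Optional, Tuple

def sync_phone(number: str, extension: Optional[str], default_method: str) -> Tuple[str, str]:
    """Return (target_field, value) for a migrated primary phone number."""
    if extension and default_method != "Text Message":
        return ("officePhone", f"{number} x{extension}")
    return ("mfaPhone", number)

print(sync_phone("+1 5551234567", None, "Phone Call"))    # ('mfaPhone', '+1 5551234567')
print(sync_phone("+1 5551234567", "88", "Phone Call"))    # ('officePhone', '+1 5551234567 x88')
print(sync_phone("+1 5551234567", "88", "Text Message"))  # ('mfaPhone', '+1 5551234567')
```

The Text Message exception exists because an SMS code can't be delivered to a number with an extension, so the extension is dropped rather than routing the number to the Office phone field.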
++During testing, we recommend doing a manual migration first, and testing to ensure a given set of users behaves as expected. Once testing is successful, turn on automatic synchronization for the Azure AD group you wish to migrate. As you add users to this group, their information will be automatically synchronized to Azure AD. MFA Server Migration Utility targets one Azure AD group; however, that group can encompass both users and nested groups of users. ++Once complete, a confirmation will inform you of the tasks completed: +++As mentioned in the confirmation message, it can take several minutes for the migrated data to appear on user objects within Azure AD. Users can view their migrated methods by navigating to [aka.ms/mfasetup](https://aka.ms/mfasetup). ++### Validate and test ++Once you've successfully migrated user data, you can validate the end-user experience using Staged Rollout before making the global tenant change. The following process will allow you to target specific Azure AD group(s) for Staged Rollout for MFA. Staged Rollout tells Azure AD to perform MFA by using Azure AD MFA for users in the targeted groups, rather than sending them on-premises to perform MFA. To validate and test, we recommend using the Azure portal, but if you prefer, you can also use Microsoft Graph. ++#### Enable Staged Rollout using Azure portal ++1. Navigate to the following url: [Enable staged rollout features - Microsoft Azure](https://portal.azure.com/?mfaUIEnabled=true%2F#view/Microsoft_AAD_IAM/StagedRolloutEnablementBladeV2). ++1. Change **Azure multifactor authentication (preview)** to **On**, and then click **Manage groups**. ++ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/staged-rollout.png" alt-text="Screenshot of Staged Rollout."::: ++1. Click **Add groups** and add the group(s) containing users you wish to enable for Azure MFA. Selected groups appear in the displayed list. 
++ >[!NOTE] + >Any groups you target using the Microsoft Graph method below will also appear in this list. ++ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/managed-groups.png" alt-text="Screenshot of Manage Groups menu."::: ++#### Enable Staged Rollout using Microsoft Graph ++1. Create the featureRolloutPolicy + 1. Navigate to [aka.ms/ge](https://aka.ms/ge) and log in to Graph Explorer using a Hybrid Identity Administrator account in the tenant you wish to set up for Staged Rollout. + 1. Ensure POST is selected targeting the following endpoint: + `https://graph.microsoft.com/v1.0/policies/featureRolloutPolicies` + 1. The body of your request should contain the following (change **MFA rollout policy** to a name and description for your organization): + + ```msgraph-interactive + { + "displayName": "MFA rollout policy", + "description": "MFA rollout policy", + "feature": "multiFactorAuthentication", + "isEnabled": true, + "isAppliedToOrganization": false + } + ``` + + :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/body.png" alt-text="Screenshot of request."::: ++ 1. Perform a GET with the same endpoint and make note of the **ID** value (crossed out in the following image): + + :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/get.png" alt-text="Screenshot of GET command."::: ++1. Target the Azure AD group(s) that contain the users you wish to test + 1. Create a POST request with the following endpoint (replace {ID of policy} with the **ID** value you copied from step 1d): ++ `https://graph.microsoft.com/v1.0/policies/featureRolloutPolicies/{ID of policy}/appliesTo/$ref` ++ 1. 
The body of the request should contain the following (replace {ID of group} with the object ID of the group you wish to target for staged rollout): + + ```msgraph-interactive + { + "@odata.id": "https://graph.microsoft.com/v1.0/directoryObjects/{ID of group}" + } + ``` ++ 1. Repeat steps a and b for any other groups you wish to target with staged rollout. + 1. You can view the current policy in place by doing a GET against the following URL: ++ `https://graph.microsoft.com/v1.0/policies/featureRolloutPolicies/{policyID}?$expand=appliesTo` ++ The preceding process uses the [featureRolloutPolicy resource](/graph/api/resources/featurerolloutpolicy?view=graph-rest-1.0&preserve-view=true). The public documentation hasn't yet been updated with the new multifactorAuthentication feature, but it has detailed information on how to interact with the API. ++1. Confirm the end-user MFA experience. Here are a few things to check: + 1. Do users see their methods in [aka.ms/mfasetup](https://aka.ms/mfasetup)? + 1. Do users receive phone calls/text messages? + 1. Are they able to successfully authenticate using the above methods? + 1. Do users successfully receive Authenticator notifications? Are they able to approve these notifications? Is authentication successful? + 1. Are users able to authenticate successfully using Hardware OATH tokens? ++### Educate users +Ensure users know what to expect when they're moved to Azure MFA, including new authentication flows. You may also wish to instruct users to use the Azure AD Combined Registration portal ([aka.ms/mfasetup](https://aka.ms/mfasetup)) to manage their authentication methods rather than the User portal once migrations are complete. Any changes made to authentication methods in Azure AD won't propagate back to your on-premises environment. In a situation where you had to roll back to MFA Server, any changes users have made in Azure AD won't be available in the MFA Server User portal. 
++If you use third-party solutions that depend on Azure MFA Server for authentication (see [Authentication services](#authentication-services)), you'll want users to continue to make changes to their MFA methods in the User portal. These changes will be synced to Azure AD automatically. Once you've migrated these third-party solutions, you can move users to the Azure AD combined registration page. ++### Complete user migration +Repeat migration steps found in [Migrate user data](#migrate-user-data) and [Validate and test](#validate-and-test) sections until all user data is migrated. ++### Migrate MFA Server dependencies +Using the data points you collected in [Authentication services](#authentication-services), begin carrying out the various migrations necessary. Once this is completed, consider having users manage their authentication methods in the combined registration portal, rather than in the User portal on MFA server. ++### Update domain federation settings +Once you've completed user migrations, and moved all of your [Authentication services](#authentication-services) off of MFA Server, it's time to update your domain federation settings. After the update, Azure AD no longer sends MFA requests to your on-premises federation server. ++To configure Azure AD to ignore MFA requests to your on-premises federation server, install the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-&preserve-view=true) and set [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) to `rejectMfaByFederatedIdp`, as shown in the following example. 
++#### Request +<!-- { + "blockType": "request", + "name": "update_internaldomainfederation" +} +--> +``` http +PATCH https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/6601d14b-d113-8f64-fda2-9b5ddda18ecc +Content-Type: application/json +{ + "federatedIdpMfaBehavior": "rejectMfaByFederatedIdp" +} +``` +++#### Response +>**Note:** The response object shown here might be shortened for readability. +<!-- { + "blockType": "response", + "truncated": true, + "@odata.type": "microsoft.graph.internalDomainFederation" +} +--> +``` http +HTTP/1.1 200 OK +Content-Type: application/json +{ + "@odata.type": "#microsoft.graph.internalDomainFederation", + "id": "6601d14b-d113-8f64-fda2-9b5ddda18ecc", + "issuerUri": "http://contoso.com/adfs/services/trust", + "metadataExchangeUri": "https://sts.contoso.com/adfs/services/trust/mex", + "signingCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI", + "passiveSignInUri": "https://sts.contoso.com/adfs/ls", + "preferredAuthenticationProtocol": "wsFed", + "activeSignInUri": "https://sts.contoso.com/adfs/services/trust/2005/usernamemixed", + "signOutUri": "https://sts.contoso.com/adfs/ls", + "promptLoginBehavior": "nativeSupport", + "isSignedAuthenticationRequestRequired": true, + "nextSigningCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI", + "signingCertificateUpdateStatus": { + "certificateUpdateResult": "Success", + "lastRunDateTime": "2021-08-25T07:44:46.2616778Z" + }, + "federatedIdpMfaBehavior": "rejectMfaByFederatedIdp" +} +``` ++Set the **Staged Rollout for Azure MFA** to **Off**. Users will once again be redirected to your on-premises federation server for MFA. ++>[!NOTE] +>The update of the domain federation setting can take up to 24 hours to take effect. ++### Optional: Disable MFA Server User portal +Once you've completed migrating all user data, end users can begin using the Azure AD combined registration pages to manage MFA Methods. 
There are a couple of ways to prevent users from using the User portal in MFA Server: ++- Redirect your MFA Server User portal URL to [aka.ms/mfasetup](https://aka.ms/mfasetup) +- Clear the **Allow users to log in** checkbox under the **Settings** tab in the User portal section of MFA Server to prevent users from logging into the portal altogether. ++### Decommission MFA Server ++When you no longer need the Azure MFA server, follow your normal server deprecation practices. No special action is required in Azure AD to indicate MFA Server retirement. ++## Rollback plan ++If the upgrade had issues, follow these steps to roll back: ++1. Uninstall MFA Server 8.1. +1. Replace PhoneFactor.pfdata with the backup made before upgrading. ++ >[!NOTE] + >Any changes since the backup was made will be lost, but they should be minimal if the backup was made right before the upgrade. ++1. Run the installer for your previous version (for example, 8.0.x.x). +1. Configure Azure AD to accept MFA requests to your on-premises federation server. Use Graph PowerShell to set [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) to `enforceMfaByFederatedIdp`, as shown in the following example. ++ **Request** + <!-- { + "blockType": "request", + "name": "update_internaldomainfederation" + } + --> + ``` http + PATCH https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/6601d14b-d113-8f64-fda2-9b5ddda18ecc + Content-Type: application/json + { + "federatedIdpMfaBehavior": "enforceMfaByFederatedIdp" + } + ``` + + The following response object is shortened for readability. 
++ **Response** ++ <!-- { + "blockType": "response", + "truncated": true, + "@odata.type": "microsoft.graph.internalDomainFederation" + } + --> + ``` http + HTTP/1.1 200 OK + Content-Type: application/json + { + "@odata.type": "#microsoft.graph.internalDomainFederation", + "id": "6601d14b-d113-8f64-fda2-9b5ddda18ecc", + "issuerUri": "http://contoso.com/adfs/services/trust", + "metadataExchangeUri": "https://sts.contoso.com/adfs/services/trust/mex", + "signingCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI", + "passiveSignInUri": "https://sts.contoso.com/adfs/ls", + "preferredAuthenticationProtocol": "wsFed", + "activeSignInUri": "https://sts.contoso.com/adfs/services/trust/2005/usernamemixed", + "signOutUri": "https://sts.contoso.com/adfs/ls", + "promptLoginBehavior": "nativeSupport", + "isSignedAuthenticationRequestRequired": true, + "nextSigningCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI", + "signingCertificateUpdateStatus": { + "certificateUpdateResult": "Success", + "lastRunDateTime": "2021-08-25T07:44:46.2616778Z" + }, + "federatedIdpMfaBehavior": "enforceMfaByFederatedIdp" + } + ``` ++Users will no longer be redirected to your on-premises federation server for MFA, whether they're targeted by the Staged Rollout tool or not. Note this can take up to 24 hours to take effect. +++## Next steps ++- [Overview of how to migrate from MFA Server to Azure AD Multi-Factor Authentication](how-to-migrate-mfa-server-to-azure-mfa.md) +- [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md) |
active-directory | How To Migrate Mfa Server To Azure Mfa User Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md | Multi-factor authentication (MFA) helps secure your infrastructure and assets fr Microsoft's Multi-Factor Authentication Server (MFA Server) is no longer offered for new deployments. Customers who are using MFA Server should move to Azure AD Multi-Factor Authentication (Azure AD MFA). -There are several options for migrating your multi-factor authentication (MFA) from MFA Server to Azure Active Directory (Azure AD). -These include: +There are several options for migrating from MFA Server to Azure Active Directory (Azure AD): * Good: Moving only your [MFA service to Azure AD](how-to-migrate-mfa-server-to-azure-mfa.md). * Better: Moving your MFA service and user authentication to Azure AD, covered in this article. This process enables the iterative migration of users from MFA Server to Azure M Each step is explained in the subsequent sections of this article. >[!NOTE]->If you are planning on moving any applications to Azure Active Directory as a part of this migration, you should do so prior to your MFA migration. If you move all of your apps, you can skip sections of the MFA migration process. See the section on moving applications at the end of this article. +>If you're planning on moving any applications to Azure Active Directory as a part of this migration, you should do so prior to your MFA migration. If you move all of your apps, you can skip sections of the MFA migration process. See the section on moving applications at the end of this article. ## Process to migrate to Azure AD and user authentication Each step is explained in the subsequent sections of this article. 
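Several of the following sections reference the **federatedIdpMfaBehavior** setting on your federated domains. If you want to check the current value before making changes, a sketch like the following may help. It assumes the Microsoft Graph PowerShell SDK is installed, and `contoso.com` is a placeholder for your federated domain name.

```powershell
# Sketch: inspect the current federation MFA behavior for a federated domain.
# Assumes the Microsoft Graph PowerShell SDK; "contoso.com" is a placeholder.
Connect-MgGraph -Scopes "Domain.Read.All"

# A federated domain exposes its federation settings, including
# federatedIdpMfaBehavior, through its federation configuration.
Get-MgDomainFederationConfiguration -DomainId "contoso.com" |
    Select-Object Id, IssuerUri, FederatedIdpMfaBehavior
```

If the value is `enforceMfaByFederatedIdp` (or the legacy **SupportsMfa** flag is `$True`), MFA is likely enforced by AD FS claims rules, as discussed below.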
## Prepare groups and Conditional Access Groups are used in three capacities for MFA migration.-* **To iteratively move users to Azure AD MFA with staged rollout.** -Use a group created in Azure AD, also known as a cloud-only group. You can use Azure AD security groups or Microsoft 365 Groups for both moving users to MFA and for Conditional Access policies. For more information see creating an Azure AD security group, and this overview of Microsoft 365 Groups for administrators. ++* **To iteratively move users to Azure AD MFA with Staged Rollout.** ++ Use a group created in Azure AD, also known as a cloud-only group. You can use Azure AD security groups or Microsoft 365 Groups for both moving users to MFA and for Conditional Access policies. + >[!IMPORTANT]- >Nested and dynamic groups are not supported in the staged rollout process. Do not use these types of groups for your staged rollout effort. + >Nested and dynamic groups aren't supported for Staged Rollout. Don't use these types of groups. + * **Conditional Access policies**. -You can use either Azure AD or on-premises groups for conditional access. + You can use either Azure AD or on-premises groups for conditional access. + * **To invoke Azure AD MFA for AD FS applications with claims rules.**-This applies only if you have applications on AD FS. -This must be an on-premises Active Directory security group. Once Azure AD MFA is an additional authentication method, you can designate groups of users to use that method on each relying party trust. For example, you can call Azure AD MFA for users you have already migrated, and MFA Server for those not yet migrated. This is helpful both in testing, and during migration. + This step applies only if you use applications with AD FS. + + You must use an on-premises Active Directory security group. Once Azure AD MFA is an additional authentication method, you can designate groups of users to use that method on each relying party trust. 
For example, you can call Azure AD MFA for users you already migrated, and MFA Server for users who aren't migrated yet. This strategy is helpful both in testing and during migration. >[!NOTE] ->We do not recommend that you reuse groups that are used for security. When using a security group to secure a group of high-value apps via a Conditional Access policy, that should be the only use of that group. +>We don't recommend that you reuse groups that are used for security. Only use the security group to secure a group of high-value apps with a Conditional Access policy. ### Configure Conditional Access policies -If you are already using Conditional Access to determine when users are prompted for MFA, you won't need any changes to your policies. -As users are migrated to cloud authentication, they will start using Azure AD MFA as defined by your existing Conditional Access policies. +If you're already using Conditional Access to determine when users are prompted for MFA, you won't need any changes to your policies. +As users are migrated to cloud authentication, they'll start using Azure AD MFA as defined by your existing Conditional Access policies. They won't be redirected to AD FS and MFA Server anymore. -If your federated domain(s) have the **federatedIdpMfaBehavior** set to `enforceMfaByFederatedIdp` or **SupportsMfa** flag set to `$True` (the **federatedIdpMfaBehavior** overrides **SupportsMfa** when both are set), you are likely enforcing MFA on AD FS using claims rules. -In this case, you will need to analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals. +If your federated domains have the **federatedIdpMfaBehavior** set to `enforceMfaByFederatedIdp` or **SupportsMfa** flag set to `$True` (the **federatedIdpMfaBehavior** overrides **SupportsMfa** when both are set), you're likely enforcing MFA on AD FS by using claims rules. 
+In this case, you'll need to analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals. -If you need to configure Conditional Access policies, you need to do so before enabling staged rollout. +If necessary, configure Conditional Access policies before you enable Staged Rollout. For more information, see the following resources:+ * [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md) * [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md) ## Prepare AD FS -If you do not have any applications in AD FS that require MFA, you can skip this section and go to the section Prepare staged rollout. +If you don't have any applications in AD FS that require MFA, you can skip this section and go to the section [Prepare Staged Rollout](#prepare-staged-rollout). ### Upgrade AD FS server farm to 2019, FBL 4 -In AD FS 2019, Microsoft released new functionality that provides the ability to specify additional authentication methods for a relying party, such as an application. -This is done by using group membership to determine the authentication provider. +In AD FS 2019, Microsoft released new functionality to help specify additional authentication methods for a relying party, such as an application. +You can specify an additional authentication method by using group membership to determine the authentication provider. By specifying an additional authentication method, you can transition to Azure AD MFA while keeping other authentication intact during the transition. For more information, see [Upgrading to AD FS in Windows Server 2016 using a WID database](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server). 
The article covers both upgrading your farm to AD FS 2019 and upgrading your FBL ### Configure claims rules to invoke Azure AD MFA -Now that you have Azure AD MFA as an additional authentication method, you can assign groups of users to use Azure AD MFA. You do this by configuring claims rules, also known as *relying party trusts*. By using groups, you can control which authentication provider is called either globally or by application. For example, you can call Azure AD MFA for users who have registered for combined security information or had their phone numbers migrated, while calling MFA Server for those who have not. +Now that Azure AD MFA is an additional authentication method, you can assign groups of users to use Azure AD MFA by configuring claims rules, also known as *relying party trusts*. By using groups, you can control which authentication provider is called either globally or by application. For example, you can call Azure AD MFA for users who registered for combined security information or had their phone numbers migrated, while calling MFA Server for users whose phone numbers haven't migrated. >[!NOTE] >Claims rules require on-premises security group. Now that you have Azure AD MFA as an additional authentication method, you can a #### Back up existing rules Before configuring new claims rules, back up your existing rules. -You will need to restore these as a part of your clean up steps. +You'll need to restore claims rules as a part of your cleanup steps. Depending on your configuration, you may also need to copy the existing rule and append the new rules being created for the migration. To view existing global rules, run: + ```powershell Get-AdfsAdditionalAuthenticationRule ``` This command will move the logic from your current Access Control Policy into Ad #### Set up the group, and find the SID You will need to have a specific group in which you place users for whom you want to invoke Azure AD MFA. 
You will need to find the security identifier (SID) for that group.-To find the group SID use the following command, with your group name -`Get-ADGroup "GroupName"` +To find the group SID, run the following command and replace `GroupName` with your group name: ++```powershell +Get-ADGroup GroupName +```  #### Setting the claims rules to call Azure MFA -The following PowerShell cmdlets invoke Azure AD MFA for those in the group when they aren't on the corporate network. -You must replace "YourGroupSid" with the SID found by running the preceding cmdlet. +The following PowerShell cmdlets invoke Azure AD MFA for users in the group when they aren't on the corporate network. +You must replace `"YourGroupSid"` with the SID found by running the preceding cmdlet. Make sure you review the [How to Choose Additional Auth Providers in 2019](/windows-server/identity/ad-fs/overview/whats-new-active-directory-federation-services-windows-server#how-to-choose-additional-auth-providers-in-2019). Value=="YourGroupSid"]) => issue(Type = ### Configure Azure AD MFA as an authentication provider in AD FS In order to configure Azure AD MFA for AD FS, you must configure each AD FS server. -If you have multiple AD FS servers in your farm, you can configure them remotely using Azure AD PowerShell. +If multiple AD FS servers are in your farm, you can configure them remotely using Azure AD PowerShell. For step-by-step directions on this process, see [Configure the AD FS servers](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa#configure-the-ad-fs-servers). -Once you have configured the servers, you can add Azure AD MFA as an additional authentication method. +After you configure the servers, you can add Azure AD MFA as an additional authentication method.  -## Prepare staged rollout +## Prepare Staged Rollout -Now you are ready to enable the staged rollout feature. Staged rollout helps you to iteratively move your users to either PHS or PTA. 
+Now you're ready to enable [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md). Staged Rollout helps you to iteratively move your users to either PHS or PTA while also migrating their on-premises MFA settings. * Be sure to review the [supported scenarios](../hybrid/how-to-connect-staged-rollout.md#supported-scenarios). -* First you will need to do either the [prework for PHS](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-password-hash-sync) or the [prework for PTA](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-pass-through-authentication). We recommend PHS. -* Next you will do the [prework for seamless SSO](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-seamless-sso). -* [Enable the staged rollout of cloud authentication](../hybrid/how-to-connect-staged-rollout.md#enable-a-staged-rollout-of-a-specific-feature-on-your-tenant) for your selected authentication method. -* Add the group(s) you created for staged rollout. Remember that you will add users to groups iteratively, and that they cannot be dynamic groups or nested groups. +* First, you'll need to do either the [prework for PHS](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-password-hash-sync) or the [prework for PTA](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-pass-through-authentication). We recommend PHS. +* Next, you'll do the [prework for seamless SSO](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-seamless-sso). +* [Enable the Staged Rollout of cloud authentication](../hybrid/how-to-connect-staged-rollout.md#enable-a-staged-rollout-of-a-specific-feature-on-your-tenant) for your selected authentication method. +* Add the group(s) you created for Staged Rollout. Remember that you'll add users to groups iteratively, and that they can't be dynamic groups or nested groups. 
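As a sketch of the group setup described above, you could create the cloud-only security group for Staged Rollout and add pilot users with Microsoft Graph PowerShell. The group name and user principal name below are placeholders, not values from this article, and the Microsoft Graph PowerShell SDK is assumed.

```powershell
# Sketch: create a cloud-only security group for Staged Rollout and add a
# pilot user. Assumes the Microsoft Graph PowerShell SDK; names are placeholders.
Connect-MgGraph -Scopes "Group.ReadWrite.All", "User.Read.All"

# Staged Rollout groups must be cloud-only; they can't be dynamic or nested.
$group = New-MgGroup -DisplayName "MFA-StagedRollout-Pilot" `
    -MailEnabled:$false -MailNickname "mfastagedrolloutpilot" -SecurityEnabled:$true

# Add users iteratively as you migrate them in batches.
$user = Get-MgUser -UserId "pilotuser@contoso.com"
New-MgGroupMember -GroupId $group.Id -DirectoryObjectId $user.Id
```

You can reuse the same group for the Conditional Access policies described earlier, provided it isn't also used to secure other resources.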
## Register users for Azure MFA There are two ways to register users for Azure MFA: * Register for combined security (MFA and self-service password reset) * Migrate phone numbers from MFA Server -The Microsoft Authenticator app can be used as a passwordless sign in method as well as a second factor for MFA with either method. +Microsoft Authenticator can be used as a passwordless sign-in method and a second factor for MFA with either method. ### Register for combined security registration (recommended) We recommend having your users register for combined security information, which is a single place to register their authentication methods and devices for both MFA and SSPR. -While it is possible to migrate data from the MFA Server to Azure AD MFA, the following challenges occur: +While it's possible to migrate data from the MFA Server to Azure AD MFA, you face these challenges: * Only phone numbers can be migrated. * Authenticator apps will need to be reregistered. * Stale data can be migrated. Microsoft provides communication templates that you can provide to your users to guide them through the combined registration process. -These include templates for email, posters, table tents, and a variety of other assets. Users register their information at `https://aka.ms/mysecurityinfo`, which takes them to the combined security registration screen. +These include templates for email, posters, table tents, and various other assets. Users register their information at `https://aka.ms/mysecurityinfo`, which takes them to the combined security registration screen. We recommend that you [secure the security registration process with Conditional Access](../conditional-access/howto-conditional-access-policy-registration.md) that requires the registration to occur from a trusted device or location. For information on tracking registration statuses, see [Authentication method activity for Azure Active Directory](howto-authentication-methods-activity.md). 
> [!NOTE] > Users who MUST register their combined security information from a non-trusted location or device can be issued a Temporary Access Pass or alternatively, temporarily excluded from the policy. -### Migrate phone numbers from MFA Server +### Migrate MFA settings from MFA Server -While you can migrate users' registered MFA phone numbers and hardware tokens, you cannot migrate device registrations such as their Microsoft Authenticator app settings. -Migrating phone numbers can lead to stale numbers being migrated, and make users more likely to stay on phone-based MFA instead of setting up more secure methods like [passwordless sign-in with the Microsoft Authenticator app](howto-authentication-passwordless-phone.md). -We therefore recommend that regardless of the migration path you choose, that you have all users register for [combined security information](howto-registration-mfa-sspr-combined.md). -Combined security information enables users to also register for self-service password reset. +You can use the [MFA Server Migration utility](how-to-mfa-server-migration-utility.md) to synchronize registered MFA settings for users from MFA Server to Azure AD. +You can synchronize phone numbers, hardware tokens, and device registrations such as Microsoft Authenticator app settings. -If having users register their combined security information is not an option, it is possible to export the users along with their phone numbers from MFA Server and import the phone numbers into Azure AD. +### Migrate phone numbers from MFA Server ++If you only want to migrate registered MFA phone numbers, you can export the users along with their phone numbers from MFA Server and import the phone numbers into Azure AD. #### Export user phone numbers from MFA Server 1. Open the Multi-Factor Authentication Server admin console on the MFA Server. 1. Select **File** > **Export Users**.-3) Save the CSV file. The default name is Multi-Factor Authentication Users.csv. +1. 
Save the .csv file. The default name is Multi-Factor Authentication Users.csv. #### Interpret and format the .csv file -The .csv file contains a number of fields not necessary for migration and will need to be edited and formatted prior to importing the phone numbers into Azure AD. +The .csv file contains many fields not necessary for migration and will need to be edited and formatted prior to importing the phone numbers into Azure AD. -When opening the .csv file, columns of interest include Username, Primary Phone, Primary Country Code, Backup Country Code, Backup Phone, Backup Extension. You must interpret this data and format it, as necessary. +In the .csv file, columns of interest include Username, Primary Phone, Primary Country Code, Backup Country Code, Backup Phone, Backup Extension. You must interpret this data and format it, as necessary. #### Tips to avoid errors during import -* The CSV file will need to be modified prior to using the Authentication Methods API to import the phone numbers into Azure AD. +* The .csv file will need to be modified prior to using the Authentication Methods API to import the phone numbers into Azure AD. * We recommend simplifying the .csv to three columns: UPN, PhoneType, and PhoneNumber.  -* Make sure the exported MFA Server Username matches the Azure AD UserPrincipalName. If it does not, update the username in the CSV file to match what is in Azure AD, otherwise the user will not be found. +* Make sure the exported MFA Server Username matches the Azure AD UserPrincipalName. If it doesn't, update the username in the .csv file to match what is in Azure AD, otherwise the user won't be found. Users may have already registered phone numbers in Azure AD. -When importing the phone numbers using the Authentication Methods API, you must decide whether to overwrite the existing phone number or to add the imported number as an alternate phone number. 
+When importing the phone numbers using the Authentication Methods API, you must decide whether to overwrite the existing phone number, or to add the imported number as an alternate phone number. -The following PowerShell cmdlets takes the CSV file you supply and add the exported phone numbers as a phone number for each UPN using the Authentication Methods API. You must replace "myPhones" with the name of your CSV file. +The following PowerShell cmdlet takes the .csv file you supply and adds the exported phone numbers as a phone number for each UPN using the Authentication Methods API. You must replace "myPhones" with the name of your .csv file. ```powershell $csv = import-csv myPhones.csv $csv|% { New-MgUserAuthenticationPhoneMethod -UserId $_.UPN -phoneType $_.PhoneType -phoneNumber $_.PhoneNumber} ``` -For more information about managing users' authentication methods, see [Manage authentication methods for Azure AD Multi-Factor Authentication](howto-mfa-userdevicesettings.md). +For more information about managing authentication methods, see [Manage authentication methods for Azure AD Multi-Factor Authentication](howto-mfa-userdevicesettings.md). ### Add users to the appropriate groups * If you created new conditional access policies, add the appropriate users to those groups. * If you created on-premises security groups for claims rules, add the appropriate users to those groups. -* Only after you have added users to the appropriate conditional access rules, add users to the group that you created for staged rollout. Once done, they will begin to use the Azure authentication method that you selected (PHS or PTA) and Azure AD MFA when they are required to perform multi-factor authentication. +* Only after you add users to the appropriate conditional access rules, add users to the group that you created for Staged Rollout. 
Once done, they'll begin to use the Azure authentication method that you selected (PHS or PTA) and Azure AD MFA when they are required to perform MFA. > [!IMPORTANT] -> Nested and dynamic groups are not supported in the staged rollout process. Do not use these types of groups. +> Nested and dynamic groups aren't supported for Staged Rollout. Do not use these types of groups. -We do not recommend that you reuse groups that are used for security. Therefore, if you are using a security group to secure a group of high-value apps via a Conditional Access policy, that should be the only use of that group. +We don't recommend that you reuse groups that are used for security. Therefore, if you're using a security group to secure a group of high-value apps via a Conditional Access policy, that should be the only use of that group. ## Monitoring -A number of [Azure Monitor workbooks](../reports-monitoring/howto-use-azure-monitor-workbooks.md) and usage & insights reports are available to monitor your deployment. -These can be found in Azure AD in the navigation pane under **Monitoring**. +Many [Azure Monitor workbooks](../reports-monitoring/howto-use-azure-monitor-workbooks.md) and **Usage & Insights** reports are available to monitor your deployment. +These reports can be found in Azure AD in the navigation pane under **Monitoring**. -### Monitoring staged rollout +### Monitoring Staged Rollout In the [workbooks](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) section, select **Public Templates**. Under **Hybrid Auth** section select the **Groups, Users and Sign-ins in Staged Rollout** workbook. -This workbook can be used to monitor the following: +This workbook can be used to monitor the following activities: * Users and groups added to Staged Rollout. * Users and groups removed from Staged Rollout.-* Sign-in failures for users in staged rollout, and the reasons for failures. 
+* Sign-in failures for users in Staged Rollout, and the reasons for failures. ### Monitoring Azure MFA registration Azure MFA registration can be monitored using the [Authentication methods usage & insights report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/AuthenticationMethodsMenuBlade/AuthMethodsActivity/menuId/AuthMethodsActivity). This report can be found in Azure AD. Select **Monitoring**, then select **Usage & insights**. Detailed Azure MFA registration information can be found on the Registration tab ### Monitoring app sign-in health -Monitor applications you have moved to Azure AD with the App sign-in health workbook or the application activity usage report. +Monitor applications you moved to Azure AD with the App sign-in health workbook or the application activity usage report. * **App sign-in health workbook**. See [Monitoring application sign-in health for resilience](../fundamentals/monitor-sign-in-health-for-resilience.md) for detailed guidance on using this workbook. * **Azure AD application activity usage report**. This [report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsageAndInsightsMenuBlade/Azure%20AD%20application%20activity) can be used to view the successful and failed sign-ins for individual applications as well as the ability to drill down and view sign-in activity for a specific application. ## Clean up tasks -Once you have moved all your users to Azure AD cloud authentication and Azure MFA, you should be ready to decommission your MFA Server. +After you move all users to Azure AD cloud authentication and Azure MFA, you are ready to decommission your MFA Server. We recommend reviewing MFA Server logs to ensure no users or applications are using it before you remove the server. 
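To review recent MFA Server activity before decommissioning, a sketch like the following may help. The log path is an assumption based on the default installation location of MFA Server; adjust it for your environment.

```powershell
# Sketch: check for recent authentication activity in the MFA Server log
# before decommissioning. The path below is an assumed default install
# location; adjust it for your environment.
$log = "C:\Program Files\Multi-Factor Authentication Server\Logs\MultiFactorAuthSvc.log"

# Surface the most recent entries so you can confirm that no users or
# applications still authenticate through MFA Server.
Get-Content $log -Tail 200
```

If the tail of the log shows only old entries, it's a good indication that traffic has moved to Azure AD MFA.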
### Convert your domains to managed authentication -You should now [convert your federated domains in Azure AD to managed](../hybrid/migrate-from-federation-to-cloud-authentication.md#convert-domains-from-federated-to-managed) and remove the staged rollout configuration. -This ensures new users use cloud authentication without being added to the migration groups. +You should now [convert your federated domains in Azure AD to managed](../hybrid/migrate-from-federation-to-cloud-authentication.md#convert-domains-from-federated-to-managed) and remove the Staged Rollout configuration. +This conversion ensures new users use cloud authentication without being added to the migration groups. ### Revert claims rules on AD FS and remove MFA Server authentication provider -Follow the steps under [Configure claims rules to invoke Azure AD MFA](#configure-claims-rules-to-invoke-azure-ad-mfa) to revert back to the backed up claims rules and remove any AzureMFAServerAuthentication claims rules. +Follow the steps under [Configure claims rules to invoke Azure AD MFA](#configure-claims-rules-to-invoke-azure-ad-mfa) to revert the claims rules and remove any AzureMFAServerAuthentication claims rules. -For example, remove the following from the rule(s): +For example, remove the following section from the rule(s): ```console c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == Value=="YourGroupSid"]) => issue(Type = "AzureMfaServerAuthentication"); ``` - ### Disable MFA Server as an authentication provider in AD FS This change ensures only Azure MFA is used as an authentication provider. Possible considerations when decommissioning the MFA Server include: ## Move application authentication to Azure Active Directory -If you migrate all your application authentication along with your MFA and user authentication, you will be able to remove significant portions of your on-premises infrastructure, reducing costs and risks. 
+If you migrate all your application authentication along with your MFA and user authentication, you'll be able to remove significant portions of your on-premises infrastructure, reducing costs and risks. If you move all application authentication, you can skip the [Prepare AD FS](#prepare-ad-fs) stage and simplify your MFA migration. The process for moving all application authentication is shown in the following diagram.  -If it is not possible to move all your applications prior to the migration, move applications that you can before starting. -For more information on migrating applications to Azure, see [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md). +If you can't move all your applications before the migration, move as many as possible before you start. +For more information about migrating applications to Azure, see [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md). ## Next steps -- [Migrate from Microsoft MFA Server to Azure multi-factor authentication (Overview)](how-to-migrate-mfa-server-to-azure-mfa.md)+- [Migrate from Microsoft MFA Server to Azure MFA (Overview)](how-to-migrate-mfa-server-to-azure-mfa.md) - [Migrate applications from Windows Active Directory to Azure Active Directory](../manage-apps/migrate-application-authentication-to-azure-active-directory.md) - [Plan your cloud authentication strategy](../fundamentals/active-directory-deployment-plans.md) |
active-directory | How To Migrate Mfa Server To Azure Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md | There are multiple possible end states to your migration, depending on your goal | <br> | Goal: Decommission MFA Server ONLY | Goal: Decommission MFA Server and move to Azure AD Authentication | Goal: Decommission MFA Server and AD FS | |||-|--| |MFA provider | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. |-|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** Seamless Single Sign-On (SSO).| Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** SSO. | +|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** Seamless single sign-on (SSO).| Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** SSO. | |Application authentication | Continue to use AD FS authentication for your applications. | Continue to use AD FS authentication for your applications. | Move apps to Azure AD before migrating to Azure AD Multi-Factor Authentication. | If you can, move both your multifactor authentication and your user authentication to Azure. For step-by-step guidance, see [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md). 
Microsoft's MFA server can be integrated with many systems, and you must evalu ### Migrating MFA user information -Common ways to think about moving users in batches include moving them by regions, departments, or roles such as administrators. -Whichever strategy you choose, ensure that you move users iteratively, starting with test and pilot groups, and that you have a rollback plan in place. +Common ways to think about moving users in batches include moving them by regions, departments, or roles such as administrators. You should move user accounts iteratively, starting with test and pilot groups, and make sure you have a rollback plan in place. -While you can migrate users' registered multifactor authentication phone numbers and hardware tokens, you can't migrate device registrations such as their Microsoft Authenticator app settings. -Users will need to register and add a new account on the Authenticator app and remove the old account. +You can use the [MFA Server Migration Utility](how-to-mfa-server-migration-utility.md) to synchronize MFA data stored in the on-premises Azure MFA Server to Azure AD MFA and use [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md) to reroute users to Azure MFA. Staged Rollout helps you test without making any changes to your domain federation settings. To help users to differentiate the newly added account from the old account linked to the MFA Server, make sure the Account name for the Mobile App on the MFA Server is named in a way to distinguish the two accounts. -For example, the Account name that appears under Mobile App on the MFA Server has been renamed to On-Premises MFA Server. -The account name on the Authenticator App will change with the next push notification to the user. +For example, the Account name that appears under Mobile App on the MFA Server has been renamed to **On-Premises MFA Server**. +The account name on Microsoft Authenticator will change with the next push notification to the user. 
Migrating phone numbers can also lead to stale numbers being migrated and make users more likely to stay on phone-based MFA instead of setting up more secure methods like Microsoft Authenticator in passwordless mode. We therefore recommend that, regardless of the migration path you choose, you have all users register for [combined security information](howto-registration-mfa-sspr-combined.md). - #### Migrating hardware security keys -Azure AD provides support for OATH hardware tokens. -In order to migrate the tokens from MFA Server to Azure AD Multi-Factor Authentication, the [tokens must be uploaded into Azure AD using a CSV file](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview), commonly referred to as a "seed file". +Azure AD provides support for OATH hardware tokens. You can use the [MFA Server Migration Utility](how-to-mfa-server-migration-utility.md) to synchronize MFA settings between MFA Server and Azure AD MFA and use [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md) to test user migrations without changing domain federation settings. ++If you only want to migrate OATH hardware tokens, you need to [upload tokens to Azure AD by using a CSV file](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview), commonly referred to as a "seed file". The seed file contains the secret keys, token serial numbers, and other information needed to upload the tokens into Azure AD. The secret keys can't be exported from MFA Server, so if you no longer have the seed file, contact your hardware vendor for support. The MFA Server Web Service SDK can be used to export the serial number for any OATH tokens assigned to a given user. -Using this information along with the seed file, IT admins can import the tokens into Azure AD and assign the OATH token to the specified user based on the serial number. 
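The seed-file upload described above can be assembled programmatically. A minimal sketch follows; the column names and sample values here are illustrative assumptions, not the documented schema — confirm the exact header against the linked OATH tokens article before uploading.

```python
import csv
import io

# Illustrative rows: serial numbers can come from the MFA Server Web Service SDK
# export; secret keys must come from the vendor-supplied seed file.
tokens = [
    {
        "upn": "user1@contoso.com",
        "serial number": "1234567",
        "secret key": "abcdef2234567abcdef2234567abcdef",
        "time interval": "30",
        "manufacturer": "Contoso",
        "model": "HardwareKey",
    },
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(tokens[0]))
writer.writeheader()
writer.writerows(tokens)
print(buf.getvalue())
```

Generating the file from the SDK export rather than by hand reduces the chance of pairing a serial number with the wrong secret key.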
+You can use this information along with the seed file to import the tokens into Azure AD and assign the OATH token to the specified user based on the serial number. The user will also need to be contacted at the time of import to supply OTP information from the device to complete the registration. -Refer to the GetUserInfo > userSettings > OathTokenSerialNumber topic in the Multi-Factor Authentication Server help file on your MFA Server. -+Refer to the help file topic **GetUserInfo** > **userSettings** > **OathTokenSerialNumber** in Multi-Factor Authentication Server on your MFA Server. ### More migrations The decision to migrate from MFA Server to Azure AD Multi-Factor Authentication - Your willingness to use Azure AD authentication for users - Your willingness to move your applications to Azure AD -Because MFA Server is deeply integrated with both applications and user authentication, you may want to consider moving both of those functions to Azure as a part of your MFA migration, and eventually decommissioning AD FS. +Because MFA Server is integral to both application and user authentication, consider moving both of those functions to Azure as a part of your MFA migration, and eventually decommission AD FS. Our recommendations: - Use Azure AD for authentication as it enables more robust security and governance - Move applications to Azure AD if possible -To select the user authentication method best for your organization, see [Choose the right authentication method for your Azure AD hybrid identity solution](../hybrid/choose-ad-authn.md). +To select the best user authentication method for your organization, see [Choose the right authentication method for your Azure AD hybrid identity solution](../hybrid/choose-ad-authn.md). We recommend that you use Password Hash Synchronization (PHS). 
### Passwordless authentication -As part of enrolling users to use Microsoft Authenticator as a second factor, we recommend you enable passwordless phone sign-in as part of their registration. For more information, including other passwordless methods such as FIDO and Windows Hello for Business, visit [Plan a passwordless authentication deployment with Azure AD](howto-authentication-passwordless-deployment.md#plan-for-and-deploy-microsoft-authenticator). +As part of enrolling users to use Microsoft Authenticator as a second factor, we recommend you enable passwordless phone sign-in as part of their registration. For more information, including other passwordless methods such as FIDO2 security keys and Windows Hello for Business, visit [Plan a passwordless authentication deployment with Azure AD](howto-authentication-passwordless-deployment.md#plan-for-and-deploy-microsoft-authenticator). ### Microsoft Identity Manager self-service password reset Check with the service provider for supported product versions and their capabil - The NPS extension doesn't use Azure AD Conditional Access policies. If you stay with RADIUS and use the NPS extension, all authentication requests going to NPS will require the user to perform MFA. - Users must register for Azure AD Multi-Factor Authentication prior to using the NPS extension. Otherwise, the extension fails to authenticate the user, which can generate help desk calls. - When the NPS extension invokes MFA, the MFA request is sent to the user's default MFA method. - - Because the sign-in happens on non-Microsoft applications, it's unlikely that the user will see visual notification that multifactor authentication is required and that a request has been sent to their device. + - Because the sign-in happens on non-Microsoft applications, the user often can't see a visual notification that multifactor authentication is required and that a request has been sent to their device. 
- During the multifactor authentication requirement, the user must have access to their default authentication method to complete the requirement. They can't choose an alternative method. Their default authentication method will be used even if it's disabled in the tenant authentication methods and multifactor authentication policies. - Users can change their default multifactor authentication method in the Security Info page (aka.ms/mysecurityinfo). - Available MFA methods for RADIUS clients are controlled by the client systems sending the RADIUS access requests.- - MFA methods that require user input after they enter a password can only be used with systems that support access-challenge responses with RADIUS. Input methods might include OTP, hardware OATH tokens or the Microsoft Authenticator application. + - MFA methods that require user input after they enter a password can only be used with systems that support access-challenge responses with RADIUS. Input methods might include OTP, hardware OATH tokens or Microsoft Authenticator. - Some systems might limit available multifactor authentication methods to Microsoft Authenticator push notifications and phone calls. - >[!NOTE] >The password encryption algorithm used between the RADIUS client and the NPS system, and the input methods the client can use affect which authentication methods are available. For more information, see [Determine which authentication methods your users can use](howto-mfa-nps-extension.md). Others might include: - [Moving to Azure AD Multi-Factor Authentication with federation](how-to-migrate-mfa-server-to-azure-mfa-with-federation.md) - [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md)-+- [How to use the MFA Server Migration Utility](how-to-mfa-server-migration-utility.md) |
active-directory | Active Directory Certificate Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-certificate-credentials.md | To compute the assertion, you can use one of the many JWT libraries in the langu | | | | `alg` | Should be **RS256** | | `typ` | Should be **JWT** |-| `x5t` | Base64-encoded SHA-1 thumbprint of the X.509 certificate thumbprint. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ=` (Base64). | +| `x5t` | Base64-encoded SHA-1 thumbprint of the X.509 certificate. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ=` (Base64). | ### Claims (payload) |
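The `x5t` computation described in the table is mechanical: decode the hex thumbprint to raw bytes, then Base64-encode those bytes. A minimal Python sketch (not part of the original article) reproducing the table's example:

```python
import base64
import binascii

def x5t_from_thumbprint(hex_thumbprint: str) -> str:
    # Decode the hex SHA-1 thumbprint to its 20 raw bytes, then Base64-encode.
    return base64.b64encode(binascii.unhexlify(hex_thumbprint)).decode("ascii")

print(x5t_from_thumbprint("84E05C1D98BCE3A5421D225B140B36E86A3D5534"))
# → hOBcHZi846VCHSJbFAs26Go9VTQ=
```

Note that RFC 7515 specifies base64url encoding for the `x5t` header; the table's example value contains no `+` or `/` characters, so standard Base64 as above matches it exactly, while strict unpadded base64url would drop the trailing `=`.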
active-directory | Msal Error Handling Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-dotnet.md | do } else if (retryAfter.Date.HasValue) {- delay = retryAfter.Date.Value.Offset; + delay = (retryAfter.Date.Value - DateTimeOffset.Now).TotalMilliseconds; } } } |
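The corrected line above computes the wait as the difference between the `Retry-After` date and the current time, instead of misreading the date's UTC offset. A language-neutral sketch of the same logic in Python (the helper name and the clamp-to-zero for past dates are my own additions, not MSAL API):

```python
from datetime import datetime, timedelta, timezone

def retry_delay_ms(retry_after_delta=None, retry_after_date=None, now=None):
    # Mirrors the corrected C# logic: a delta header is used as-is,
    # a date header becomes (date - now) -- never the date's offset.
    now = now or datetime.now(timezone.utc)
    if retry_after_delta is not None:
        return retry_after_delta.total_seconds() * 1000
    if retry_after_date is not None:
        # Clamp to zero so a date already in the past means "retry now".
        return max((retry_after_date - now).total_seconds() * 1000, 0)
    return 0

# With a Retry-After date 30 seconds in the future, the delay is 30,000 ms.
fixed_now = datetime(2022, 8, 30, tzinfo=timezone.utc)
print(retry_delay_ms(retry_after_date=fixed_now + timedelta(seconds=30), now=fixed_now))
# → 30000.0
```

The original bug is subtle: `Offset` on a `DateTimeOffset` is the time-zone offset (for example, two hours), not the time remaining until that instant.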
active-directory | Check Status Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-status-workflow.md | + + Title: Check status of a Lifecycle workflow - Azure Active Directory +description: This article guides a user on checking the status of a Lifecycle workflow ++++++ Last updated : 03/10/2022++++++# Check the status of a workflow (Preview) ++When a workflow is created, it's important to check its status and run history to make sure it ran properly for the users it processed, both on schedule and on demand. To get information about the status of workflows, Lifecycle Workflows allows you to check run and user processing history. This history also gives you summaries to see how often a workflow has run, and who it ran successfully for. You're also able to check the status of both the workflow and its tasks. Checking the status of workflows and their tasks allows you to troubleshoot potential problems that could come up during their execution. +++## Run workflow history using the Azure portal ++You're able to retrieve run information of a workflow using Lifecycle Workflows. To check the runs of a workflow using the Azure portal, you would do the following steps: ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Select **Azure Active Directory** and then select **Identity Governance**. ++1. On the left menu, select **Lifecycle Workflows (Preview)**. ++1. On the Lifecycle Workflows overview page, select **Workflows (Preview)**. ++1. Select the workflow you want to see run history for. ++1. On the workflow overview screen, select **Audit logs**. ++1. On the history page, select the **Runs** button. ++1. Here you'll see a summary of workflow runs. + :::image type="content" source="media/check-status-workflow/run-list.png" alt-text="Screenshot of a workflow Runs list."::: +1. 
The runs summary cards include the total number of processed runs, the number of successful runs, the number of failed runs, and the total number of failed tasks. ++## User workflow history using the Azure portal ++To get more information than just the runs summary for a workflow, you're also able to get information about users processed by a workflow. To check the status of users a workflow has processed using the Azure portal, you would do the following steps: ++ +1. In the left menu, select **Lifecycle Workflows (Preview)**. ++1. Select **Workflows (Preview)**. ++1. Select the workflow you want to see user processing information for. ++1. On the workflow overview screen, select **Workflow history (Preview)**. + :::image type="content" source="media/check-status-workflow/workflow-history.png" alt-text="Screenshot of a workflow overview history."::: +1. On the workflow history page, you're presented with a summary of every user processed by the workflow along with counts of successful and failed users and tasks. + :::image type="content" source="media/check-status-workflow/workflow-history-list.png" alt-text="Screenshot of a list of workflow summaries."::: +1. By selecting the total tasks for a user, you're able to see which tasks have successfully completed or are currently in progress. + :::image type="content" source="media/check-status-workflow/task-history-status.png" alt-text="Screenshot of workflow task history status."::: +1. By selecting failed tasks, you're able to see which tasks have failed for a specific user. + :::image type="content" source="media/check-status-workflow/task-history-failed.png" alt-text="Screenshot of workflow failed tasks history."::: +1. By selecting unprocessed tasks, you're able to see which tasks are unprocessed. 
+ :::image type="content" source="media/check-status-workflow/task-history-unprocessed.png" alt-text="Screenshot of unprocessed tasks of a workflow."::: +++## User workflow history using Microsoft Graph ++### List user processing results using Microsoft Graph ++To view a status list of users processed by a workflow, known as **userProcessingResults**, you'd make the following API call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults +``` ++By default, **userProcessingResults** returns only information from the last 7 days. To get information as far back as 30 days, you would run the following API call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults?$filter=<Date range for processing results> +``` ++An example of a call to get **userProcessingResults** for a month would be as follows: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults?$filter=startedDateTime ge 2022-05-23T00:00:00Z and startedDateTime le 2022-06-22T00:00:00Z +``` ++### User processing results summary using Microsoft Graph ++When multiple user events are processed by a workflow, running the **userProcessingResults** call can return an overwhelming amount of information. To get a summary of information such as total users and tasks, and failed users and tasks, Lifecycle Workflows provides a call to get count totals. 
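The `$filter` placeholder above takes an OData date-range expression on `startedDateTime`. A small Python sketch (the URL shape is copied from the calls above; the helper itself is not part of the article) that builds the filtered call for a given window:

```python
from datetime import datetime, timedelta, timezone

GRAPH = "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows"

def processing_results_url(workflow_id: str, end: datetime, days_back: int) -> str:
    # Build the userProcessingResults call with a startedDateTime window
    # reaching days_back days before `end` (the article allows up to 30).
    start = end - timedelta(days=days_back)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    filt = (f"startedDateTime ge {start.strftime(fmt)} "
            f"and startedDateTime le {end.strftime(fmt)}")
    return f"{GRAPH}/workflows/{workflow_id}/userProcessingResults?$filter={filt}"

print(processing_results_url("<workflowId>", datetime(2022, 6, 22, tzinfo=timezone.utc), 30))
```

With an end date of 2022-06-22 and a 30-day window, this reproduces the month-long example call shown above.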
++To view a summary in count form, you would run the following API call: +```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/summary(<Date Range>) +``` ++For example, to get the summary between May 1 and May 30, you would run the following call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z) +``` ++### List task processing results of a given user processing result ++To view the task processing results of a given user processing result, you'd make the following API call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/<userProcessingResultId>/taskProcessingResults/ +``` ++## Run workflow history via Microsoft Graph ++### List runs using Microsoft Graph ++With Microsoft Graph, you're able to get full details of workflow and user processing run information. ++To view a list of runs, you'd make the following API call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs +``` ++### Get a summary of runs using Microsoft Graph ++To get a summary of runs for a workflow, which includes detailed information for counts of failed runs and tasks, along with successful runs and tasks for a time range, you'd make the following API call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/summary(startDateTime=<time>,endDateTime=<time>) +``` +For example, to get a summary of runs of a workflow for May 2022, you'd make the following call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-31T00:00:00Z) +``` ++### List user and task processing results of a given run using Microsoft Graph ++With Lifecycle Workflows, 
you're able to check the status of each user and task processed by a workflow as part of a run. ++ +You're also able to use **userProcessingResults** with the run call to get users processed for a run by making the following API call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/<runId>/userProcessingResults +``` ++This API call will also return a **userProcessingResults ID** value, which can be used to retrieve task processing information in the following call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/<runId>/userProcessingResults/<userProcessingResultId>/taskProcessingResults +``` ++> [!NOTE] +> A workflow must have activity in the past 7 days to get a **userProcessingResults ID**. If there has not been any activity in that time frame, the **userProcessingResults** call will not return a value. +++## Next steps ++- [Manage workflow versions](manage-workflow-tasks.md) +- [Delete Lifecycle Workflows](delete-lifecycle-workflow.md) |
active-directory | Configure Logic App Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md | + + Title: Configure a Logic App for Lifecycle Workflow use +description: Configure an Azure Logic App for use with Lifecycle Workflows ++++ Last updated : 08/28/2022++++++# Configure a Logic App for Lifecycle Workflow use (Preview) ++Before you can use an existing Azure Logic App with the custom task extension feature of Lifecycle Workflows, it must first be made compatible. This reference guide provides a list of steps that must be taken to make the Azure Logic App compatible with the custom task extension. For a simpler guide on creating a new Logic App with the custom task extension via the Lifecycle Workflows portal, see [Trigger Logic Apps based on custom task extensions (preview)](trigger-custom-task.md). ++## Configure existing Logic Apps for LCW use with Microsoft Graph ++Making an Azure Logic App compatible with the **Custom Task Extension** requires the following steps: ++- Configure the logic app trigger +- Configure the callback action (only applicable to the callback scenario) +- Enable system-assigned managed identity +- Configure AuthZ policies ++> [!NOTE] +> For our public preview we will provide a UI and a deployment script that will automate the following steps. ++To configure them, follow these steps: ++1. Open the Azure Logic App you want to use with Lifecycle Workflows. Logic Apps may greet you with an introduction screen, which you can close with the X in the upper right corner. ++1. On the left of the screen select **Logic App code view**. ++1. 
In the editor paste the following code: + ```LCW Logic App code view template + { + "definition": { + "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#", + "actions": { + "HTTP": { + "inputs": { + "authentication": { + "audience": "https://graph.microsoft.com", + "type": "ManagedServiceIdentity" + }, + "body": { + "data": { + "operationStatus": "Completed" + }, + "source": "sample", + "type": "lifecycleEvent" + }, + "method": "POST", + "uri": "https://graph.microsoft.com/beta@{triggerBody()?['data']?['callbackUriPath']}" + }, + "runAfter": {}, + "type": "Http" + } + }, + "contentVersion": "1.0.0.0", + "outputs": {}, + "parameters": {}, + "triggers": { + "manual": { + "inputs": { + "schema": { + "properties": { + "data": { + "properties": { + "callbackUriPath": { + "description": "CallbackUriPath used for Resume Action", + "title": "Data.CallbackUriPath", + "type": "string" + }, + "subject": { + "properties": { + "displayName": { + "description": "DisplayName of the Subject", + "title": "Subject.DisplayName", + "type": "string" + }, + "email": { + "description": "Email of the Subject", + "title": "Subject.Email", + "type": "string" + }, + "id": { + "description": "Id of the Subject", + "title": "Subject.Id", + "type": "string" + }, + "manager": { + "properties": { + "displayName": { + "description": "DisplayName parameter for Manager", + "title": "Manager.DisplayName", + "type": "string" + }, + "email": { + "description": "Mail parameter for Manager", + "title": "Manager.Mail", + "type": "string" + }, + "id": { + "description": "Id parameter for Manager", + "title": "Manager.Id", + "type": "string" + } + }, + "type": "object" + }, + "userPrincipalName": { + "description": "UserPrincipalName of the Subject", + "title": "Subject.UserPrincipalName", + "type": "string" + } + }, + "type": "object" + }, + "task": { + "properties": { + "displayName": { + "description": "DisplayName for Task Object", + 
"title": "Task.DisplayName", + "type": "string" + }, + "id": { + "description": "Id for Task Object", + "title": "Task.Id", + "type": "string" + } + }, + "type": "object" + }, + "taskProcessingResult": { + "properties": { + "createdDateTime": { + "description": "CreatedDateTime for TaskProcessingResult Object", + "title": "TaskProcessingResult.CreatedDateTime", + "type": "string" + }, + "id": { + "description": "Id for TaskProcessingResult Object", + "title": "TaskProcessingResult.Id", + "type": "string" + } + }, + "type": "object" + }, + "workflow": { + "properties": { + "displayName": { + "description": "DisplayName for Workflow Object", + "title": "Workflow.DisplayName", + "type": "string" + }, + "id": { + "description": "Id for Workflow Object", + "title": "Workflow.Id", + "type": "string" + }, + "workflowVerson": { + "description": "WorkflowVersion for Workflow Object", + "title": "Workflow.WorkflowVersion", + "type": "integer" + } + }, + "type": "object" + } + }, + "type": "object" + }, + "source": { + "description": "Context in which an event happened", + "title": "Request.Source", + "type": "string" + }, + "type": { + "description": "Value describing the type of event related to the originating occurrence.", + "title": "Request.Type", + "type": "string" + } + }, + "type": "object" + } + }, + "kind": "Http", + "type": "Request" + } + } + }, + "parameters": {} + } + + ``` +1. Select Save. ++1. Switch to the **Logic App designer** and inspect the configured trigger and callback action. To build your custom business logic, add other actions between the trigger and callback action. If you're only interested in the fire-and-forget scenario, you may remove the callback action. ++1. On the left of the screen select **Identity**. ++1. Under the system assigned tab enable the status to register it with Azure Active Directory. ++1. Select Save. ++1. For Logic Apps authorization policy, we'll need the managed identities **Application ID**. 
Since the Azure portal only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications in the Azure AD Portal** to find the required Application ID. ++1. Go back to the logic app you created, and select **Authorization**. ++1. Create a new authorization policy based on the table below: ++ |Claim |Value | + ||| + |Issuer | https://sts.windows.net/(Tenant ID)/ | + |Audience | Application ID of your Logic Apps Managed Identity | + |appid | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 | +++1. Save the Authorization policy. +> [!NOTE] +> Due to a current bug in the Logic Apps UI you may have to save the authorization policy after each claim before adding another. ++> [!CAUTION] +> Please pay attention to the details as minor differences can lead to problems later. +- For Issuer, ensure you included the slash after your Tenant ID +- For Audience, ensure you're using the Application ID and not the Object ID of your Managed Identity +- For appid, ensure the custom claim is "appid" in all lowercase. The appid value represents Lifecycle Workflows and is always the same. ++ ++## Linking Lifecycle Workflows with Logic Apps using Microsoft Graph ++After configuring the Logic App, you can integrate it with Lifecycle Workflows. As outlined in the high-level steps, you first create the customTaskExtension, and afterwards you can reference it in the "Run a custom task extension" task. 
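The caution list above can be checked mechanically before you save the policy. A small sketch (the helper and its messages are hypothetical; the fixed `appid` value comes from the table above):

```python
import re

# Fixed Lifecycle Workflows appid value from the claims table.
LCW_APP_ID = "ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7"
GUID = r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"

def check_authz_policy(issuer, audience, appid):
    # Return a list of problems with the planned authorization policy claims.
    problems = []
    if not re.fullmatch(rf"https://sts\.windows\.net/{GUID}/", issuer):
        problems.append("Issuer must be https://sts.windows.net/<Tenant ID>/ with the trailing slash")
    if not re.fullmatch(GUID, audience):
        problems.append("Audience must be the managed identity's Application ID (a GUID)")
    if appid != LCW_APP_ID:
        problems.append("appid must be lowercase and always the fixed Lifecycle Workflows value")
    return problems
```

Note that the Application ID and Object ID are both GUIDs, so the Audience check can only validate the shape; choosing the right GUID is still up to you.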
++The API call for creating a customTaskExtension is as follows: +```http +POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/customTaskExtensions +Content-type: application/json ++{ + "displayName": "<Custom task extension name>", + "description": "<description for custom task extension>", + "callbackConfiguration": { + "@odata.type": "#microsoft.graph.identityGovernance.customTaskExtensionCallbackConfiguration", + "durationBeforeTimeout": "PT1H" + }, + "endpointConfiguration": { + "@odata.type": "#microsoft.graph.logicAppTriggerEndpointConfiguration", + "subscriptionId": "<Your Azure subscription>", + "resourceGroupName": "<Resource group where the Logic App is located>", + "logicAppWorkflowName": "<Logic App workflow name>" + }, + "authenticationConfiguration": { + "@odata.type": "#microsoft.graph.azureAdTokenAuthentication", + "resourceId": "f9c5dc6b-d72b-4226-8ccd-801f7a290428" + }, + "clientConfiguration": { + "timeoutInMilliseconds": 1000, + "maximumRetries": 1 + } +} +``` +> [!NOTE] +> To create a custom task extension instance that does not wait for a response from the logic app, remove the **callbackConfiguration** parameter. 
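The request body above is plain JSON, so it's easy to template. A hedged Python sketch (the function name and parameters are my own, not a Microsoft SDK) that builds the body and applies the note about dropping **callbackConfiguration** for fire-and-forget extensions:

```python
def build_custom_task_extension(name, description, subscription_id,
                                resource_group, logic_app_name,
                                resource_id, wait_for_callback=True):
    # Mirrors the POST body from the article; the placeholders become parameters.
    body = {
        "displayName": name,
        "description": description,
        "endpointConfiguration": {
            "@odata.type": "#microsoft.graph.logicAppTriggerEndpointConfiguration",
            "subscriptionId": subscription_id,
            "resourceGroupName": resource_group,
            "logicAppWorkflowName": logic_app_name,
        },
        "authenticationConfiguration": {
            "@odata.type": "#microsoft.graph.azureAdTokenAuthentication",
            "resourceId": resource_id,
        },
        "clientConfiguration": {"timeoutInMilliseconds": 1000, "maximumRetries": 1},
    }
    if wait_for_callback:
        # Omitting callbackConfiguration makes the extension fire-and-forget.
        body["callbackConfiguration"] = {
            "@odata.type": "#microsoft.graph.identityGovernance.customTaskExtensionCallbackConfiguration",
            "durationBeforeTimeout": "PT1H",
        }
    return body
```

Serializing the returned dict with `json.dumps` yields the request body to POST to the `customTaskExtensions` endpoint.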
++After the task is created, you can run the following GET call to retrieve its details: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/customTaskExtensions +``` ++An example response is as follows: + ```Example Custom Task Extension return +{ + "@odata.context": "https://graph.microsoft.com/beta/$metadata#identityGovernance/lifecycleWorkflows/customTaskExtensions", + "@odata.count": 1, + "value": [ + { + "id": "def9685c-e0f6-45aa-8fe8-a9f7ee6d30d6", + "displayName": "My Custom Task Extension", + "description": "My Custom Task Extension to test Lifecycle workflows Logic App integration", + "createdDateTime": "2022-06-28T10:47:08.9359567Z", + "lastModifiedDateTime": "2022-06-28T10:47:08.936017Z", + "endpointConfiguration": { + "@odata.type": "#microsoft.graph.logicAppTriggerEndpointConfiguration", + "subscriptionId": "c500b67c-e9b7-4ad2-a90d-77d41385ae55", + "resourceGroupName": "RG-LCM", + "logicAppWorkflowName": "LcwDocsTest" + }, + "authenticationConfiguration": { + "@odata.type": "#microsoft.graph.azureAdTokenAuthentication", + "resourceId": "f74118f0-849a-457d-a7e4-ee97eab6017a" + }, + "clientConfiguration": { + "maximumRetries": 1, + "timeoutInMilliseconds": 1000 + }, + "callbackConfiguration": { + "@odata.type": "#microsoft.graph.identityGovernance.customTaskExtensionCallbackConfiguration", + "timeoutDuration": "PT1H" + } + } + ] +} ++``` ++You'll then take the custom extension **ID** and use it as the value in the customTaskExtensionId parameter for the custom task example here: ++> [!NOTE] +> The new "Run a Custom Task Extension" task is already available in the Public Preview UI. 
++```Example of Custom Task extension task +"tasks":[ + { + "taskDefinitionId": "4262b724-8dba-4fad-afc3-43fcbb497a0e", + "continueOnError": false, + "displayName": "<Custom Task Extension displayName>", + "description": "<Custom Task Extension description>", + "isEnabled": true, + "arguments": [ + { + "name": "customTaskExtensionID", + "value": "<ID of your Custom Task Extension>" + } + ] + } +] +++``` ++## Next steps ++- [Lifecycle workflow extensibility (Preview)](lifecycle-workflow-extensibility.md) +- [Manage Workflow Versions](manage-workflow-tasks.md) |
active-directory | Create Lifecycle Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md | + + Title: Create a Lifecycle Workflow - Azure AD (preview) +description: This article guides a user through creating a workflow using Lifecycle Workflows ++++++ Last updated : 02/15/2022+++++# Create a Lifecycle workflow (Preview) +Lifecycle Workflows allows for tasks associated with the lifecycle process to be run automatically for users as they move through their life cycle in your organization. Workflows are made up of: + - tasks - Actions taken when a workflow is triggered. + - execution conditions - Define the who and when of a workflow. That is, who (scope) should this workflow run against, and when (trigger) should it run. ++Workflows can be created and customized for common scenarios using templates, or you can build a workflow from scratch without using a template. Currently, if you use the Azure portal, a created workflow must be based on a template. If you wish to create a workflow without using a template, you must create it using Microsoft Graph. ++## Prerequisites +++## Create a Lifecycle workflow using a template in the Azure portal ++If you're using the Azure portal to create a workflow, you can customize existing templates to meet your organization's needs. This means you can customize the pre-hire common scenario template. To create a workflow based on one of these templates using the Azure portal, do the following steps: ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Select **Azure Active Directory** and then select **Identity Governance**. ++1. In the left menu, select **Lifecycle Workflows (Preview)**. ++1. Select **Workflows (Preview)**. ++1. On the workflows screen, select the workflow template that you want to use. 
+ :::image type="content" source="media/create-lifecycle-workflow/template-list.png" alt-text="Screenshot of a list of lifecycle workflows templates."::: +1. Enter a display name and description for the workflow. The display name must be unique and not match the name of any other workflow you've created. + :::image type="content" source="media/create-lifecycle-workflow/template-basics.png" alt-text="Screenshot of workflow template basic information."::: ++1. Select the **Trigger type** to be used for this workflow. ++1. On **Days from event**, enter the number of days from the event when you want the workflow to go into effect. The valid values are 0 to 60. ++1. **Event timing** allows you to choose whether the days from event are before or after the event. ++1. **Event user attribute** is the event being used to trigger the workflow. For example, with the pre-hire workflow template, an event user attribute is the employee hire date. +++1. Select the **Property** and **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department. ++ :::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of Lifecycle Workflows template scope configuration options."::: ++1. To view your rule syntax, select the **View rule syntax** button. + :::image type="content" source="media/create-lifecycle-workflow/template-syntax.png" alt-text="Screenshot of workflow rule syntax."::: ++1. You can copy and paste multiple user property rules on this screen. For more detailed information on which properties can be included, see [User Properties](/graph/aad-advanced-queries?tabs=http#user-properties) ++1. To add a task to the template, select **Add task**. ++ :::image type="content" source="media/create-lifecycle-workflow/template-tasks.png" alt-text="Screenshot of adding tasks to templates."::: ++1. To enable an existing task on the list, select **enable**. 
You're also able to disable a task by selecting **disable**. ++1. To remove a task from the template, select **Remove** on the selected task. ++1. Review the workflow's settings. ++ :::image type="content" source="media/create-lifecycle-workflow/template-review.png" alt-text="Screenshot of reviewing and creating a template."::: ++1. Select **Create** to create the workflow. +++> [!IMPORTANT] +> By default, a newly created workflow is disabled to allow for the testing of it first on smaller audiences. For more information about testing workflows before rolling them out to many users, see: [run an on-demand workflow](on-demand-workflow.md). ++## Create a workflow using Microsoft Graph ++Workflows can be created using the Microsoft Graph API. Creating a workflow using the Graph API allows you to automatically set it to enabled. Setting it to enabled is done using the `isEnabled` parameter. ++The table below shows the parameters that must be defined during workflow creation: ++|Parameter |Description | +||| +|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver". The category of tasks within a workflow must also contain the category of the workflow to run. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) | +|displayName | A unique string that identifies the workflow. | +|description | A string that describes the purpose of the workflow for administrative use. (Optional) | +|isEnabled | A Boolean value that denotes whether the workflow is set to run or not. If set to "true", then the workflow will run. | +|IsSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. 
| +|executionConditions | An argument that contains: a time-based attribute and an integer parameter (between -60 and 60) defining when a workflow will run, and a scope attribute defining who the workflow runs for. | +|tasks | An argument in a workflow that has a unique displayName and a description. It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md). | +++++To create a joiner workflow, in Microsoft Graph, use the following request and body: +```http +POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows +Content-type: application/json +``` ++```Request body +{ + "category": "joiner", + "displayName": "<Unique workflow name string>", + "description": "<Unique workflow description>", + "isEnabled":true, + "tasks":[ + { + "category": "joiner", + "isEnabled": true, + "taskTemplateId": "<Unique Task template>", + "displayName": "<Unique task name>", + "description": "<Task template description>", + "arguments": "<task arguments>" + } + ], + "executionConditions": { + "@odata.type" : "microsoft.graph.identityGovernance.scopeAndTriggerBasedCondition", + "trigger": { + "@odata.type" : "microsoft.graph.identityGovernance.timeBasedAttributeTrigger", + "timeBasedAttribute":"<time-based trigger argument>", + "arguments": -7 + }, + "scope": { + "@odata.type" : "microsoft.graph.identityGovernance.ruleBasedScope", + "rule": "employeeType eq '<Employee type attribute>' AND department eq '<department attribute>'" + } + } +} ++> [!NOTE] +> Time-based trigger arguments can range from -60 to 60. A negative value denotes **Before** the time-based attribute's date, while a positive value denotes **After**. For example, the -7 in the workflow example above means the workflow runs one week before the time-based attribute's date.
++``` ++To change this workflow from joiner to leaver, change the category parameter to "leaver". To get a list of the task definitions that can be added to your workflow, run the following call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/taskDefinitions +``` ++The response will look like the following: ++```Response body +{ + "@odata.context": "https://graph.microsoft-ppe.com/testppebetalcwpp4/$metadata#identityGovernance/lifecycleWorkflows/taskDefinitions", + "@odata.count": 13, + "value": [ + { + "category": "joiner,leaver", + "description": "Add user to a group", + "displayName": "Add User To Group", + "id": "22085229-5809-45e8-97fd-270d28d66910", + "version": 1, + "parameters": [ + { + "name": "groupID", + "values": [], + "valueType": "string" + } + ] + }, + { + "category": "joiner,leaver", + "description": "Disable user account in the directory", + "displayName": "Disable User Account", + "id": "1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950", + "version": 1, + "parameters": [] + }, + { + "category": "joiner,leaver", + "description": "Enable user account in the directory", + "displayName": "Enable User Account", + "id": "6fc52c9d-398b-4305-9763-15f42c1676fc", + "version": 1, + "parameters": [] + }, + { + "category": "joiner,leaver", + "description": "Run a custom task extension", + "displayName": "run a Custom Task Extension", + "id": "4262b724-8dba-4fad-afc3-43fcbb497a0e", + "version": 1, + "parameters": [ + { + "name": "customtaskextensionID", + "values": [], + "valueType": "string" + } + ] + }, + { + "category": "joiner,leaver", + "description": "Remove user from membership of selected Azure AD groups", + "displayName": "Remove user from selected groups", + "id": "1953a66c-751c-45e5-8bfe-01462c70da3c", + "version": 1, + "parameters": [ + { + "name": "groupID", + "values": [], + "valueType": "string" + } + ] + }, + { + "category": "joiner", + "description": "Generate Temporary Access Password and send via email to user's
manager", + "displayName": "Generate TAP And Send Email", + "id": "1b555e50-7f65-41d5-b514-5894a026d10d", + "version": 1, + "parameters": [ + { + "name": "tapLifetimeMinutes", + "values": [], + "valueType": "string" + }, + { + "name": "tapIsUsableOnce", + "values": [ + "true", + "false" + ], + "valueType": "enum" + } + ] + }, + { + "category": "joiner", + "description": "Send welcome email to new hire", + "displayName": "Send Welcome Email", + "id": "70b29d51-b59a-4773-9280-8841dfd3f2ea", + "version": 1, + "parameters": [] + }, + { + "category": "joiner,leaver", + "description": "Add user to a team", + "displayName": "Add User To Team", + "id": "e440ed8d-25a1-4618-84ce-091ed5be5594", + "version": 1, + "parameters": [ + { + "name": "teamID", + "values": [], + "valueType": "string" + } + ] + }, + { + "category": "leaver", + "description": "Delete user account in Azure AD", + "displayName": "Delete User Account", + "id": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff", + "version": 1, + "parameters": [] + }, + { + "category": "joiner,leaver", + "description": "Remove user from membership of selected Teams", + "displayName": "Remove user from selected Teams", + "id": "06aa7acb-01af-4824-8899-b14e5ed788d6", + "version": 1, + "parameters": [ + { + "name": "teamID", + "values": [], + "valueType": "string" + } + ] + }, + { + "category": "leaver", + "description": "Remove user from all Azure AD groups memberships", + "displayName": "Remove user from all groups", + "id": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc", + "version": 1, + "parameters": [] + }, + { + "category": "leaver", + "description": "Remove user from all Teams memberships", + "displayName": "Remove user from all Teams", + "id": "81f7b200-2816-4b3b-8c5d-dc556f07b024", + "version": 1, + "parameters": [] + }, + { + "category": "leaver", + "description": "Remove all licenses assigned to the user", + "displayName": "Remove all licenses for user", + "id": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e", + "version": 1, + "parameters": [] 
+ } + ] +} ++``` +For further details on task definitions and their parameters, see [Lifecycle Workflow Tasks](lifecycle-workflow-tasks.md). +++## Next steps ++- [Manage a workflow's properties](manage-workflow-properties.md) +- [Manage Workflow Versions](manage-workflow-tasks.md) |
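The joiner request body shown above can also be assembled programmatically before posting it with your preferred Graph client. The following Python sketch is illustrative only — the helper name, argument defaults, and the empty `arguments` list are assumptions, not part of the documented API — but the JSON shape mirrors the POST body in this article, and the range check reflects the -60 to 60 trigger limit:

```python
# Hedged sketch: builds the joiner-workflow request body from this article.
# Helper name, defaults, and the empty "arguments" list are illustrative
# assumptions; the JSON structure mirrors the documented POST body.

def build_joiner_workflow(display_name, description, task_template_id,
                          task_display_name, offset_in_days=-7,
                          time_based_attribute="employeeHireDate",
                          scope_rule="department eq 'Sales'"):
    """Return a body for POST .../lifecycleWorkflows/workflows."""
    if not -60 <= offset_in_days <= 60:
        raise ValueError("offset_in_days must be between -60 and 60")
    return {
        "category": "joiner",
        "displayName": display_name,
        "description": description,
        "isEnabled": True,
        "tasks": [
            {
                "category": "joiner",
                "isEnabled": True,
                "taskTemplateId": task_template_id,
                "displayName": task_display_name,
                "description": "Task added by this sketch",
                "arguments": [],  # task-specific arguments would go here
            }
        ],
        "executionConditions": {
            "@odata.type": "microsoft.graph.identityGovernance.scopeAndTriggerBasedCondition",
            "trigger": {
                "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
                "timeBasedAttribute": time_based_attribute,
                "arguments": offset_in_days,
            },
            "scope": {
                "@odata.type": "microsoft.graph.identityGovernance.ruleBasedScope",
                "rule": scope_rule,
            },
        },
    }
```

The returned dictionary can then be serialized with `json.dumps` and sent as the body of the `POST .../lifecycleWorkflows/workflows` request.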
active-directory | Customize Workflow Schedule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md | + + Title: 'Customize workflow schedule - Azure Active Directory' +description: Describes how to customize the schedule of a Lifecycle Workflow. ++++++ Last updated : 01/20/2022+++++++# Customize the schedule of workflows (Preview) ++Workflows created using Lifecycle Workflows can be fully customized to match the schedule that fits your organization's needs. By default, workflows are scheduled to run every 3 hours, but the interval can be set to run as frequently as every hour, or as infrequently as every 24 hours. +++## Customize the schedule of workflows using Microsoft Graph +++First, to view the current schedule interval of your workflows, run the following GET call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings +``` +++To customize the workflow schedule in Microsoft Graph, use the following request and body: +```http +PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings +Content-type: application/json ++{ +"workflowScheduleIntervalInHours":<Interval between 0-24> +} ++``` ++## Next steps ++- [Manage workflow properties](manage-workflow-properties.md) +- [Delete Lifecycle Workflows](delete-lifecycle-workflow.md) |
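As a quick client-side sanity check before calling the PATCH endpoint above, you can validate the interval first. This is a minimal Python sketch — the helper name is an assumption, and the 1–24 range reflects the hourly-to-daily limits described in the article:

```python
# Hedged sketch (helper name is an assumption): validates the schedule
# interval and builds the PATCH body for the Lifecycle Workflows
# settings endpoint.

def build_schedule_settings(interval_hours):
    """Body for PATCH .../lifecycleWorkflows/settings."""
    if not 1 <= interval_hours <= 24:
        raise ValueError("workflowScheduleIntervalInHours must be between 1 and 24")
    return {"workflowScheduleIntervalInHours": interval_hours}
```

For example, `build_schedule_settings(3)` reproduces the default 3-hour interval as a ready-to-send PATCH body.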
active-directory | Delete Lifecycle Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md | + + Title: 'Delete a Lifecycle workflow - Azure Active Directory' +description: Describes how to delete a Lifecycle Workflow using the Azure portal or Microsoft Graph. ++++++ Last updated : 01/20/2022+++++++# Delete a Lifecycle workflow (Preview) ++You can remove workflows that are no longer needed. Deleting these workflows allows you to make sure your lifecycle strategy is up to date. When a workflow is deleted, it enters a soft delete state. During this period, it can still be viewed within the deleted workflows list, and can be restored if needed. Thirty days after a workflow enters a soft delete state, it's permanently removed. If you don't want to wait 30 days for a workflow to be permanently deleted, you can always manually delete it yourself. +++## Delete a workflow using the Azure portal ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Select **Azure Active Directory** and then select **Identity Governance**. ++1. In the left menu, select **Lifecycle Workflows (Preview)**. ++1. Select **Workflows (Preview)**. ++1. On the workflows screen, select the workflow you want to delete. ++ :::image type="content" source="media/delete-lifecycle-workflow/delete-button.png" alt-text="Screenshot of list of Workflows to delete."::: ++1. With the workflow highlighted, select **Delete**. ++1. Confirm you want to delete the selected workflow. + + :::image type="content" source="media/delete-lifecycle-workflow/delete-workflow.png" alt-text="Screenshot of confirming to delete a workflow."::: ++## View deleted workflows ++After deleting workflows, you can view them on the **Deleted Workflows (Preview)** page. +++1. On the left of the screen, select **Deleted Workflows (Preview)**. ++1.
On this page, you'll see a list of deleted workflows, a description of each workflow, the date it was deleted, and its permanent delete date. By default, the permanent delete date for a workflow is always 30 days after it was originally deleted. ++ :::image type="content" source="media/delete-lifecycle-workflow/deleted-list.png" alt-text="Screenshot of a list of deleted workflows."::: + +1. To restore a deleted workflow, select the workflow you want to restore and select **Restore workflow**. ++1. To permanently delete a workflow immediately, select the workflow you want to delete from the list, and then select **Delete permanently**. +++ ++## Delete a workflow using Microsoft Graph + You can also delete workflows, view deleted workflows, and restore deleted workflows using Microsoft Graph. ++Workflows can be deleted by running the following call: +```http +DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id> +``` +## View deleted workflows using Microsoft Graph +You can view a list of deleted workflows by running the following call: +```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows +``` ++## Permanently delete a workflow using Microsoft Graph +Deleted workflows can be permanently deleted by running the following call: +```http +DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/<id> +``` ++## Restore deleted workflows using Microsoft Graph ++Deleted workflows are available to be restored for 30 days before they're permanently deleted. To restore a deleted workflow, run the following API call: +```http +POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/<id>/restore +``` +> [!NOTE] +> Permanently deleted workflows can't be restored.
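The 30-day soft-delete window described above can be expressed as a short Python sketch. The helper names are illustrative; the 30-day retention period is the documented default:

```python
from datetime import datetime, timedelta, timezone

# Soft-deleted workflows are permanently removed 30 days after deletion.
SOFT_DELETE_RETENTION = timedelta(days=30)

def permanent_delete_date(deleted_at):
    """When a soft-deleted workflow will be permanently removed."""
    return deleted_at + SOFT_DELETE_RETENTION

def can_restore(deleted_at, now):
    """A workflow can be restored only before its permanent delete date."""
    return now < permanent_delete_date(deleted_at)
```

For example, a workflow deleted on August 1 is restorable through August 30 and permanently removed on August 31.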
++## Next steps +- [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md) +- [Manage Lifecycle Workflow Versions](manage-workflow-tasks.md) |
active-directory | How To Lifecycle Workflow Sync Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md | + + Title: 'How to synchronize attributes for Lifecycle workflows' +description: Describes an overview of Lifecycle workflow attributes. ++++++ Last updated : 01/20/2022+++++# How to synchronize attributes for Lifecycle workflows +Workflows contain specific tasks that can run automatically against users based on the specified execution conditions. Automatic workflow scheduling is supported based on the employeeHireDate and employeeLeaveDateTime user attributes in Azure AD. ++To take full advantage of Lifecycle Workflows, user provisioning should be automated, and the scheduling-relevant attributes should be synchronized. ++## Scheduling relevant attributes +The following table shows the scheduling (trigger) relevant attributes and the methods of synchronization that are supported. ++|Attribute|Type|Supported in HR Inbound Provisioning|Supported in Azure AD Connect Cloud Sync|Supported in Azure AD Connect Sync| +|--|--|--|--|--| +|employeeHireDate|DateTimeOffset|Yes|Yes|Yes| +|employeeLeaveDateTime|DateTimeOffset|Not currently (manual setting supported)|Not currently (manual setting supported)|Not currently (manual setting supported)| ++These attributes **are not** automatically populated by synchronization methods such as Azure AD Connect or Azure AD Connect cloud sync. ++> [!NOTE] +> Currently, automatic synchronization of the employeeLeaveDateTime attribute for HR Inbound scenarios is not available. To take advantage of leaver scenarios, you can set the employeeLeaveDateTime manually. Manually setting the attribute can be done in the portal or with Graph. For more information, see [User profile in Azure](../fundamentals/active-directory-users-profile-azure-portal.md) and [Update user](/graph/api/user-update?view=graph-rest-1.0&tabs=http).
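Setting employeeLeaveDateTime manually through Graph means sending a PATCH to the user object with a correctly formatted UTC timestamp. The following Python sketch — the helper name is an assumption — builds such a body in the `YYYY-MM-DDThh:mm:ssZ` format that the Microsoft Graph User API expects:

```python
from datetime import datetime, timedelta, timezone

# Hedged sketch (helper name is an assumption): builds the JSON body for
# the Graph "Update user" call, with employeeLeaveDateTime normalized to
# UTC in the "YYYY-MM-DDThh:mm:ssZ" format Graph expects.

def leave_date_patch_body(leave_dt):
    if leave_dt.tzinfo is None:
        raise ValueError("supply a timezone-aware datetime")
    utc = leave_dt.astimezone(timezone.utc)
    return {"employeeLeaveDateTime": utc.strftime("%Y-%m-%dT%H:%M:%SZ")}
```

The resulting dictionary is the JSON body for the Graph Update user call linked above.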
++This document explains how to set up synchronization of these required attributes from on-premises Active Directory by using Azure AD Connect cloud sync and Azure AD Connect. ++>[!NOTE] +> There's no corresponding EmployeeHireDate or EmployeeLeaveDateTime attribute in Active Directory. If you're importing from on-premises AD, you'll need to identify an attribute in AD that can be used. This attribute must be a string. +++## Understanding EmployeeHireDate and EmployeeLeaveDateTime formatting +The EmployeeHireDate and EmployeeLeaveDateTime attributes contain dates and times that must be formatted in a specific way. This means that you may need to use an expression to convert the value of your source attribute to a format that will be accepted by EmployeeHireDate or EmployeeLeaveDateTime. The table below outlines the expected format and provides example expressions for converting the values. ++|Scenario|Expression/Format|Target|More Information| +|--|--|--|--| +|Workday to Active Directory User Provisioning|FormatDateTime([StatusHireDate], , "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")|On-premises AD string attribute|[Attribute mappings for Workday](../saas-apps/workday-inbound-tutorial.md#below-are-some-example-attribute-mappings-between-workday-and-active-directory-with-some-common-expressions)| +|SuccessFactors to Active Directory User Provisioning|FormatDateTime([endDate], ,"M/d/yyyy hh:mm:ss tt","yyyyMMddHHmmss.fZ")|On-premises AD string attribute|[Attribute mappings for SAP Success Factors](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)| +|Custom import to Active Directory|Must be in the format "yyyyMMddHHmmss.fZ"|On-premises AD string attribute|| +|Microsoft Graph User API|Must be in the format "YYYY-MM-DDThh:mm:ssZ"|EmployeeHireDate and EmployeeLeaveDateTime|| +|Workday to Azure AD User Provisioning|Can use a direct mapping.
No expression is needed but may be used to adjust the time portion of EmployeeHireDate and EmployeeLeaveDateTime|EmployeeHireDate and EmployeeLeaveDateTime|| +|SuccessFactors to Azure AD User Provisioning|Can use a direct mapping. No expression is needed but may be used to adjust the time portion of EmployeeHireDate and EmployeeLeaveDateTime|EmployeeHireDate and EmployeeLeaveDateTime|| ++For more information on expressions, see [Reference for writing expressions for attribute mappings in Azure Active Directory](../app-provisioning/functions-for-customizing-application-data.md) ++The expression examples above use endDate for SAP and StatusHireDate for Workday. However, you may opt to use different attributes. ++For example, you might use StatusContinuesFirstDayOfWork instead of StatusHireDate for Workday. In this instance your expression would be: ++ `FormatDateTime([StatusContinuesFirstDayOfWork], , "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")` +++The following table has a list of suggested attributes and their scenario recommendations. 
++|HR Attribute|HR System|Scenario|Azure AD attribute| +|--|--|--|--| +|StatusHireDate|Workday|Joiner|EmployeeHireDate| +|StatusContinuousFirstDayOfWork|Workday|Joiner|EmployeeHireDate| +|StatusDateEnteredWorkforce|Workday|Joiner|EmployeeHireDate| +|StatusOriginalHireDate|Workday|Joiner|EmployeeHireDate| +|StatusEndEmploymentDate|Workday|Leaver|EmployeeLeaveDateTime| +|StatusResignationDate|Workday|Leaver|EmployeeLeaveDateTime| +|StatusRetirementDate|Workday|Leaver|EmployeeLeaveDateTime| +|StatusTerminationDate|Workday|Leaver|EmployeeLeaveDateTime| +|startDate|SAP SF|Joiner|EmployeeHireDate| +|firstDateWorked|SAP SF|Joiner|EmployeeHireDate| +|lastDateWorked|SAP SF|Leaver|EmployeeLeaveDateTime| +|endDate|SAP SF|Leaver|EmployeeLeaveDateTime| ++For more attributes, see the [Workday attribute reference](../app-provisioning/workday-attribute-reference.md) and [SAP SuccessFactors attribute reference](../app-provisioning/sap-successfactors-attribute-reference.md) +++## Importance of time +To ensure the timing accuracy of scheduled workflows, it's crucial to consider: ++- The time portion of the attribute must be set accordingly. For example, the `employeeHireDate` should have a time at the beginning of the day, like 1 AM or 5 AM, and the `employeeLeaveDateTime` should have a time at the end of the day, like 9 PM or 11 PM. + - The workflow won't run earlier than the time specified in the attribute; however, the [tenant schedule (default 3h)](customize-workflow-schedule.md) may delay the workflow run. For instance, if you set the `employeeHireDate` to 8 AM but the tenant schedule doesn't run until 9 AM, the workflow won't be processed until then. If a new hire is starting at 8 AM, you would want to set the time to something like (start time - tenant schedule interval) to ensure it has run before the employee arrives. +- If you're using a temporary access pass (TAP), it's recommended that you set the maximum lifetime to 24 hours.
Doing this helps ensure that the TAP hasn't expired after being sent to an employee who may be in a different time zone. For more information, see [Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods.](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy) +- When importing the data, you should understand if and how the source provides time zone information for your users, so you can potentially make adjustments to ensure timing accuracy. +++## Create a custom sync rule in Azure AD Connect cloud sync for EmployeeHireDate + The following steps will guide you through creating a synchronization rule using cloud sync. + 1. In the Azure portal, select **Azure Active Directory**. + 2. Select **Azure AD Connect**. + 3. Select **Manage cloud sync**. + 4. Under **Configuration**, select your configuration. + 5. Select **Click to edit mappings**. This link opens the **Attribute mappings** screen. + 6. Select **Add attribute**. + 7. Fill in the following information: + - Mapping Type: Direct + - Source attribute: msDS-cloudExtensionAttribute1 + - Default value: Leave blank + - Target attribute: employeeHireDate + - Apply this mapping: Always + 8. Select **Apply**. + 9. Back on the **Attribute mappings** screen, you should see your new attribute mapping. + 10. Select **Save schema**. ++For more information on attributes, see [Attribute mapping in Azure AD Connect cloud sync.](../cloud-sync/how-to-attribute-mapping.md) ++## How to create a custom sync rule in Azure AD Connect for EmployeeHireDate +The following example will walk you through setting up a custom synchronization rule that synchronizes an Active Directory attribute (msDS-cloudExtensionAttribute1 in this example) to the employeeHireDate attribute in Azure AD. ++ 1. Open a PowerShell window as administrator and run `Set-ADSyncScheduler -SyncCycleEnabled $false`. + 2. Go to Start\Azure AD Connect\ and open the Synchronization Rules Editor. + 3.
Ensure the direction at the top is set to **Inbound**. + 4. Select **Add Rule**. + 5. On the **Create Inbound synchronization rule** screen, enter the following information and select **Next**. + - Name: In from AD - EmployeeHireDate + - Connected System: contoso.com + - Connected System Object Type: user + - Metaverse Object Type: person + - Precedence: 200 + 6. On the **Scoping filter** screen, select **Next**. + 7. On the **Join rules** screen, select **Next**. + 8. On the **Transformations** screen, under **Add transformations**, enter the following information. + - FlowType: Direct + - Target Attribute: employeeHireDate + - Source: msDS-cloudExtensionAttribute1 + 9. Select **Add**. + 10. In the Synchronization Rules Editor, ensure the direction at the top is set to **Outbound**. + 11. Select **Add Rule**. + 12. On the **Create Outbound synchronization rule** screen, enter the following information and select **Next**. + - Name: Out to Azure AD - EmployeeHireDate + - Connected System: <your tenant> + - Connected System Object Type: user + - Metaverse Object Type: person + - Precedence: 201 + 13. On the **Scoping filter** screen, select **Next**. + 14. On the **Join rules** screen, select **Next**. + 15. On the **Transformations** screen, under **Add transformations**, enter the following information. + - FlowType: Direct + - Target Attribute: employeeHireDate + - Source: employeeHireDate + 16. Select **Add**. + 17. Close the Synchronization Rules Editor. +++++++For more information, see [How to customize a synchronization rule](../hybrid/how-to-connect-create-custom-sync-rule.md) and [Make a change to the default configuration.](../hybrid/how-to-connect-sync-change-the-configuration.md) ++++## Next steps +- [What are lifecycle workflows?](what-are-lifecycle-workflows.md) +- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md) +- [Create a Lifecycle workflow](create-lifecycle-workflow.md) |
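To make the string formats in the article above concrete, the following Python sketch performs the same kind of conversion the FormatDateTime expressions describe — turning a `yyyy-MM-ddzzz` source value into the `yyyyMMddHHmmss.fZ` generalized-time string expected in the on-premises AD string attribute. It's purely illustrative; in practice the provisioning service evaluates the expression for you, and the UTC normalization shown here is an assumption:

```python
from datetime import datetime, timezone

# Illustrative sketch only: mimics FormatDateTime([StatusHireDate], ,
# "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ"). Normalizing to UTC before
# formatting is an assumption made for this example.

def to_ad_generalized_time(source_value):
    """Convert e.g. '2022-09-01-08:00' to '20220901080000.0Z'."""
    dt = datetime.strptime(source_value, "%Y-%m-%d%z")  # date plus offset
    utc = dt.astimezone(timezone.utc)
    return utc.strftime("%Y%m%d%H%M%S") + ".0Z"
```

For example, a hire date of `2022-09-01-08:00` (midnight at UTC-8) becomes `20220901080000.0Z`.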
active-directory | Identity Governance Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md | For many organizations, identity lifecycle for employees is tied to the represen Increasingly, scenarios require collaboration with people outside your organization. [Azure AD B2B](/azure/active-directory/b2b/) collaboration enables you to securely share your organization's applications and services with guest users and external partners from any organization, while maintaining control over your own corporate data. [Azure AD entitlement management](entitlement-management-overview.md) enables you to select which organization's users are allowed to request access and be added as B2B guests to your organization's directory, and ensures that these guests are removed when they no longer need access. +Organizations are able to automate the identity lifecycle management process by using [Lifecycle Workflows](what-are-lifecycle-workflows.md). Workflows can be created to automatically run tasks for a user before they enter the organization, as they change states during their time in the organization, and as they leave the organization. For example, a workflow can be configured to send an email with a temporary password to a new user's manager, or a welcome email to the user on their first day. + ## Access lifecycle Organizations need a process to manage access beyond what was initially provisioned for a user when that user's identity was created. Furthermore, enterprise organizations need to be able to scale efficiently to be able to develop and enforce access policy and controls on an ongoing basis. Typically, IT delegates access approval decisions to business decision makers. 
Organizations can automate the access lifecycle process through technologies such as [dynamic groups](../enterprise-users/groups-dynamic-membership.md), coupled with user provisioning to [SaaS apps](../saas-apps/tutorial-list.md) or [apps integrated with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md). Organizations can also control which [guest users have access to on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). These access rights can then be regularly reviewed using recurring [Azure AD access reviews](access-reviews-overview.md). [Azure AD entitlement management](entitlement-management-overview.md) also enables you to define how users request access across packages of group and team memberships, application roles, and SharePoint Online roles. For more information, see the [simplifying identity governance tasks with automation](#simplifying-identity-governance-tasks-with-automation) section below to select the appropriate Azure AD features for your access lifecycle automation scenarios. +The access lifecycle can be automated using workflows. [Workflows can be created](create-lifecycle-workflow.md) to automatically add users to groups through which access to applications and resources is granted. When a user's condition within the organization changes, they can be moved to different groups, and can even be removed from all groups entirely. + When a user attempts to access applications, Azure AD enforces [Conditional Access](../conditional-access/index.yml) policies. For example, Conditional Access policies can include displaying a [terms of use](../conditional-access/terms-of-use.md) and [ensuring the user has agreed to those terms](../conditional-access/require-tou.md) prior to being able to access an application. For more information, see [govern access to applications in your environment](identity-governance-applications-prepare.md).
## Privileged access lifecycle In addition to the features listed above, additional Azure AD features frequently used in identity governance are listed in the following table: |Identity lifecycle (employees)|Admins can enable user account provisioning from Workday or SuccessFactors cloud HR, or on-premises HR.|[cloud HR to Azure AD user provisioning](../app-provisioning/plan-cloud-hr-provision.md)| |Identity lifecycle (guests)|Admins can enable self-service guest user onboarding from another Azure AD tenant, direct federation, One Time Passcode (OTP) or Google accounts. Guest users are automatically provisioned and deprovisioned subject to lifecycle policies.|[Entitlement management](entitlement-management-overview.md) using [B2B](../external-identities/what-is-b2b.md)| |Entitlement management|Resource owners can create access packages containing apps, Teams, Azure AD and Microsoft 365 groups, and SharePoint Online sites.|[Entitlement management](entitlement-management-overview.md)|+|Lifecycle Workflows|Admins can enable the automation of the lifecycle process based on user conditions.|[Lifecycle Workflows](what-are-lifecycle-workflows.md)| |Access requests|End users can request group membership or application access. End users, including guests from other organizations, can request access to access packages.|[Entitlement management](entitlement-management-overview.md)| |Workflow|Resource owners can define the approvers and escalation approvers for access requests and approvers for role activation requests. |[Entitlement management](entitlement-management-overview.md) and [PIM](../privileged-identity-management/pim-configure.md)| |Policy and role management|Admins can define conditional access policies for run-time access to applications. Resource owners can define policies for users' access via access packages.|[Conditional access](../conditional-access/overview.md) and [Entitlement management](entitlement-management-overview.md) policies| |
active-directory | Lifecycle Workflow Extensibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md | + + Title: Workflow Extensibility - Azure Active Directory +description: Conceptual article discussing workflow extensibility with Lifecycle Workflows ++++++ Last updated : 07/06/2022+++++# Lifecycle Workflows Custom Task Extension (Preview) +++Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you can use custom task extensions to call out to external systems as part of a workflow, extending what your workflows can accomplish. For example, when a user joins your organization you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants a manager access to an email account when a user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom task extensions that call out to [Azure Logic Apps](/azure/logic-apps/logic-apps-overview). +++## Prerequisite Logic App roles required for integration with the custom task extension ++When linking your Azure Logic App with the custom task extension task, certain role assignments must be in place before the link can be established. ++The roles on the Azure Logic App that make it compatible with the custom task extension are as follows: ++- **Logic App contributor** +- **Contributor** +- **Owner** ++> [!NOTE] +> The **Logic App Operator** role alone will not make an Azure Logic App compatible with the custom task extension.
For more information on the required **Logic App contributor** role, see: [Logic App Contributor](/azure/role-based-access-control/built-in-roles#logic-app-contributor). ++## Custom task extension deployment scenarios ++When creating custom task extensions, they can interact with Lifecycle Workflows in one of two ways: ++ :::image type="content" source="media/lifecycle-workflow-extensibility/task-extension-deployment-scenarios.png" alt-text="Screenshot of custom task deployment scenarios."::: ++- **Launch and complete**- The Azure Logic App is started, and the subsequent task execution immediately continues with no response expected from the Azure Logic App. This scenario is best suited if the Lifecycle workflow doesn't require any feedback (including status) from the Azure Logic App. With this scenario, as long as the Logic App is started successfully, the task is viewed as a success. +- **Launch and wait**- The Azure Logic App is started, and the subsequent task's execution waits for the response from the Logic App. You enter a time duration for how long the custom task extension should wait for a response from the Azure Logic App. If no response is received within the customer-defined duration window, the task is considered failed. + :::image type="content" source="media/lifecycle-workflow-extensibility/custom-task-launch-wait.png" alt-text="Screenshot of custom task launch and wait task choice."::: ++## Custom task extension integration with Azure Logic Apps high-level steps ++The high-level steps for the Azure Logic Apps integration are as follows: ++> [!NOTE] +> Creating a custom task extension and logic app through the workflows page in the Azure portal will automate most of these steps. For a guide on creating a custom task extension this way, see: [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md).
++- **Create a consumption-based Azure Logic App**: A consumption-based Azure Logic App that is called from the custom task extension. +- **Configure the Azure Logic App so it's compatible with Lifecycle workflows**: Configuring the consumption-based Azure Logic App so that it can be used with the custom task extension. +- **Build your custom business logic within your Azure Logic App**: Set up your business logic within the Azure Logic App using the Logic App designer. +- **Create a lifecycle workflow customTaskExtension which holds necessary information about the Azure Logic App**: Creating a custom task extension that references the configured Azure Logic App. +- **Update or create a Lifecycle workflow with the "Run a custom task extension" task, referencing your created customTaskExtension**: Adding the newly created custom task extension to a new workflow, or adding it to an existing workflow. ++## Logic App parameters used by the custom task ++When creating a custom task extension from the Azure portal, you can create a new Logic App or link to an existing one. ++The following information is supplied to the custom task from the Logic App: ++- Subscription +- Resource group +- Logic App name +++For a guide on supplying this information to a custom task extension via Microsoft Graph, see: [Configure a Logic App for Lifecycle Workflow use](configure-logic-app-lifecycle-workflows.md). ++## Next steps ++- [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md) +- [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md) |
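When creating the customTaskExtension via Microsoft Graph, the three Logic App values listed above end up in the request body. The following Python sketch assembles such a body; the exact field names (`endpointConfiguration`, `callbackConfiguration`) and the ISO 8601 timeout format are assumptions for illustration — check the Microsoft Graph reference for the authoritative schema:

```python
# Hedged sketch: the field names below are assumptions based on the
# Logic App parameters listed in the article (subscription, resource
# group, Logic App name); verify against the Graph reference.

def build_custom_task_extension(display_name, description,
                                subscription_id, resource_group,
                                logic_app_name, wait_minutes=None):
    body = {
        "displayName": display_name,
        "description": description,
        "endpointConfiguration": {
            "subscriptionId": subscription_id,
            "resourceGroupName": resource_group,
            "logicAppWorkflowName": logic_app_name,
        },
    }
    if wait_minutes is not None:
        # "Launch and wait": how long the task waits for a response
        # before it's considered failed (ISO 8601 duration).
        body["callbackConfiguration"] = {"timeoutDuration": f"PT{wait_minutes}M"}
    return body
```

Omitting `wait_minutes` corresponds to the "launch and complete" scenario, where no response is expected from the Logic App.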
active-directory | Lifecycle Workflow History | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-history.md | + + Title: Lifecycle Workflow History +description: Conceptual article about Lifecycle Workflows reporting and history capabilities ++++++ Last updated : 08/01/2022+++++# Lifecycle Workflows history (Preview) ++++Workflows created using Lifecycle Workflows allow for the automation of lifecycle tasks for users no matter where they fall in the Joiner-Mover-Leaver (JML) model of their identity lifecycle in your organization. Making sure workflows are processed correctly is an important part of an organization's lifecycle management process. Workflows that aren't processed correctly can lead to many issues in terms of security and compliance. With audit logs, every action that Lifecycle Workflows completes is recorded. With history features, Lifecycle Workflows allow you to view workflow events as user, run, or task summaries. This reporting feature allows you to quickly see what ran for whom, and whether or not it was successful. In this article you'll learn the difference between audit logs and the three different types of history summaries you can query with Lifecycle Workflows. You'll also learn when to use each to get more information about how your workflows were utilized for users in your organization. ++++## Audit Logs ++Every time a workflow is processed, an event is logged. These events are stored in the **Audit Logs** section, and can be used to gain information about workflows for historical, and auditing, purposes. +++On the **Audit Log** page you're presented with a sequential list, by date, of every action Lifecycle Workflows has taken. From this information you're able to filter based on the following parameters: +++|Filter |Description | +||| +|Date | You can filter a specific range for the audit logs from as short as 24 hours up to 30 days.
| +|Date option | You can filter by your tenant's local time, or by UTC. | +|Service | The Lifecycle Workflow service. | +|Category | Categories of the event being logged. Separated into <br><br> **All**- All events logged by Lifecycle Workflows.<br><br> **TaskManagement**- Task-specific events logged by Lifecycle Workflows. <br><br> **WorkflowManagement**- Events dealing with the workflow itself. | +|Activity | You can filter based on specific activities, which are based on categories. | ++After filtering this information, you're also able to see other information in the log such as: ++- **Status**: Whether or not the logged event was successful. +- **Status Reason**: If the event failed, the reason why. +- **Target(s)**: Who the logged event ran for. Information given as their Azure Active Directory object ID. +- **Initiated by (actor)**: Who initiated the logged event. Information given as the user name. ++## Lifecycle Workflow History Summaries ++While the large set of information contained in audit logs can be useful for compliance reasons, for regular administration use it might be too much information. To make this large set of information easier to read, Lifecycle Workflows provide summaries for quick use. You can view these history summaries in three ways: ++- **Users summary**: Shows a summary of users processed by a workflow, along with the total, successful, and failed task counts for each user. +- **Runs summary**: Shows a summary of workflow runs at the workflow level, including successful, failed, and total task counts for each run. +- **Tasks summary**: Shows a summary of tasks processed by a workflow, along with the total, successful, and failed run counts for each task. ++Summaries allow you to quickly see how a workflow ran, either overall or for specific users, without digging into the full logs.
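The rollup the summaries describe can be sketched in a few lines. The following Python example is illustrative only: the `TaskResult` record shape is hypothetical, and the real counts come from the Lifecycle Workflows reporting views in the Azure portal, not from local data.

```python
from dataclasses import dataclass

# Hypothetical record shape, for illustration only; real summary data comes
# from the Lifecycle Workflows reporting views in the Azure portal.
@dataclass
class TaskResult:
    user_id: str
    task_name: str
    succeeded: bool

def users_summary(results):
    """Roll task results up per user: total, successful, and failed task counts."""
    summary = {}
    for r in results:
        entry = summary.setdefault(r.user_id, {"total": 0, "successful": 0, "failed": 0})
        entry["total"] += 1
        entry["successful" if r.succeeded else "failed"] += 1
    return summary
```

For example, two results for one user (one success, one failure) roll up to a total of 2 with 1 successful and 1 failed task for that user; the runs and tasks summaries group the same results by run and by task name instead of by user.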
For a step-by-step guide on getting this information, see [Check the status of a workflow (Preview)](check-status-workflow.md). ++## Users Summary information ++User summaries allow you to view workflow information through the lens of the users it has processed. ++++Within the user summary you're able to find the following information: +++|Parameter |Description | +||| +|Total Processed | The total number of users processed by a workflow during the selected time frame. | +|Successful | The total number of users successfully processed by a workflow during the selected time frame. | +|Failed | The total number of users that failed processing by a workflow during the selected time frame. | +|Total tasks | The total number of tasks processed for users in a workflow during the selected time frame. | +|Failed tasks | The total number of failed tasks processed for users in a workflow during the selected time frame. | +++User summaries allow you to filter based on: ++- **Date**: You can filter a specific range from as short as 24 hours up to 30 days of when the workflow ran. +- **Status**: You can filter a specific status of the user processed. The supported statuses are: **Completed**, **In Progress**, **Queued**, **Canceled**, **Completed with errors**, and **Failed**. +- **Workflow execution type**: You can filter on workflow execution type such as **Scheduled** or **On-demand**. +- **Completed date**: You can filter a specific range from as short as 24 hours up to 30 days of when the user was processed in a workflow. ++For a complete guide on getting user processed summary information, see: [User workflow history using the Azure portal](check-status-workflow.md#user-workflow-history-using-the-azure-portal). +++## Runs Summary ++Runs summaries allow you to view workflow information through the lens of its run history. +++Within the runs summary you're able to find the following information: +++|Parameter |Description | +||| +|Total Processed | The total number of times the workflow has run.
| +|Successful | Workflow runs that completed successfully. | +|Failed | Workflow runs that failed. | +|Failed tasks | Workflow runs that completed with failed tasks. | ++Runs summaries allow you to filter based on: ++- **Date**: You can filter a specific range from as short as 24 hours up to 30 days of when the workflow ran. +- **Status**: You can filter a specific status of the workflow run. The supported statuses are: **Completed**, **In Progress**, **Queued**, **Canceled**, **Completed with errors**, and **Failed**. +- **Workflow execution type**: You can filter on workflow execution type such as **Scheduled** or **On-demand**. +- **Completed date**: You can filter a specific range from as short as 24 hours up to 30 days of when the workflow ran. ++For a complete guide on getting runs information, see: [Run workflow history using the Azure portal](check-status-workflow.md#run-workflow-history-using-the-azure-portal). +++## Tasks summary ++Task summaries allow you to view workflow information through the lens of its tasks. +++Within the tasks summary you're able to find the following information: +++|Parameter |Description | +||| +|Total Processed | The total number of tasks processed by a workflow. | +|Successful | The number of tasks successfully processed by a workflow. | +|Failed | The number of tasks that failed processing by a workflow. | +|Unprocessed | The number of tasks not processed by a workflow. | ++Task summaries allow you to filter based on: ++- **Date**: You can filter a specific range from as short as 24 hours up to 30 days of when the workflow ran. +- **Status**: You can filter a specific status of the workflow run. The supported statuses are: **Completed**, **In Progress**, **Queued**, **Canceled**, **Completed with errors**, and **Failed**. +- **Completed date**: You can filter a specific range from as short as 24 hours up to 30 days of when the workflow ran. +- **Tasks**: You can filter based on specific task names.
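The filters above share the same shape: a date window of 24 hours to 30 days, plus an optional status from the supported set. The portal applies these filters server-side; the following Python sketch just illustrates the constraints, with a hypothetical record shape.

```python
from datetime import datetime, timedelta

# Supported status values listed in the filter tables above.
SUPPORTED_STATUSES = {"Completed", "In Progress", "Queued", "Canceled",
                      "Completed with errors", "Failed"}

def filter_summaries(records, days, status=None, now=None):
    """Illustrative filter: 24-hour-to-30-day window plus optional status."""
    if not 1 <= days <= 30:
        raise ValueError("date range must be between 24 hours (1 day) and 30 days")
    if status is not None and status not in SUPPORTED_STATUSES:
        raise ValueError(f"unsupported status: {status}")
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    return [r for r in records
            if r["completedDateTime"] >= cutoff
            and (status is None or r["status"] == status)]
```

A record shape with `status` and `completedDateTime` keys is assumed here purely for illustration; the actual summary records live in the service.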
++Separating workflow processing from task processing is important because, when a workflow processes a user, certain tasks could succeed while others fail. Whether or not a task runs after a failed task depends on parameters such as **continueOnError** and the task's placement within the workflow. For more information, see [Common task parameters](lifecycle-workflow-tasks.md#common-task-parameters-preview). ++## Next steps ++- [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md) +- [Lifecycle Workflow templates](lifecycle-workflow-templates.md) + |
active-directory | Lifecycle Workflow Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md | + + Title: Lifecycle Workflows tasks and definitions - Azure Active Directory +description: This article guides a user on Workflow task definitions and task parameters. ++++++ Last updated : 03/23/2022+++# Lifecycle Workflow built-in tasks (preview) ++Lifecycle Workflows come with many pre-configured tasks that are designed to automate common lifecycle management scenarios. These built-in tasks can be used to build customized workflows that suit your organization's needs. These tasks can be configured within seconds to create new workflows. These tasks also have categories based on the Joiner-Mover-Leaver model so that they can be easily placed into workflows based on need. In this article you'll get the complete list of tasks, information on the common parameters each task has, and a list of unique parameters needed for each specific task. +++## Supported tasks (preview) + +Lifecycle Workflow's built-in tasks each include an identifier, known as a **taskDefinitionID**. They can be used to create new workflows from scratch, or be inserted into workflow templates so that they fit the needs of your organization. For more information on templates available for use with Lifecycle Workflows, see: [Lifecycle Workflow Templates](lifecycle-workflow-templates.md).
++Lifecycle Workflows currently support the following tasks: ++|Task |taskDefinitionID | +||| +|[Send welcome email to new hire](lifecycle-workflow-tasks.md#send-welcome-email-to-new-hire) | 70b29d51-b59a-4773-9280-8841dfd3f2ea | +|[Generate Temporary Access Password and send via email to user's manager](lifecycle-workflow-tasks.md#generate-temporary-access-password-and-send-via-email-to-users-manager) | 1b555e50-7f65-41d5-b514-5894a026d10d | +|[Add user to group](lifecycle-workflow-tasks.md#add-user-to-group) | 22085229-5809-45e8-97fd-270d28d66910 | +|[Add user to team](lifecycle-workflow-tasks.md#add-user-to-team) | e440ed8d-25a1-4618-84ce-091ed5be5594 | +|[Enable user account](lifecycle-workflow-tasks.md#enable-user-account) | 6fc52c9d-398b-4305-9763-15f42c1676fc | +|[Run a custom task extension](lifecycle-workflow-tasks.md#run-a-custom-task-extension) | 4262b724-8dba-4fad-afc3-43fcbb497a0e | +|[Disable user account](lifecycle-workflow-tasks.md#disable-user-account) | 1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950 | +|[Remove user from group](lifecycle-workflow-tasks.md#remove-user-from-groups) | 1953a66c-751c-45e5-8bfe-01462c70da3c | +|[Remove users from all groups](lifecycle-workflow-tasks.md#remove-users-from-all-groups) | b3a31406-2a15-4c9a-b25b-a658fa5f07fc | +|[Remove user from teams](lifecycle-workflow-tasks.md#remove-user-from-teams) | 06aa7acb-01af-4824-8899-b14e5ed788d6 | +|[Remove user from all teams](lifecycle-workflow-tasks.md#remove-users-from-all-teams) | 81f7b200-2816-4b3b-8c5d-dc556f07b024 | +|[Remove all license assignments from user](lifecycle-workflow-tasks.md#remove-all-license-assignments-from-user) | 8fa97d28-3e52-4985-b3a9-a1126f9b8b4e | +|[Delete user](lifecycle-workflow-tasks.md#delete-user) | 8d18588d-9ad3-4c0f-99d0-ec215f0e3dff | +|[Send email to manager before user last day](lifecycle-workflow-tasks.md#send-email-to-manager-before-user-last-day) | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 | +|[Send email on users last 
day](lifecycle-workflow-tasks.md#send-email-on-users-last-day) | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 | +|[Send offboarding email to users manager after their last day](lifecycle-workflow-tasks.md#send-offboarding-email-to-users-manager-after-their-last-day) | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce | +++## Common task parameters (preview) ++Common task parameters are the non-unique parameters contained in every task. When adding tasks to a new workflow, or a workflow template, you can customize and configure these parameters so that they match your requirements. +++|Parameter |Definition | +||| +|category | A read-only string that identifies the category or categories of the task. Automatically determined when the taskDefinitionID is chosen. | +|taskDefinitionId | A string referencing a taskDefinition which determines which task to run. | +|isEnabled | A boolean value that denotes whether the task is set to run or not. If set to "true", the task runs. Defaults to true. | +|displayName | A unique string that identifies the task. | +|description | A string that describes the purpose of the task for administrative use. (Optional) | +|executionSequence | A read-only integer that states the order in which the task runs in a workflow. For more information about executionSequence and workflow order, see: [Execution conditions](understanding-lifecycle-workflows.md#parts-of-a-workflow). | +|continueOnError | A boolean value that determines whether the failure of this task stops subsequent tasks in the workflow from running. | +|arguments | Contains unique parameters relevant for the given task. | ++++## Task details (preview) ++Below is each specific task, along with detailed information such as the parameters and prerequisites required for it to run successfully. The parameters are noted as they appear both in the Azure portal, and within Microsoft Graph.
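The common parameters above combine into one task object. The following Python sketch assembles such an object; the helper name and its defaulting behavior are illustrative, not part of the Lifecycle Workflows API (note that `category` and `executionSequence` are read-only and set by the service, so they are omitted here).

```python
# A minimal sketch of assembling a task object with the common parameters
# from the table above. Illustrative only: the helper and its defaults are
# not part of the Lifecycle Workflows API.
def build_task(task_definition_id, display_name, description="",
               is_enabled=True, continue_on_error=False, arguments=None):
    if not task_definition_id:
        raise ValueError("taskDefinitionId is required")
    return {
        "taskDefinitionId": task_definition_id,
        "displayName": display_name,
        "description": description,
        "isEnabled": is_enabled,          # defaults to true, as noted above
        "continueOnError": continue_on_error,
        "arguments": arguments or [],     # unique, task-specific parameters
    }
```

For example, `build_task("70b29d51-b59a-4773-9280-8841dfd3f2ea", "Send Welcome Email")` produces a body matching the "Send welcome email to new hire" example later in this article.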
For information about editing Lifecycle Workflow tasks in general, see: [Manage workflow Versions](manage-workflow-tasks.md). +++### Send welcome email to new hire +++Lifecycle Workflows allow you to automate the sending of welcome emails to new hires in your organization. You're able to customize the task name and description for this task in the Azure portal. +++The Azure AD prerequisite to run the **Send welcome email to new hire** task is: ++- A populated mail attribute for the user. +++For Microsoft Graph the parameters for the **Send welcome email to new hire** task are as follows: ++|Parameter |Definition | +||| +|category | joiner | +|displayName | Send Welcome Email (Customizable by user) | +|description | Send welcome email to new hire (Customizable by user) | +|taskDefinitionId | 70b29d51-b59a-4773-9280-8841dfd3f2ea | ++++```Example for usage within the workflow +{ + "category": "joiner", + "description": "Send welcome email to new hire", + "displayName": "Send Welcome Email", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "70b29d51-b59a-4773-9280-8841dfd3f2ea", + "arguments": [] +} ++``` ++### Generate Temporary Access Password and send via email to user's manager ++When a compatible user joins your organization, Lifecycle Workflows allow you to automatically generate a Temporary Access Password (TAP) and have it sent to the new user's manager. ++With this task in the Azure portal, you're able to give the task a name and description. You must also set the following: ++- **Activation duration**: How long the password is active. +- **One time use**: Whether the password can be used only once. + ++The Azure AD prerequisites to run the **Generate Temporary Access Password and send via email to user's manager** task are: ++- A populated manager attribute for the user. +- A populated manager's mail attribute for the user. +- An enabled TAP tenant policy.
For more information, see [Enable the Temporary Access Pass policy](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy) + ++> [!IMPORTANT] +> For this task to work for a user, the user must not have any other authentication methods, sign-ins, or Azure AD role assignments. ++For Microsoft Graph the parameters for the **Generate Temporary Access Password and send via email to user's manager** task are as follows: ++|Parameter |Definition | +||| +|category | joiner | +|displayName | GenerateTAPAndSendEmail (Customizable by user) | +|description | Generate Temporary Access Password and send via email to user's manager (Customizable by user) | +|taskDefinitionId | 1b555e50-7f65-41d5-b514-5894a026d10d | +|arguments | Argument contains the name parameter "tapLifetimeMinutes", which is the lifetime of the temporaryAccessPass in minutes starting at startDateTime. Minimum 10, Maximum 43200 (equivalent to 30 days). The argument also contains the tapIsUsableOnce parameter, which determines whether the password is limited to a one-time use. If true, the pass can be used once; if false, the pass can be used multiple times within the temporaryAccessPass lifetime. | +++```Example for usage within the workflow +{ + "category": "joiner", + "description": "Generate Temporary Access Password and send via email to user's manager", + "displayName": "GenerateTAPAndSendEmail", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "1b555e50-7f65-41d5-b514-5894a026d10d", + "arguments": [ + { + "name": "tapLifetimeMinutes", + "value": "60" + }, + { + "name": "tapIsUsableOnce", + "value": "true" + } + ] +} ++``` ++> [!NOTE] +> The employee hire date is the same as the startDateTime used for the tapLifetimeMinutes parameter. +++### Add user to group ++Allows users to be added to a cloud-only group.
To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). ++You're able to customize the task name and description for this task. +++For Microsoft Graph the parameters for the **Add user to group** task are as follows: ++|Parameter |Definition | +||| +|category | joiner,leaver | +|displayName | AddUserToGroup (Customizable by user) | +|description | Add user to group (Customizable by user) | +|taskDefinitionId | 22085229-5809-45e8-97fd-270d28d66910 | +|arguments | Argument contains a name parameter that is the "groupID", and a value parameter which is the group ID of the group you are adding the user to. | +++```Example for usage within the workflow +{ + "category": "joiner,leaver", + "description": "Add user to group", + "displayName": "AddUserToGroup", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "22085229-5809-45e8-97fd-270d28d66910", + "arguments": [ + { + "name": "groupID", + "value": "0732f92d-6eb5-4560-80a4-4bf242a7d501" + } + ] +} ++``` +++### Add user to team ++You're able to add a user to an existing static team. You're able to customize the task name and description for this task. +++For Microsoft Graph the parameters for the **Add user to team** task are as follows: ++|Parameter |Definition | +||| +|category | joiner,leaver | +|displayName | AddUserToTeam (Customizable by user) | +|description | Add user to team (Customizable by user) | +|taskDefinitionId | e440ed8d-25a1-4618-84ce-091ed5be5594 | +|argument | Argument contains a name parameter that is the "teamID", and a value parameter which is the team ID of the existing team you are adding a user to. 
| ++++```Example for usage within the workflow +{ + "category": "joiner,leaver", + "description": "Add user to team", + "displayName": "AddUserToTeam", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "e440ed8d-25a1-4618-84ce-091ed5be5594", + "arguments": [ + { + "name": "teamID", + "value": "e3cc382a-c4b6-4a8c-b26d-a9a3855421bd" + } + ] +} ++``` ++### Enable user account ++Allows cloud-only user accounts to be enabled. You're able to customize the task name and description for this task in the Azure portal. ++++For Microsoft Graph the parameters for the **Enable user account** task are as follows: ++|Parameter |Definition | +||| +|category | joiner,leaver | +|displayName | EnableUserAccount (Customizable by user) | +|description | Enable user account (Customizable by user) | +|taskDefinitionId | 6fc52c9d-398b-4305-9763-15f42c1676fc | ++++```Example for usage within the workflow + { + "category": "joiner,leaver", + "description": "Enable user account", + "displayName": "EnableUserAccount", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "6fc52c9d-398b-4305-9763-15f42c1676fc", + "arguments": [] +} ++``` ++### Run a Custom Task Extension ++Workflows can be configured to launch a custom task extension. You're able to customize the task name and description for this task using the Azure portal. +++The Azure AD prerequisite to run the **Run a Custom Task Extension** task is: ++- A Logic App that is compatible with the custom task extension. For more information, see: [Lifecycle workflow extensibility](lifecycle-workflow-extensibility.md). 
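Since the custom task extension is referenced by ID, assembling its task body follows the same pattern as the JSON example in this section. The Python sketch below is illustrative only; the extension ID is a placeholder you would replace with the ID of your own custom task extension.

```python
# Hedged sketch: assembling the "Run a Custom Task Extension" task body.
# The CustomTaskExtensionID argument references the custom task extension
# created for your compatible Logic App.
def custom_task_extension_task(extension_id):
    return {
        "category": "joiner,leaver",
        "description": "Run a Custom Task Extension to call-out to an external system.",
        "displayName": "Run a Custom Task Extension",
        "isEnabled": True,
        "continueOnError": True,
        "taskDefinitionId": "d79d1fcc-16be-490c-a865-f4533b1639ee",
        "arguments": [
            # Placeholder value; substitute your own custom task extension ID.
            {"name": "CustomTaskExtensionID", "value": extension_id},
        ],
    }
```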
++For Microsoft Graph the parameters for the **Run a Custom Task Extension** task are as follows: ++|Parameter |Definition | +||| +|category | joiner,leaver | +|displayName | Run a Custom Task Extension (Customizable by user) | +|description | Run a custom Task Extension (Customizable by user) | +|taskDefinitionId | d79d1fcc-16be-490c-a865-f4533b1639ee | +|argument | Argument contains a name parameter that is the "CustomTaskExtensionID", and a value parameter which is the ID of the custom task extension created for your Logic App. | +++++```Example for usage within the workflow +{ + "category": "joiner,leaver", + "description": "Run a Custom Task Extension to call-out to an external system.", + "displayName": "Run a Custom Task Extension", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "d79d1fcc-16be-490c-a865-f4533b1639ee", + "arguments": [ + { + "name": "CustomTaskExtensionID", + "value": "<ID of your Custom Task Extension>" + } + ] +} ++``` ++For more information on setting up a Logic app to run with Lifecycle Workflows, see: [Trigger Logic Apps with custom Lifecycle Workflow tasks](trigger-custom-task.md). ++### Disable user account ++Allows cloud-only user accounts to be disabled. You're able to customize the task name and description for this task in the Azure portal. ++++For Microsoft Graph the parameters for the **Disable user account** task are as follows: ++|Parameter |Definition | +||| +|category | joiner,leaver | +|displayName | DisableUserAccount (Customizable by user) | +|description | Disable user account (Customizable by user) | +|taskDefinitionId | 1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950 | +++```Example for usage within the workflow +{ + "category": "joiner,leaver", + "description": "Disable user account", + "displayName": "DisableUserAccount", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950", + "arguments": [] +} ++``` ++### Remove user from groups ++Allows you to remove a user from cloud-only groups.
To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). ++You're able to customize the task name and description for this task in the Azure portal. ++++For Microsoft Graph the parameters for the **Remove user from groups** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Remove user from selected groups (Customizable by user) | +|description | Remove user from membership of selected Azure AD groups (Customizable by user) | +|taskDefinitionId | 1953a66c-751c-45e5-8bfe-01462c70da3c | +|argument | Argument contains a name parameter that is the "groupID", and a value parameter which is the group ID(s) of the group or groups you are removing the user from. | ++++```Example for usage within the workflow +{ + "category": "leaver", + "displayName": "Remove user from selected groups", + "description": "Remove user from membership of selected Azure AD groups", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "1953a66c-751c-45e5-8bfe-01462c70da3c", + "arguments": [ + { + "name": "groupID", + "value": "GroupId1, GroupId2, GroupId3, ..." + } + ] +} ++``` ++### Remove users from all groups ++Allows users to be removed from every cloud-only group they are a member of. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md). +++You're able to customize the task name and description for this task in the Azure portal.
++ :::image type="content" source="media/lifecycle-workflow-task/remove-all-groups-task.png" alt-text="Screenshot of Workflows task: remove user from all groups."::: +++For Microsoft Graph the parameters for the **Remove users from all groups** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Remove user from all groups (Customizable by user) | +|description | Remove user from all Azure AD groups memberships (Customizable by user) | +|taskDefinitionId | b3a31406-2a15-4c9a-b25b-a658fa5f07fc | ++++```Example for usage within the workflow +{ + "category": "leaver", + "displayName": "Remove user from all groups", + "description": "Remove user from all Azure AD groups memberships", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc", + "arguments": [] +} ++``` ++### Remove User from Teams ++Allows a user to be removed from one or multiple static teams. You're able to customize the task name and description for this task in the Azure portal. ++For Microsoft Graph the parameters for the **Remove User from Teams** task are as follows: ++|Parameter |Definition | +||| +|category | joiner,leaver | +|displayName | Remove user from selected Teams (Customizable by user) | +|description | Remove user from membership of selected Teams (Customizable by user) | +|taskDefinitionId | 06aa7acb-01af-4824-8899-b14e5ed788d6 | +|arguments | Argument contains a name parameter that is the "teamID", and a value parameter which is the Teams ID of the Teams you are removing the user from.
| +++```Example for usage within the workflow +{ + "category": "joiner,leaver", + "displayName": "Remove user from selected Teams", + "description": "Remove user from membership of selected Teams", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "06aa7acb-01af-4824-8899-b14e5ed788d6", + "arguments": [ + { + "name": "teamID", + "value": "TeamId1, TeamId2, TeamId3, ..." + } + ] +} ++``` ++### Remove users from all teams ++Allows users to be removed from every static team they are a member of. You're able to customize the task name and description for this task in the Azure portal. ++For Microsoft Graph the parameters for the **Remove users from all teams** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Remove user from all Teams memberships (Customizable by user) | +|description | Remove user from all Teams (Customizable by user) | +|taskDefinitionId | 81f7b200-2816-4b3b-8c5d-dc556f07b024 | ++++```Example for usage within the workflow +{ + "category": "leaver", + "description": "Remove user from all Teams", + "displayName": "Remove user from all Teams memberships", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024", + "arguments": [] +} ++``` ++### Remove all license assignments from User ++Allows all direct license assignments to be removed from a user. For group-based license assignments, you would run a task to remove the user from the group the license assignment is part of. ++You're able to customize the task name and description for this task in the Azure portal.
++For Microsoft Graph the parameters for the **Remove all license assignment from user** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Remove all licenses for user (Customizable by user) | +|description | Remove all licenses assigned to the user (Customizable by user) | +|taskDefinitionId | 8fa97d28-3e52-4985-b3a9-a1126f9b8b4e | ++++```Example for usage within the workflow +{ + "category": "leaver", + "displayName": "Remove all licenses for user", + "description": "Remove all licenses assigned to the user", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e", + "arguments": [] +} ++``` ++### Delete User ++Allows cloud-only user accounts to be deleted. You're able to customize the task name and description for this task in the Azure portal. +++For Microsoft Graph the parameters for the **Delete User** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Delete user account (Customizable by user) | +|description | Delete user account in Azure AD (Customizable by user) | +|taskDefinitionId | 8d18588d-9ad3-4c0f-99d0-ec215f0e3dff | ++++```Example for usage within the workflow +{ + "category": "leaver", + "displayName": "Delete user account", + "description": "Delete user account in Azure AD", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff", + "arguments": [] +} ++``` ++### Send email to manager before user last day ++Allows an email to be sent to a user's manager before their last day. You're able to customize the task name and the description for this task in the Azure portal. +++The Azure AD prerequisites to run the **Send email before user last day** task are: ++- A populated manager attribute for the user. +- A populated manager's mail attribute for the user.
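The manager-email tasks share the same two prerequisites: a populated manager attribute and a populated mail attribute on that manager. A quick pre-check can be sketched as follows; the dictionary shape is hypothetical, since in practice these are attributes on the Azure AD user object.

```python
# Illustrative prerequisite check for the email-to-manager tasks: the user
# needs both a manager and that manager's mail attribute populated.
# The dictionary shape is hypothetical, for illustration only.
def meets_email_prerequisites(user):
    manager = user.get("manager") or {}
    return bool(manager.get("mail"))
```

A user with no manager, or a manager without a mail attribute, would fail this check, and the email task could not complete for them.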
++For Microsoft Graph the parameters for the **Send email before user last day** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Send email before user's last day (Customizable by user) | +|description | Send offboarding email to user's manager before the last day of work (Customizable by user) | +|taskDefinitionId | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 | ++```Example for usage within the workflow +{ + "category": "leaver", + "displayName": "Send email before user's last day", + "description": "Send offboarding email to user's manager before the last day of work", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "52853a3e-f4e5-4eb8-bb24-1ac09a1da935", + "arguments": [] +} ++``` ++### Send email on users last day ++Allows an email to be sent to a user's manager on their last day. You're able to customize the task name and the description for this task in the Azure portal. ++The Azure AD prerequisites to run the **Send email on user last day** task are: ++- A populated manager attribute for the user. +- A populated manager's mail attribute for the user.
++For Microsoft Graph the parameters for the **Send email on user last day** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Send email on user's last day (Customizable by user) | +|description | Send offboarding email to user's manager on the last day of work (Customizable by user) | +|taskDefinitionId | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 | ++```Example for usage within the workflow +{ + "category": "leaver", + "displayName": "Send email on user's last day", + "description": "Send offboarding email to user's manager on the last day of work", + "isEnabled": true, + "continueOnError": true, + "taskDefinitionId": "9c0a1eaf-5bda-4392-9d9e-6e155bb57411", + "arguments": [] +} ++``` ++### Send offboarding email to users manager after their last day ++Allows an email containing offboarding information to be sent to the user's manager after their last day. You're able to customize the task name and description for this task in the Azure portal. ++The Azure AD prerequisites to run the **Send offboarding email to users manager after their last day** task are: ++- A populated manager attribute for the user. +- A populated manager's mail attribute for the user.
+++For Microsoft Graph the parameters for the **Send offboarding email to user's manager after their last day** task are as follows: ++|Parameter |Definition | +||| +|category | leaver | +|displayName | Send offboarding email to user's manager after the last day of work (Customizable by user) | +|description | Send email after user's last day (Customizable by user) | +|taskDefinitionId | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce | ++```Example for usage within the workflow +{ + "category": "leaver", + "continueOnError": true, + "displayName": "Send offboarding email to user's manager after the last day of work", + "description": "Send email after user's last day", + "isEnabled": true, + "taskDefinitionId": "6f22ddd4-b3a5-47a4-a846-0d7c201a49ce", + "arguments": [] +} ++``` ++## Next steps ++- [Manage lifecycle workflows properties](manage-workflow-properties.md) +- [Manage lifecycle workflow versions](delete-lifecycle-workflow.md) |
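The three offboarding email tasks above share the same request shape and differ only in their `taskDefinitionId`. As an illustration, a helper along these lines could assemble any of them for a workflow's `tasks` array (the function and its names are hypothetical and not part of Microsoft Graph or any Microsoft SDK; the IDs are taken from the tables in this article):

```python
# Hypothetical helper -- not part of Microsoft Graph or any Microsoft SDK.
# The taskDefinitionIds below are copied from the tables in this article.
LEAVER_EMAIL_TASK_IDS = {
    "before_last_day": "52853a3e-f4e5-4eb8-bb24-1ac09a1da935",
    "on_last_day": "9c0a1eaf-5bda-4392-9d9e-6e155bb57411",
    "after_last_day": "6f22ddd4-b3a5-47a4-a846-0d7c201a49ce",
}

def build_leaver_email_task(timing, display_name, description):
    """Build one entry for a workflow's 'tasks' array."""
    return {
        "category": "leaver",
        "continueOnError": True,
        "displayName": display_name,
        "description": description,
        "isEnabled": True,
        "taskDefinitionId": LEAVER_EMAIL_TASK_IDS[timing],
        "arguments": [],  # these email tasks take no arguments
    }
```

For example, `build_leaver_email_task("on_last_day", "Send email on user's last day", "Send offboarding email to user's manager on the last day of work")` produces a dictionary matching the JSON example for that task.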
active-directory | Lifecycle Workflow Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md | + + Title: Workflow Templates and categories - Azure Active Directory +description: Conceptual article discussing workflow templates and categories with Lifecycle Workflows ++++++ Last updated : 07/06/2022++++# Lifecycle Workflows templates (Preview) +++Lifecycle Workflows allows you to automate the lifecycle management process for your organization by creating workflows that contain both built-in tasks and custom task extensions. These workflows, and the tasks within them, all fall into categories based on the Joiner-Mover-Leaver (JML) model of lifecycle management. To make this process even more efficient, Lifecycle Workflows also provides you with templates, which you can use to accelerate the setup, creation, and configuration of common lifecycle management processes. You can create workflows based on these templates as is, or you can customize them even further to match the requirements for users within your organization. In this article, you'll find the complete list of workflow templates, common template parameters, default template parameters for specific templates, and the list of compatible tasks for each template. For full task definitions, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md). 
+++## Lifecycle Workflow Templates ++Lifecycle Workflows currently have six built-in templates you can use or customize: +++The list of templates is as follows: ++- [Onboard pre-hire employee](lifecycle-workflow-templates.md#onboard-pre-hire-employee) +- [Onboard new hire employee](lifecycle-workflow-templates.md#onboard-new-hire-employee) +- [Real-time employee termination](lifecycle-workflow-templates.md#real-time-employee-termination) +- [Pre-Offboarding of an employee](lifecycle-workflow-templates.md#pre-offboarding-of-an-employee) +- [Offboard an employee](lifecycle-workflow-templates.md#offboard-an-employee) +- [Post-Offboarding of an employee](lifecycle-workflow-templates.md#post-offboarding-of-an-employee) ++For a complete guide on creating a new workflow from a template, see: [Tutorial: On-boarding users to your organization using Lifecycle workflows with Azure portal](tutorial-onboard-custom-workflow-portal.md). ++### Onboard pre-hire employee ++The **Onboard pre-hire employee** template is designed to configure tasks that must be completed before an employee's start date. +++The default specific parameters and properties for the **Onboard pre-hire employee** template are as follows: ++++|parameter |description |Customizable | +|||| +|Category | Joiner | ❌ | +|Trigger Type | Trigger and Scope Based | ❌ | +|Days from event | -7 | ✔️ | +|Event timing | Before | ❌ | +|Event User attribute | EmployeeHireDate | ❌ | +|Scope type | Rule based | ❌ | +|Execution conditions | (department eq 'Marketing') | ✔️ | +|Tasks | **Generate TAP And Send Email** | ✔️ | +++### Onboard new hire employee ++The **Onboard new hire employee** template is designed to configure tasks that will be completed on an employee's start date. 
+++The default specific parameters for the **Onboard new hire employee** template are as follows: +++|parameter |description |Customizable | +|||| +|Category | Joiner | ❌ | +|Trigger Type | Trigger and Scope Based | ❌ | +|Days from event | 0 | ✔️ | +|Event timing | On | ❌ | +|Event User attribute | EmployeeHireDate | ❌ | +|Scope type | Rule based | ❌ | +|Execution conditions | (department eq 'Marketing') | ✔️ | +|Tasks | **Add User To Group**, **Enable User Account**, **Send Welcome Email** | ✔️ | ++++### Real-time employee termination ++The **Real-time employee termination** template is designed to configure tasks that will be completed immediately when an employee is terminated. +++The default specific parameters for the **Real-time employee termination** template are as follows: +++|parameter |description |Customizable | +|||| +|Category | Leaver | ❌ | +|Trigger Type | On-demand | ❌ | +|Tasks | **Remove user from all groups**, **Delete User Account**, **Remove user from all Teams** | ✔️ | ++> [!NOTE] +> As this template is designed to run on-demand, no execution condition is present. ++++### Pre-Offboarding of an employee ++The **Pre-Offboarding of an employee** template is designed to configure tasks that will be completed before an employee's last day of work. +++The default specific parameters for the **Pre-Offboarding of an employee** template are as follows: +++|parameter |description |Customizable | +|||| +|Category | Leaver | ❌ | +|Trigger Type | Trigger and Scope Based | ❌ | +|Days from event | 7 | ✔️ | +|Event timing | Before | ❌ | +|Event User attribute | employeeLeaveDateTime | ❌ | +|Scope type | Rule based | ❌ | +|Execution condition | None | ✔️ | +|Tasks | **Remove user from selected groups**, **Remove user from selected Teams** | ✔️ | +++++### Offboard an employee ++The **Offboard an employee** template is designed to configure tasks that will be completed on an employee's last day of work. 
+++The default specific parameters for the **Offboard an employee** template are as follows: +++|parameter |description |Customizable | +|||| +|Category | Leaver | ❌ | +|Trigger Type | Trigger and Scope Based | ❌ | +|Days from event | 0 | ✔️ | +|Event timing | On | ❌ | +|Event User attribute | employeeLeaveDateTime | ❌ | +|Scope type | Rule based | ❌ | +|Execution condition | (department eq 'Marketing') | ✔️ | +|Tasks | **Disable User Account**, **Remove user from all groups**, **Remove user from all Teams** | ✔️ | +++### Post-Offboarding of an employee ++The **Post-Offboarding of an employee** template is designed to configure tasks that will be completed after an employee's last day of work. +++The default specific parameters for the **Post-Offboarding of an employee** template are as follows: ++|parameter |description |Customizable | +|||| +|Category | Leaver | ❌ | +|Trigger Type | Trigger and Scope Based | ❌ | +|Days from event | 7 | ✔️ | +|Event timing | After | ❌ | +|Event User attribute | employeeLeaveDateTime | ❌ | +|Scope type | Rule based | ❌ | +|Execution condition | (department eq 'Marketing') | ✔️ | +|Tasks | **Remove all licenses for user**, **Remove user from all Teams**, **Delete User Account** | ✔️ | +++++## Next steps ++- [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md) +- [Create a Lifecycle workflow](create-lifecycle-workflow.md) ++ |
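The template defaults in the tables above map directly onto a workflow's execution conditions in Microsoft Graph. As a sketch (assuming the public `triggerAndScopeBasedConditions` schema from the identityGovernance API; verify the exact `@odata.type` values against the current API reference before use), the **Post-Offboarding of an employee** defaults could be expressed as:

```python
# Sketch: the Post-Offboarding template defaults expressed as a Microsoft Graph
# executionConditions payload. The @odata.type values follow the public
# identityGovernance schema; confirm against the current API before use.
post_offboarding_conditions = {
    "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
    "scope": {
        "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
        "rule": "(department eq 'Marketing')",  # customizable execution condition
    },
    "trigger": {
        "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
        "timeBasedAttribute": "employeeLeaveDateTime",
        "offsetInDays": 7,  # "Days from event": 7, "Event timing": After
    },
}
```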
active-directory | Lifecycle Workflow Versioning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-versioning.md | + + Title: Workflow Versioning - Azure Active Directory +description: An article discussing Lifecycle workflow versioning and history ++++++ Last updated : 07/06/2022++++# Lifecycle Workflows Versioning ++++Workflows created using Lifecycle Workflows can be updated as needed to satisfy organizational requirements in terms of auditing the lifecycle of users in your organization. To manage updates in workflows, Lifecycle Workflows introduce the concept of workflow versioning. Workflow versions are new versions of existing workflows, triggered by updating a workflow's execution conditions or tasks. Workflow versions can change the actions, or even the scope, of an existing workflow. Understanding how workflow versioning is handled during the workflow update process allows you to strategically set up workflows so that workflow tasks and conditions are always relevant for users processed by a workflow. +++## Versioning benefits ++Versioning with Lifecycle Workflows provides many benefits over the alternative of creating a new workflow for each use case. These benefits improve both troubleshooting and record-keeping capabilities in the following ways: ++- **Long-term retention**- Versioning allows for longer retention of workflow information than the audit logs alone. While the audit logs only store information from the previous 30 days, with versioning you're able to keep track of workflow details from creation. +- **Traceability**- Allows tracking of which specific version of a workflow processed a user. ++## Workflow properties and versions ++While updates to workflows can trigger the creation of a new version, this isn't always the case. 
There are parameters of workflows, known as basic properties, that can be updated without a new version of the workflow being created. The list of these parameters is as follows: ++- displayName +- description +- isEnabled +- IsSchedulingEnabled +++You'll find these corresponding parameters in the Azure portal under the **Properties** section of the workflow you're updating. ++For a step-by-step guide on updating these properties using both the Azure portal and the API via Microsoft Graph, see: [Manage workflow properties](manage-workflow-properties.md). ++Properties that will trigger the creation of a new version are as follows: ++- tasks +- executionConditions + +++While new versions of these workflows are made as soon as you make the updates in the Azure portal, making a new version of a workflow using the API with Microsoft Graph requires running the workflow creation call again with the changes included. For a step-by-step guide for updating either tasks or execution conditions, see: [Manage Workflow Versions](manage-workflow-tasks.md). ++> [!NOTE] +> If the workflow is on-demand, the configuration information associated with execution conditions will not be present. ++## What details are contained in workflow version history ++Unlike with changing basic properties of a workflow, newly created workflow versions can be vastly different from previous versions. Tasks can be added or removed, and who the workflow runs for can be different. Because of the vast changes that can happen to a workflow between versions, version details give detailed information about not only the current version of the workflow, but also its previous iterations. ++Details contained in version information as shown in the Azure portal: ++++Detailed **Version information** is as follows: +++|parameter |description | +||| +|Version Number | An integer denoting which version of the workflow the information is for. Sequentially goes up with each new workflow version. 
| +|Last modified date | The last time the workflow was updated. For previous versions of workflows, the last modified date will always be the time the next version was created. | +|Last modified by | Who last modified this workflow version. | +|Created date | The date and time for when a workflow version was created. | +|Created by | Who created this specific version of the workflow. | +|Name | Name of the workflow at this version. | +|Description | Description of the workflow at this version. | +|Category | Category of the workflow. | +|Execution Conditions | Defines who the workflow runs for, and when, in this version. | +|Tasks | The tasks present in this workflow version. If viewing through the API, you're also able to see task arguments. For specific task definitions, see: [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md) | ++++## Next steps ++- [Manage workflow Properties (Preview)](manage-workflow-properties.md) +- [Manage workflow versions (Preview)](manage-workflow-tasks.md) + |
active-directory | Lifecycle Workflows Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflows-deployment.md | + + Title: Plan a Lifecycle Workflow deployment +description: Planning guide for a successful Lifecycle Workflow deployment in Azure AD. ++documentationCenter: '' +++editor: +++ na ++ Last updated : 04/16/2021++++++# Plan a Lifecycle Workflow deployment ++[Lifecycle Workflows](what-are-lifecycle-workflows.md) help your organization to manage Azure AD users by increasing automation. With Lifecycle Workflows, you can: ++- **Extend** your HR-driven provisioning process with other workflows that simplify and automate tasks. +- **Centralize** your workflow process so you can easily create and manage workflows all in one location. +- **Troubleshoot** workflow scenarios with the Workflow history and Audit logs with minimal effort. +- **Manage** user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles is minimized. +- **Reduce** or remove manual tasks that were done in the past with automated Lifecycle Workflows. +- **Apply** Logic Apps to extend workflows for more complex scenarios using your existing Logic Apps. ++Lifecycle Workflows are an [Azure AD Identity Governance](identity-governance-overview.md) capability. The other capabilities are [entitlement management](entitlement-management-overview.md), [access reviews](access-reviews-overview.md), [Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md), and [terms of use](../conditional-access/terms-of-use.md). Together, they help you address these questions: ++ - Which users should have access to which resources? + - What are those users doing with that access? + - Is there effective organizational control for managing access? + - Can auditors verify that the controls are working? 
+ - Are users ready to go on day one or do they have access removed in a timely manner? + +Planning your Lifecycle Workflow deployment is essential to make sure you achieve your desired governance strategy for users in your organization. ++For more information on deployment plans, see [Azure AD deployment plans](../fundamentals/active-directory-deployment-plans.md). ++## Licenses ++++>[!Note] +>Be aware that if your license expires, any workflows that you have created will stop working. +> +>Workflows that are in progress when a license expires will continue to execute, but no new ones will be processed. ++### Plan the Lifecycle Workflow deployment project ++Consider your organizational needs to determine the strategy for deploying Lifecycle Workflows in your environment. ++### Engage the right stakeholders ++When technology projects fail, they typically do so because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md) and that project roles are clear. ++For Lifecycle Workflows, you'll likely include representatives from the following teams within your organization: ++- **IT administration** manages your IT infrastructure and administers your cloud investments and software as a service (SaaS) apps. This team: ++ * Reviews how Lifecycle Workflows apply to infrastructure and apps, including Microsoft 365 and Azure AD. + * Schedules and runs Lifecycle Workflows on users. + * Ensures that programmatic Lifecycle Workflows, via Microsoft Graph or extensibility, are governed and reviewed. +++- **Security Owner** ensures that the plan will meet the security requirements of your organization. This team: + - Ensures that Lifecycle Workflows meet organizational security policies. ++ - **Compliance manager** ensures that the organization follows internal policy and complies with regulations. 
This team: ++ * Requests or schedules new Lifecycle Workflow reviews. + * Assesses processes and procedures for reviewing Lifecycle Workflows, which include documentation and record keeping for compliance. + * Reviews results of past reviews for the most critical resources. +- **HR Representative** - Assists with attribute mapping and population in HR provisioning scenarios. This team: + * Helps determine attributes that will be used to populate employeeHireDate and employeeLeaveDateTime. + * Ensures source attributes are populated and have values. + * Identifies and suggests alternate attributes that could be mapped to employeeHireDate and employeeLeaveDateTime. ++- **Development teams** build and maintain applications for your organization. This team: + * Develops custom workflows using Microsoft Graph. + * Integrates Lifecycle Workflows with Logic Apps via extensibility. ++++### Plan communications ++Communication is critical to the success of any new business process. Proactively communicate to users how and when their experience will change. Tell them how to gain support if they experience issues. ++### Communicate changes in accountability ++Lifecycle Workflows support shifting responsibility of manual processes to business owners. Decoupling these processes from the IT department drives more accuracy and automation. This shift is a cultural change in the resource owner's accountability and responsibility. Proactively communicate this change and ensure resource owners are trained and able to use the insights to make good decisions. ++++## Introduction to Lifecycle Workflows ++This section introduces Lifecycle Workflow concepts you should know before you plan your deployment. +++++## Prerequisites to deploying Lifecycle Workflows +The following is important information about your organization and the technologies that need to be in place prior to deploying Lifecycle Workflows. Ensure that you can answer yes to each of the items before attempting to deploy Lifecycle Workflows. 
++|Item|Description|Documentation| +|--|--|--| +|Inbound Provisioning|You have a process to create user accounts for employees in Azure AD such as HR inbound, SuccessFactors, or MIM.<br><br> Alternatively, you have a process to create user accounts in Active Directory and those accounts are provisioned to Azure AD.|[Workday to Active Directory](../saas-apps/workday-inbound-tutorial.md)<br><br>[Workday to Azure AD](../saas-apps/workday-inbound-tutorial.md)<br><br>[SuccessFactors to Active Directory](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)</br></br>[SuccessFactors to Azure AD](../saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md)<br><br>[Azure AD Connect](../hybrid/whatis-azure-ad-connect-v2.md)<br><br>[Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md)| +|Attribute synchronization|The accounts in Azure AD have the employeeHireDate and employeeLeaveDateTime attributes populated. The values may be populated when the accounts are created from an HR system or synchronized from AD using Azure AD Connect or cloud sync. You also have any additional attributes that will be used to determine the scope, such as department, populated with data, or the ability to populate them.|[How to synchronize attributes for Lifecycle Workflows](how-to-lifecycle-workflow-sync-attributes.md) ++## Understanding parts of a workflow +Before you begin planning a Lifecycle Workflow deployment, you should become familiar with the parts of a workflow and the terminology around Lifecycle Workflows. ++The [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md) document uses the portal to explain the parts of a workflow. The [Developer API reference Lifecycle Workflows](lifecycle-workflows-developer-reference.md) document uses a Microsoft Graph example to explain the parts of a workflow. ++You can use these documents to become familiar with the parts of a workflow prior to deploying workflows. 
++## Limitations and constraints +The following table provides information that you need to be aware of as you create and deploy Lifecycle workflows. ++|Item|Description| +|--|--| +|Workflows|50 workflow limit per tenant| +|Number of custom tasks|Limit of 25 per workflow| +|Value range for offsetInDays|Between -60 and 60 days| +|Workflow execution schedule|Default every 3 hours - can be set to run anywhere from 1 to 24 hours| +|Custom task extensions|Limit of 100| +|On-demand user limit|You can run an on-demand workflow against a maximum of 10 users| +|Extensibility callback timeout limit|Min 3 minutes - Maximum 5 hours| ++The following is additional information you should be aware of. ++ - You cannot enable the schedule for the Real-Time Leaver scenario. This is by design. +++++## Lifecycle workflow creation checklist +The following table provides a quick checklist of steps you can use when designing and planning your workflows. ++|Step|Description| +|--|--| +|[Determine your scenario](#determine-your-scenario)|Determine what scenario you're addressing with a workflow| +|[Determine the execution conditions](#determine-the-execution-conditions)|Determine who and when the workflow will run| +|[Review the tasks](#review-the-tasks)|Review and add additional tasks to the workflow| +|[Create your workflow](#create-your-workflow)|Create your workflow after planning and design.| +|[Plan a pilot](#plan-a-pilot)|Plan to pilot, run, and test your workflow.| ++## Determine your scenario +Before building a Lifecycle Workflow in the portal, you should determine which scenario or scenarios you wish to deploy. You can use the table below to see a current list of the available scenarios. These are based on the templates that are available in the portal and list the tasks associated with each one. 
++|Scenario|Pre-defined Tasks| +|--|--| +|Onboard pre-hire employee| Generate TAP and Send Email| +|Onboard new hire employee|Enable User Account</br>Send Welcome Email</br>Add User To Groups| +|Real-time employee termination|Remove user from all groups</br>Remove user from all Teams</br>Delete User Account| +|Pre-Offboarding of an employee|Remove user from selected groups</br>Remove user from selected Teams| +|Offboard an employee|Disable User Account</br>Remove user from all groups</br>Remove user from all Teams| +|Post-Offboarding of an employee|Remove all licenses for user</br>Remove user from all Teams</br>Delete User Account| ++For more information on the built-in templates, see [Lifecycle Workflow templates](lifecycle-workflow-templates.md). +++## Determine the execution conditions +Now that you've determined your scenarios, you need to look at which users in your organization the scenarios will apply to. ++An execution condition is the part of a workflow that defines the scope of **who** and the trigger of **when** a workflow will be performed. ++The [scope](understanding-lifecycle-workflows.md#configure-scope) determines who the workflow runs against. This is defined by a rule that will filter users based on a condition. For example, the rule `"rule": "(department eq 'sales')"` will run the tasks only on users who are members of the sales department. ++The [trigger](understanding-lifecycle-workflows.md#trigger-details) determines when the workflow will run. This can either be on-demand, which is immediate, or time-based. Most of the pre-defined templates in the portal are time-based. ++### Attribute information +The scope of a workflow uses attributes under the rule section. You can add the following extra conditionals to further refine **who** the tasks are applied to. + - And + - And not + - Or + - Or not + +You can also choose from numerous user attributes. 
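To make the scope rule concrete, here's a minimal, hypothetical evaluator for a single clause such as `(department eq 'sales')`. It's an illustration only; the actual service parses full filter expressions, including the And/Or conditionals listed above:

```python
import re

def user_in_scope(rule, user):
    """Evaluate a single "(attribute eq 'value')" clause against a user dict.
    Sketch only -- the real service supports much richer filter syntax."""
    match = re.fullmatch(r"\((\w+) eq '([^']*)'\)", rule)
    if not match:
        raise ValueError("this sketch only handles a single eq clause")
    attribute, value = match.groups()
    return user.get(attribute) == value

# A workflow scoped with "(department eq 'sales')" would run only on the first user:
users = [{"department": "sales"}, {"department": "marketing"}]
in_scope = [u for u in users if user_in_scope("(department eq 'sales')", u)]
```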
++[](media/lifecycle-workflows-deployment/attribute-1.png#lightbox) ++However, before selecting an attribute to use in your execution condition, you need to ensure that the attribute is either populated with data or that you can begin populating it with the required data. ++Not all of these attributes are populated by default, so you should verify with your HR administrator or IT administrators when using HR inbound cloud-only provisioning, Azure AD Connect, or cloud sync. ++### Time information +The following is some important information regarding time zones that you should be aware of when designing workflows. +- Workday and SAP SF will always send the time in Coordinated Universal Time (UTC). +- If you're in a single time zone, it's recommended that you hardcode the time portion to something that works for you. An example would be 5am for new hire scenarios and 10pm for last day of work scenarios. +- It's recommended that if you're using a Temporary Access Pass (TAP), you set the maximum lifetime to 24 hours. Doing this will help ensure that the TAP hasn't expired after being sent to an employee who may be in a different time zone. For more information, see [Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy). ++For more information, see [How to synchronize attributes for Lifecycle Workflows](../governance/how-to-lifecycle-workflow-sync-attributes.md). ++## Review the tasks +Now that we've determined the scenario and the who and when, you should consider whether the pre-defined tasks are sufficient or whether you'll need additional tasks. The table below has a list of the pre-defined tasks that are currently in the portal. Use this table to determine if you want to add more tasks. 
++|Task|Description|Relevant Scenarios| +|--|--|--| +|Add user to groups|Add user to selected groups| Joiner - Leaver| +|Add user to selected teams| Add user to Teams| Joiner - Leaver| +|Delete User Account| Delete user account in Azure AD| Leaver| +|Disable User Account| Disable user account in the directory| Joiner - Leaver| +|Enable User Account| Enable user account in the directory| Joiner - Leaver| +|Generate TAP and Send Email| Generate Temporary Access Pass and send via email to user's manager| Joiner| +|Remove all licenses of user| Remove all licenses assigned to the user| Leaver| +|Remove user from all groups| Remove user from all Azure AD group memberships| Leaver| +|Remove user from all Teams| Remove user from all Teams memberships| Leaver| +|Remove user from selected groups| Remove user from membership of selected Azure AD groups| Joiner - Leaver| +|Remove user from selected Teams| Remove user from membership of selected Teams| Joiner - Leaver| +|Run a Custom Task Extension| Run a Custom Task Extension to call out to an external system| Joiner - Leaver| +|Send email after user's last day| Send offboarding email to user's manager after the last day of work| Leaver| +|Send email before user's last day| Send offboarding email to user's manager before the last day of work| Leaver| +|Send email on user's last day| Send offboarding email to user's manager on the last day of work| Leaver| +|Send Welcome Email| Send welcome email to new hire| Joiner| +++For more information on tasks, see [Lifecycle Workflow tasks](lifecycle-workflow-tasks.md). ++### Group and team tasks +If you're using a group or team task, you'll need to specify the group or groups in the workflow. In the screenshot below, you'll see the yellow triangle on the task indicating that it's missing information. ++ [](media/lifecycle-workflows-deployment/group-1.png#lightbox) ++By clicking on the task, you'll be presented with a navigation bar to add or remove groups. 
Select the "x groups selected" link to add groups. ++ [](media/lifecycle-workflows-deployment/group-2.png#lightbox) ++### Custom task extensions +Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you'll be able to utilize the concept of custom task extensions to call out to external systems as part of a Lifecycle Workflow. ++When creating a custom task extension, it can interact with Lifecycle Workflows in one of three ways: ++- **Fire-and-forget scenario**- The Logic App is started, and the sequential task execution immediately continues with no response expected from the Logic App. +- **Sequential task execution waiting for response from the Logic App** - The Logic App is started, and the sequential task execution waits on the response from the Logic App. +- **Sequential task execution waiting for the response of a 3rd party system**- The Logic App is started, and the sequential task execution waits on the response from a 3rd party system that triggers the Logic App to tell the Custom Task extension whether or not it ran successfully. ++For more information on custom extensions, see [Lifecycle Workflow extensibility (Preview)](lifecycle-workflow-extensibility.md). ++## Create your workflow +Now that you have designed and planned your workflow, you can create it in the portal. For detailed information on creating a workflow, see [Create a Lifecycle workflow](create-lifecycle-workflow.md). +++## Plan a pilot ++We encourage customers to initially pilot Lifecycle Workflows with a small group of users or a single test user. Piloting can help you adjust processes and communications as needed. 
It can help you increase users' and reviewers' ability to meet security and compliance requirements. ++In your pilot, we recommend that you: ++* Start with Lifecycle Workflows where the results are applied to a small subset of users. +* Monitor audit logs to ensure all events are properly audited. ++For more information, see [Best practices for a pilot](../fundamentals/active-directory-deployment-plans.md). ++++#### Test and run the workflow +Once you've created a workflow, you should test it by running the workflow [on-demand](on-demand-workflow.md). ++Using the on-demand feature will allow you to test and evaluate whether the Lifecycle Workflow is working as intended. ++Once you have completed testing, you can either rework the Lifecycle Workflow or get ready for a broader distribution. ++### Audit logs +You can also get more information from the audit logs. These logs can be accessed in the portal under Azure Active Directory > Monitoring. For more information, see [Audit logs in Azure AD](../reports-monitoring/concept-audit-logs.md) and [Lifecycle workflow history](lifecycle-workflow-history.md). ++++#### Example Lifecycle Workflow plan ++|Stage|Description| +| - | - | +|Determine the scenario| A pre-hire workflow that sends an email to the new employee's manager. | +|Determine the execution conditions|The workflow will run on new employees in the sales department, two (2) days before the employeeHireDate.| +|Review the tasks|We'll use the pre-defined tasks in the workflow. No extra tasks will be added.| +|Create the workflow in the portal|Use the pre-defined template for new hire in the portal.| +|Enable and test the workflow| Use the on-demand feature to test the workflow on one user.| +|Review the test results|Review the test results and ensure the Lifecycle Workflow is working as intended.| +|Roll out the workflow to a broader audience|Communicate with stakeholders, letting them know that it's going live and that HR will no longer need to send an email to the hiring manager. 
++## Next steps ++Learn about the following related technologies: ++* [How to synchronize attributes for Lifecycle Workflows](how-to-lifecycle-workflow-sync-attributes.md) +* [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md) +* [Lifecycle Workflow templates](lifecycle-workflow-templates.md) |
active-directory | Lifecycle Workflows Developer Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflows-developer-reference.md | + + Title: 'Developer API reference lifecycle workflows- Azure Active Directory' +description: Provides an API reference guide for Developers using Lifecycle Workflows. ++++++ Last updated : 01/20/2022++++++# Developer API reference lifecycle workflows +The following reference doc provides an overview of how a lifecycle workflow is constructed. The examples here use workflows that are created using Microsoft Graph and not in the portal. The concepts, however, are the same. For information using the portal, see [Understanding lifecycle workflows](understanding-lifecycle-workflows.md). ++## Parts of a workflow +Lifecycle Workflows enables you to use workflows for managing the lifecycle of your Azure AD users. In order to create a workflow, you must specify either pre-defined or custom information. Pre-defined workflows can be selected in the Azure AD portal. ++A workflow can be broken down into three main parts. + - General information + - Task information + - Execution conditions ++ [](media/understanding-lifecycle-workflows/workflow-1.png#lightbox) ++### General information +This portion of a workflow covers basic information such as display name and a description of what the workflow does. ++This portion of the workflow supports the following information. ++| Parameters | Description | +|: |::| +|displayName|The name of the workflow| +|description|A description of the workflow| +|isEnabled|Boolean that determines whether the workflow or a task within a workflow is enabled| ++### Task information +The task section describes the actions that will be taken when a workflow is executed. The actual task is defined by the taskDefinitionID. ++Let's examine the tasks section of a sample workflow. 
++```Request body + "tasks":[ + { + "isEnabled":true, + "taskDefinitionId":"1b555e50-7f65-41d5-b514-5894a026d10d", + "displayName":"Generate TAP And Send Email", + "description":"Generate Temporary Access Password and send via email to user's manager", + "arguments":[ + { + "name": "tapLifetimeMinutes", + "value": "480" + }, + { + "name": "tapIsUsableOnce", + "value": "true" + } + ] + } + ], +``` +++This task uses 1b555e50-7f65-41d5-b514-5894a026d10d, which is the taskDefinitionID for [Generate Temporary Access Password and send via email to user's manager](lifecycle-workflow-tasks.md#generate-temporary-access-password-and-send-via-email-to-users-manager). This is a pre-defined task created by Microsoft that sends a user's manager an email containing a temporary access pass. This task requires the following additional arguments: ++|Parameter |Definition | +||| +|tapLifetimeMinutes | The lifetime of the temporaryAccessPass in minutes starting at startDateTime. Minimum 10, maximum 43200 (equivalent to 30 days). | +|tapIsUsableOnce | Determines whether the pass is limited to one-time use. If true, the pass can be used once; if false, the pass can be used multiple times within the temporaryAccessPass lifetime. | ++This portion of the workflow supports the following parameters. The arguments section depends on the actual task defined by the taskDefinitionID. 
++| Parameters | Description | +|: |::| +|isEnabled|Boolean that determines whether the workflow or a task within a workflow is enabled| +|tasks|The actions that the workflow will take when it's executed by the extensible lifecycle manager| +|taskDefinitionID|The unique ID corresponding to a supported task| +|arguments|Used to specify the activation duration of the TAP and toggle between one-time use or multiple uses| +++For additional information on pre-defined tasks, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md). ++++### Execution conditions ++The execution condition section of a workflow sets up: + - Who (scope) the workflow runs against + - When (trigger) the workflow runs ++Let's examine the execution conditions of a sample workflow. ++```Request body +{ + "executionConditions": { + "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions", + "scope": { + "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet", + "rule": "(department eq 'sales')" + }, + "trigger": { + "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger", + "timeBasedAttribute": "employeeHireDate", + "offsetInDays": -2 + } + } +} +``` ++The first portion, `microsoft.graph.identityGovernance.triggerAndScopeBasedConditions`, tells the workflow execution that there are two settings that comprise the execution condition: + - Scope, which determines the subject set for a workflow execution. + - Trigger, which determines when a workflow will be executed. ++The `triggerAndScopeBasedConditions` method is an extension of `microsoft.graph.identityGovernance.workflowExecutionConditions`, which is the base type for execution settings for a workflow. ++#### Scope ++Now, let's look at the first property, `scope`. The scope uses `microsoft.graph.identityGovernance.ruleBasedSubjectSet`. ++The `ruleBasedSubjectSet` is a scope based on a filter rule for identifying in-scope subjects. 
This determines who the workflow runs against. ++The `ruleBasedSubjectSet` method is an extension of `microsoft.graph.subjectSet`. ++The `ruleBasedSubjectSet` has the following properties: ++| Property | Description | +|: |::| +|rule|Filter rule for the scope, where the syntax is based on the filter query.| ++In our example above, the scope property is `"rule": "(department eq 'sales')"`. ++This means that `ruleBasedSubjectSet` will filter based on the rule property we set: the department attribute of a user equals sales. +++#### Trigger ++Now, let's look at the second property, `trigger`. The trigger uses `microsoft.graph.identityGovernance.timeBasedAttributeTrigger`. ++The `timeBasedAttributeTrigger` is a trigger based on a time-based attribute for initiating workflow execution. The combination of scope and trigger conditions determines when a workflow is executed and on which identities. ++The `timeBasedAttributeTrigger` method is an extension of `microsoft.graph.identityGovernance.workflowExecutionTrigger`, which is the base type for execution settings for a workflow. ++The `timeBasedAttributeTrigger` has the following properties: ++| Property | Description | +|: |::| +|timeBasedAttribute|Determines which time-based identity property to reference.| +|offsetInDays|How many days before or after the specified time-based attribute. For example, if the attribute is employeeHireDate and offsetInDays is -1, the workflow triggers one day before the employee hire date.| ++In our example above, the trigger properties are `"timeBasedAttribute": "employeeHireDate"` and `"offsetInDays": -2`. ++This means that our workflow will trigger two days before the value specified in the employeeHireDate attribute. 
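The `offsetInDays` arithmetic is simple date shifting; a minimal sketch of the trigger semantics (the helper name is invented for illustration and isn't part of any SDK):

```python
from datetime import date, timedelta

def workflow_trigger_date(attribute_value: date, offset_in_days: int) -> date:
    """Date a workflow triggers: the time-based attribute (for example,
    employeeHireDate) shifted by offsetInDays; negative means 'before'."""
    return attribute_value + timedelta(days=offset_in_days)

# An employeeHireDate of 2022-09-15 with offsetInDays -2 triggers on 2022-09-13.
print(workflow_trigger_date(date(2022, 9, 15), -2))
```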
++#### Summary +When we put the scope and trigger together, we get an execution condition that will: + - Carry out the tasks defined in the workflow, two days before the user's employeeHireDate, and only for users in the sales department. ++### Workflow parameter reference +The following table is a summary of the parameters of a workflow. You can use this as a reference for general workflow information or when creating and customizing workflows. ++| Parameters | Description | +|: |::| +|displayName|The name of the workflow| +|description|A description of the workflow| +|isEnabled|Boolean that determines whether the workflow or a task within a workflow is enabled| +|tasks|The actions that the workflow will take when it's executed by the extensible lifecycle manager| +|taskDefinitionID|The unique ID corresponding to a supported task| +|arguments|Used to specify the activation duration of the TAP and toggle between one-time use or multiple uses| +|executionConditions|Defines who (scope) the workflow runs against and when (trigger) it runs| +|scope|Defines who the workflow should run against| +|trigger|Defines when the workflow should run| +++### Workflow example ++The following is a full example of a workflow, in the form of a POST API call that creates a pre-hire workflow, which generates a TAP and sends it via email to the user's manager. 
++ ```http + POST https://graph.microsoft.com/beta/identityGovernance/lifecycleManagement/workflows + ``` ++```Request body +{ + "displayName":"Onboard pre-hire employee", + "description":"Configure pre-hire tasks for onboarding employees before their first day", + "isEnabled":false, + "tasks":[ + { + "isEnabled":true, + "taskDefinitionId":"1b555e50-7f65-41d5-b514-5894a026d10d", + "displayName":"Generate TAP And Send Email", + "description":"Generate Temporary Access Password and send via email to user's manager", + "arguments":[ + { + "name": "tapLifetimeMinutes", + "value": "480" + }, + { + "name": "tapIsUsableOnce", + "value": "true" + } + ] + } + ], + "executionConditions": { + "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions", + "scope": { + "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet", + "rule": "(department eq 'sales')" + }, + "trigger": { + "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger", + "timeBasedAttribute": "employeeHireDate", + "offsetInDays": -2 + } + } +} +``` +## Next steps +- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md) +- [Create a Lifecycle workflow](create-lifecycle-workflow.md) |
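The request body above can also be assembled programmatically before being POSTed to the endpoint shown. A minimal Python sketch, assuming an OAuth access token has already been acquired (the function name and token handling are illustrative, not part of any Microsoft SDK):

```python
import json

# Endpoint as shown in the example above.
GRAPH_WORKFLOWS_URL = "https://graph.microsoft.com/beta/identityGovernance/lifecycleManagement/workflows"

def build_prehire_workflow(group_rule: str, offset_in_days: int, tap_lifetime_minutes: int = 480) -> dict:
    """Assemble the request body for the pre-hire TAP workflow shown above."""
    return {
        "displayName": "Onboard pre-hire employee",
        "description": "Configure pre-hire tasks for onboarding employees before their first day",
        "isEnabled": False,
        "tasks": [{
            "isEnabled": True,
            "taskDefinitionId": "1b555e50-7f65-41d5-b514-5894a026d10d",
            "displayName": "Generate TAP And Send Email",
            "description": "Generate Temporary Access Password and send via email to user's manager",
            "arguments": [
                {"name": "tapLifetimeMinutes", "value": str(tap_lifetime_minutes)},
                {"name": "tapIsUsableOnce", "value": "true"},
            ],
        }],
        "executionConditions": {
            "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
            "scope": {
                "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
                "rule": group_rule,
            },
            "trigger": {
                "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
                "timeBasedAttribute": "employeeHireDate",
                "offsetInDays": offset_in_days,
            },
        },
    }

payload = json.dumps(build_prehire_workflow("(department eq 'sales')", -2))
# POST payload to GRAPH_WORKFLOWS_URL with an Authorization: Bearer <token> header,
# for example via requests.post(...) or the Microsoft Graph SDK.
```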
active-directory | Manage Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-lifecycle-workflows.md | + + Title: Manage lifecycle with Lifecycle workflows - Azure Active Directory +description: Learn how to manage user lifecycles with Lifecycle Workflows ++documentationcenter: '' +++editor: markwahl-msft +++ na ++ Last updated : 01/24/2021++++++# Manage user lifecycle with Lifecycle Workflows (preview) +With Lifecycle Workflows, you can easily ensure that users have the appropriate entitlements no matter where they fall in the Joiner-Mover-Leaver (JML) scenario. Before a new hire's start date, you can add them to a group. You can generate a temporary password that is sent to their manager to help speed up the onboarding process. You can enable a user account when they join on their hire date, and send a welcome email to them. When a user is moving to a different group, you can remove them from that group and add them to a new one. When a user leaves, you can also delete user accounts. ++## Prerequisites +++The following **Delegated permissions** and **Application permissions** are required for access to Lifecycle Workflows: ++> [!IMPORTANT] +> The Microsoft Graph API permissions shown below are currently hidden from user interfaces such as Graph Explorer and Azure AD's API permissions UI for app registrations. In such cases, you can fall back to the Entitlement Management permissions, which also work for Lifecycle Workflows ("EntitlementManagement.Read.All" and "EntitlementManagement.ReadWrite.All"). The Entitlement Management permissions will stop working with Lifecycle Workflows in future versions of the preview. 
++|Permission |Display String |Description |Admin Consent Required | +||||| +|LifecycleManagement.Read.All | Read all Lifecycle workflows, tasks, user states| Allows the app to list and read all workflows, tasks, and user states related to lifecycle workflows on behalf of the signed-in user.| Yes +|LifecycleManagement.ReadWrite.All | Read and write all lifecycle workflows, tasks, user states.| Allows the app to create, update, list, read, and delete all workflows, tasks, and user states related to lifecycle workflows on behalf of the signed-in user.| Yes +++++++## Language determination within email notifications ++When sending email notifications, Lifecycle Workflows can automatically set the language that is displayed. For language priority, Lifecycle Workflows use the following hierarchy: +- The user **preferredLanguage** property in the user object takes highest priority. +- The tenant **preferredLanguage** attribute takes next priority. +If neither can be determined, Lifecycle Workflows will default the language to English. ++## Supported languages in Lifecycle Workflows +++|Culture Code |Language | +||| +|en-us | English (United States) | +|ja-jp | Japanese (Japan) | +|de-de | German (Germany) | +|fr-fr | French (France) | +|pt-br | Portuguese (Brazil) | +|zh-cn | Chinese (Simplified, China) | +|zh-tw | Chinese (Traditional, Taiwan) | +|es-es | Spanish (Spain, International Sort) | +|ko-kr | Korean (Korea) | +|it-it | Italian (Italy) | +|nl-nl | Dutch (Netherlands) | +|ru-ru | Russian (Russia) | +|cs-cz | Czech (Czech Republic) | +|pl-pl | Polish (Poland) | +|tr-tr | Turkish (Turkey) | +|da-dk | Danish (Denmark) | +|en-gb | English (United Kingdom) | +|hu-hu | Hungarian (Hungary) | +|nb-no | Norwegian Bokmål (Norway) | +|pt-pt | Portuguese (Portugal) | +|sv-se | Swedish (Sweden) | ++## Supported user and query parameters ++Lifecycle Workflows support a rich set of user properties that are available on the user profile in Azure AD. 
Lifecycle Workflows also support many of the advanced query capabilities available in Graph API. This allows you, for example, to filter on the user properties when managing user execution conditions and making API calls. For more information about currently supported user properties, and query parameters, see: [User properties](/graph/aad-advanced-queries?tabs=http#user-properties) +++## Limits and constraints ++|Item |Limit | +||| +|Custom Workflows | 50 | +|Number of custom tasks | 25 per workflow | +|Value range for offsetInDays | Between -60 and 60 days | +|Default Workflow execution schedule | Every 3 hours | +++## Next steps +- [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md) +- [Create Lifecycle workflows](create-lifecycle-workflow.md) |
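The limits in the table above can be checked client-side before a workflow is submitted, so a request isn't rejected by the service. A hedged sketch (the validator is illustrative, not part of any Microsoft SDK):

```python
MAX_TASKS_PER_WORKFLOW = 25       # "Number of custom tasks" limit
OFFSET_MIN, OFFSET_MAX = -60, 60  # allowed offsetInDays range

def workflow_limit_violations(body: dict) -> list:
    """Return human-readable descriptions of any limit violations
    found in a workflow request body."""
    problems = []
    if len(body.get("tasks", [])) > MAX_TASKS_PER_WORKFLOW:
        problems.append("more than 25 tasks in the workflow")
    offset = body.get("executionConditions", {}).get("trigger", {}).get("offsetInDays")
    if offset is not None and not (OFFSET_MIN <= offset <= OFFSET_MAX):
        problems.append("offsetInDays outside the -60..60 range")
    return problems

ok = {"tasks": [], "executionConditions": {"trigger": {"offsetInDays": -2}}}
bad = {"tasks": [], "executionConditions": {"trigger": {"offsetInDays": 90}}}
print(workflow_limit_violations(ok), workflow_limit_violations(bad))
```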
active-directory | Manage Workflow Properties | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md | + + Title: Manage workflow properties - Azure Active Directory +description: This article guides a user through editing a workflow's properties using Lifecycle Workflows ++++++ Last updated : 02/15/2022+++++# Manage workflow properties (preview) ++Managing workflows can be accomplished in one of two ways: + - Updating the basic properties of a workflow without creating a new version of it + - Creating a new version of the updated workflow ++You can update the following basic information without creating a new workflow: + - Display name + - Description + - Whether or not the workflow is enabled ++If you change any other parameters, you must create a new version, as outlined in the [Managing workflow versions](manage-workflow-tasks.md) article. ++If done via the Azure portal, the new version is created automatically. If done using Microsoft Graph, you must manually create a new version of the workflow. For more information, see [Edit the properties of a workflow using Microsoft Graph](#edit-the-properties-of-a-workflow-using-microsoft-graph). ++## Edit the properties of a workflow using the Azure portal ++To edit the properties of a workflow using the Azure portal, follow these steps: ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Select **Azure Active Directory** and then select **Identity Governance**. ++1. On the left menu, select **Lifecycle workflows (Preview)**. ++1. On the left menu, select **Workflows (Preview)**. ++1. Here you'll see a list of all of your current workflows. Select the workflow that you want to edit. + + :::image type="content" source="media/manage-workflow-properties/manage-list.png" alt-text="Screenshot of the manage workflow list."::: ++6. To change the display name or description, select **Properties (Preview)**. 
++ :::image type="content" source="media/manage-workflow-properties/manage-properties.png" alt-text="Screenshot of the manage basic properties screen."::: ++7. Update the display name or description as you want. +> [!NOTE] +> Display names cannot be the same as those of other existing workflows. Each workflow must have a unique name. ++8. Select **Save**. +++## Edit the properties of a workflow using Microsoft Graph ++To view the list of current workflows, run the following API call: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/ +``` ++Lifecycle workflows can have their basic information such as "displayName", "description", and "isEnabled" edited by running this PATCH call and body: ++```http +PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id> +Content-type: application/json ++{ + "displayName":"<Unique workflow name string>", + "description":"<workflow description>", + "isEnabled": <true or false> +} ++``` +++++++## Next steps ++- [Manage workflow versions](manage-workflow-tasks.md) +- [Check the status of a workflow](check-status-workflow.md) |
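Because only `displayName`, `description`, and `isEnabled` can be changed by this PATCH (anything else requires a new workflow version), a small client-side guard can catch mistakes early. An illustrative sketch, not part of any SDK:

```python
# Properties that can be PATCHed without creating a new workflow version.
EDITABLE_PROPERTIES = {"displayName", "description", "isEnabled"}

def build_patch_body(**changes) -> dict:
    """Build a PATCH body, rejecting any property that would require
    a new version of the workflow instead."""
    unsupported = sorted(set(changes) - EDITABLE_PROPERTIES)
    if unsupported:
        raise ValueError(f"these properties require a new version: {unsupported}")
    return changes

print(build_patch_body(displayName="Onboarding v2", isEnabled=True))
```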
active-directory | Manage Workflow Tasks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md | + + Title: Manage workflow versions - Azure Active Directory +description: This article guides a user on managing workflow versions with Lifecycle Workflows ++++++ Last updated : 04/06/2022+++++# Manage workflow versions (Preview) ++Workflows created with Lifecycle Workflows are able to grow and change with the needs of your organization. Workflows exist as versions from creation. When you make changes beyond basic information, you create a new version of the workflow. For more information, see [Manage a workflow's properties](manage-workflow-properties.md). ++Changing a workflow's tasks or execution conditions requires the creation of a new version of that workflow. Tasks within workflows can be added, reordered, and removed at will. Updating a workflow's tasks or execution conditions within the Azure portal will trigger the creation of a new version of the workflow automatically. Making these updates in Microsoft Graph requires the new workflow version to be created manually. +++## Edit the tasks of a workflow using the Azure portal +++Tasks within workflows can be added, edited, reordered, and removed at will. To edit the tasks of a workflow using the Azure portal, complete the following steps: ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Select **Azure Active Directory** and then select **Identity Governance**. ++1. In the left menu, select **Lifecycle workflows (Preview)**. ++1. In the left menu, select **Workflows (Preview)**. + +1. On the left side of the screen, select **Tasks (Preview)**. ++1. You can add a task to the workflow by selecting the **Add task** button. ++ :::image type="content" source="media/manage-workflow-tasks/manage-tasks.png" alt-text="Screenshot of adding a task to a workflow." 
lightbox="media/manage-workflow-tasks/manage-tasks.png"::: ++1. You can enable and disable tasks as needed by using the **Enable** and **Disable** buttons. ++1. You can reorder the order in which tasks are executed in the workflow by selecting the **Reorder** button. ++ :::image type="content" source="media/manage-workflow-tasks/manage-tasks-reorder.png" alt-text="Screenshot of reordering tasks in a workflow."::: ++1. You can remove a task from a workflow by using the **Remove** button. ++1. After making changes, select **Save** to capture changes to the tasks. +++## Edit the execution conditions of a workflow using the Azure portal ++To edit the execution conditions of a workflow using the Azure portal, follow these steps: +++1. On the left menu of Lifecycle Workflows, select **Workflows (Preview)**. ++1. On the left side of the screen, select **Execution conditions (Preview)**. + :::image type="content" source="media/manage-workflow-tasks/execution-conditions-details.png" alt-text="Screenshot of the execution condition details of a workflow." lightbox="media/manage-workflow-tasks/execution-conditions-details.png"::: ++1. On this screen, you're presented with **Trigger details**: a trigger type and attribute details. In the template, you can edit the attribute details to define when a workflow is run in relation to the attribute value, measured in days. This attribute value can be from 0 to 60 days. + ++1. Select the **Scope** tab. + :::image type="content" source="media/manage-workflow-tasks/execution-conditions-scope.png" alt-text="Screenshot of the execution scope page of a workflow." lightbox="media/manage-workflow-tasks/execution-conditions-scope.png"::: ++1. On this screen, you can define rules for who the workflow runs against. In the template, **Scope type** is set as Rule-Based, and you define the rule using expressions on user properties. For more information on supported user properties, see [supported queries on user properties](/graph/aad-advanced-queries#user-properties). ++1. After making changes, select **Save** to capture changes to the execution conditions. +++## See versions of a workflow using the Azure portal ++1. On the left menu of Lifecycle Workflows, select **Workflows (Preview)**. ++1. On this page, you see a list of all of your current workflows. Select the workflow that you want to see versions of. + +1. On the left side of the screen, select **Versions (Preview)**. ++ :::image type="content" source="media/manage-workflow-tasks/manage-versions.png" alt-text="Screenshot of versions of a workflow." lightbox="media/manage-workflow-tasks/manage-versions.png"::: ++1. On this page you see a list of the workflow versions. ++ :::image type="content" source="media/manage-workflow-tasks/manage-versions-list.png" alt-text="Screenshot of managing version list of lifecycle workflows." lightbox="media/manage-workflow-tasks/manage-versions-list.png"::: +++## Create a new version of an existing workflow using Microsoft Graph ++As stated above, creating a new version of a workflow is required to change any parameter that isn't "displayName", "description", or "isEnabled". Unlike in the Azure portal, creating a new version of a workflow using Microsoft Graph requires some additional steps: ++- Get the body of the workflow you want to create a new version of by running the API call: + ```http + GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id> + ``` +- Copy the body of the returned workflow, excluding the **id**, **@odata.context**, and **tasks@odata.context** portions of the returned workflow body. +- Make the changes in tasks and execution conditions you want for the new version of the workflow. +- Run the following **createNewVersion** API call along with the updated body of the workflow. 
The workflow body is wrapped in a **Workflow:{}** block. + ```http + POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/createNewVersion + Content-type: application/json + + { + "workflow": { + "displayName":"New version of a workflow", + "description":"This is a new created version of a workflow", + "isEnabled":"true", + "tasks":[ + { + "isEnabled":"true", + "taskTemplateId":"70b29d51-b59a-4773-9280-8841dfd3f2ea", + "displayName":"Send welcome email to new hire", + "description":"Sends welcome email to a new hire", + "executionSequence": 1, + "arguments":[] + }, + { + "isEnabled":"true", + "taskTemplateId":"22085229-5809-45e8-97fd-270d28d66910", + "displayName":"Add user to group", + "description":"Adds user to a group.", + "executionSequence": 2, + "arguments":[ + { + "name":"groupID", + "value":"<group id value>" + } + ] + } + ], + "executionConditions": { + "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions", + "scope": { + "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet", + "rule": "(department eq 'sales')" + }, + "trigger": { + "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger", + "timeBasedAttribute": "employeeHireDate", + "offsetInDays": -2 + } + } + } + ``` + ++### List workflow versions using Microsoft Graph ++Once a new version of a workflow is created, you can always find other versions by running the following call: +```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/versions +``` +Or to get a specific version: ++```http +GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/versions/<version number> +``` ++### Reorder Tasks in a workflow using Microsoft Graph ++If you want to reorder tasks in a workflow, so that certain tasks run before others, you'll follow these steps: + 1. 
Use a GET call to return the body of the workflow in which you want to reorder the tasks. + ```http + GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow id> + ``` + 1. Copy the body of the workflow and paste it in the body section for the new API call. + + 1. Tasks run in the order they appear within the workflow. To reorder them, move the task you want to run first above the tasks that should run after it in the workflow body. + + 1. Run the **createNewVersion** API call. + +++## Next steps +++- [Check the status of a workflow](check-status-workflow.md) +- [Customize workflow schedule](customize-workflow-schedule.md) |
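The copy/strip/wrap steps for `createNewVersion` described above can be automated. A minimal sketch that removes the read-only properties from a fetched workflow and wraps the remainder in the expected `workflow` envelope (the sample data values are invented):

```python
# Properties returned by GET that must be excluded before createNewVersion.
READ_ONLY_KEYS = {"id", "@odata.context", "tasks@odata.context"}

def build_new_version_body(fetched: dict) -> dict:
    """Strip read-only keys from a GET response and wrap the remainder
    in the 'workflow' object expected by the createNewVersion call."""
    return {"workflow": {k: v for k, v in fetched.items() if k not in READ_ONLY_KEYS}}

fetched = {
    "id": "00000000-0000-0000-0000-000000000000",  # invented sample value
    "@odata.context": "https://graph.microsoft.com/beta/$metadata#workflow",
    "displayName": "New version of a workflow",
    "isEnabled": "true",
    "tasks": [],
}
print(build_new_version_body(fetched))
```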
active-directory | On Demand Workflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md | + + Title: Run a workflow on-demand - Azure Active Directory +description: This article guides a user to running a workflow on demand using Lifecycle Workflows ++++++ Last updated : 03/04/2022++++++# Run a workflow on-demand (Preview) ++While most workflows by default are scheduled to run every 3 hours, workflows created using Lifecycle Workflows can also run on-demand so that they can be applied to specific users whenever you see fit. A workflow can be run on demand for any user and doesn't take into account whether or not a user meets the workflow's execution conditions. Workflows created in the Azure portal are disabled by default. Running a workflow on-demand allows you to run workflows that can't be run on schedule currently such as leaver workflows. It also allows you to test workflows before their scheduled run. You can test the workflow on a smaller group of users before enabling it for a broader audience. ++>[!NOTE] +>Be aware that you currently cannot run a workflow on-demand if it is set to disabled, which is the default state of newly created workflows using the Azure portal. You need to set the workflow to enabled to use the on-demand feature. ++## Run a workflow on-demand in the Azure portal ++Use the following steps to run a workflow on-demand. ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Select **Azure Active Directory** and then select **Identity Governance**. ++1. On the left menu, select **Lifecycle workflows (Preview)**. ++1. select **Workflows (Preview)** ++1. On the workflow screen, select the specific workflow you want to run. ++ :::image type="content" source="media/on-demand-workflow/on-demand-list.png" alt-text="Screenshot of a list of Lifecycle Workflows workflows to run on-demand."::: ++1. Select **Run on demand**. ++1. 
On the **Select users** tab, select **Add users**. ++1. On the add users screen, select the users you want to run the on-demand workflow for. ++ :::image type="content" source="media/on-demand-workflow/on-demand-add-users.png" alt-text="Screenshot of add users for on-demand workflow."::: ++1. Select **Add**. ++1. Confirm your choices and select **Run workflow**. ++ :::image type="content" source="media/on-demand-workflow/on-demand-run.png" alt-text="Screenshot of a workflow being run on-demand."::: ++## Run a workflow on-demand using Microsoft Graph ++Running a workflow on-demand using Microsoft Graph requires users to be added manually by their user ID with a POST call. ++To run a workflow on-demand in Microsoft Graph, use the following request and body: +```http +POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/activate +Content-type: application/json +``` ++```Request body +{ + "subjects":[ + {"id":"<userid>"}, + {"id":"<userid>"} + ] +} ++``` +++## Next steps ++- [Customize the schedule of workflows](customize-workflow-schedule.md) +- [Delete a Lifecycle workflow](delete-lifecycle-workflow.md) |
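The activate body is just a list of subject IDs, so it can be generated from any list of user object IDs. An illustrative helper (the function name is invented, not part of any SDK):

```python
def build_activate_body(user_ids) -> dict:
    """Build the request body for running a workflow on-demand
    against the given Azure AD user object IDs."""
    return {"subjects": [{"id": uid} for uid in user_ids]}

print(build_activate_body(["user-id-1", "user-id-2"]))
```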
active-directory | Trigger Custom Task | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md | + + Title: Trigger Logic Apps based on custom task extensions +description: Trigger Logic Apps based on custom task extensions ++++++ Last updated : 07/05/2022+++++# Trigger Logic Apps based on custom task extensions (preview) ++Lifecycle Workflows can be used to trigger custom tasks via an extension to Azure Logic Apps. This can be used to extend the capabilities of Lifecycle Workflow beyond the built-in tasks. The steps for triggering a Logic App based on a custom task extension are as follows: ++- Create a custom task extension. +- Select which behavior you want the custom task extension to take. +- Link your custom task extension to a new or existing Azure Logic App. +- Add the custom task to a workflow. ++For more information about Lifecycle Workflows extensibility, see: [Workflow Extensibility](lifecycle-workflow-extensibility.md). +++## Create a custom task extension with a new Azure Logic App ++To use a custom task extension in your workflow, first a custom task extension must be created to be linked with an Azure Logic App. You're able to create a Logic App at the same time you're creating a custom task extension. To do this, you'll complete these steps: ++1. Sign in to the [Azure portal](https://portal.azure.com). ++1. Select **Azure Active Directory** and then select **Identity Governance**. ++1. In the left menu, select **Lifecycle Workflows (Preview)**. ++1. In the left menu, select **Workflows (Preview)**. ++1. On the workflows screen, select **custom task extension**. + :::image type="content" source="media/trigger-custom-task/custom-task-extension-select.png" alt-text="Screenshot of selecting a custom task extension from a workflow overview page."::: +1. On the custom task extensions page, select **create custom task extension**. 
+ :::image type="content" source="media/trigger-custom-task/create-custom-task-extension.png" alt-text="Screenshot for creating a custom task extension selection."::: +1. On the **Basics** page, you give a display name and description for the custom task extension and select **Next**. + :::image type="content" source="media/trigger-custom-task/custom-task-extension-basics.png" alt-text="Screenshot of the basics section for creating a custom task extension."::: +1. On the **Task behavior** page, you specify how the custom task extension will behave after executing the Azure Logic App and select **Next**. + :::image type="content" source="media/trigger-custom-task/custom-task-extension-behavior.png" alt-text="Screenshot for choosing task behavior for a custom task extension."::: + > [!NOTE] + > For more information about custom task extension behavior, see: [Lifecycle Workflow extensibility](lifecycle-workflow-extensibility.md) +1. On the **Logic App details** page, you select **Create new Logic App**, and specify the subscription and resource group where it will be located. You'll also give the new Azure Logic App a name. + :::image type="content" source="media/trigger-custom-task/custom-task-extension-new-logic-app.png" alt-text="Screenshot showing creating a new logic app for a custom task extension."::: +1. If deployed successfully, you'll get confirmation on the **Logic App details** page immediately, and then you can select **Next**. ++1. On the **Review** page, you can review the details of the custom task extension and the Azure Logic App you've created. Select **Create** if the details match what you desire for the custom task extension. + ++## Configure a custom task extension with an existing Azure Logic App ++You can also link a custom task extension to an existing Azure Logic App. To do this, complete the following steps: ++> [!IMPORTANT] +> A Logic App must be configured to be compatible with the custom task extension. 
For more information, see [Configure a Logic App for Lifecycle Workflow use](configure-logic-app-lifecycle-workflows.md). ++1. In the left menu, select **Lifecycle workflows (Preview)**. ++1. In the left menu, select **Workflows (Preview)**. ++1. On the workflows screen, select **custom task extension**. ++1. On the **Logic App details** page, select **Choose an existing Logic App**, specify the subscription and resource group where the Azure Logic App is located, and select **Next**. + :::image type="content" source="media/trigger-custom-task/custom-task-extension-existing-logic-app.png" alt-text="Screenshot for selecting an existing logic app with custom task extension."::: +1. You can review information about the updated custom task extension and the existing Logic App linked to it. Select **Create** if the details match what you desire for the custom task extension. +++## Add your custom task extension to a workflow ++After you've created your custom task extension, you can now add it to a workflow. Unlike some tasks, which can only be added to workflow templates that match their category, a custom task extension can be added to any template you choose to make a custom workflow from. ++To add a custom task extension to a workflow, follow these steps: ++1. In the left menu, select **Lifecycle workflows (Preview)**. ++1. In the left menu, select **Workflows (Preview)**. ++1. Select the workflow that you want to add the custom task extension to. ++1. On the workflow screen, select **Tasks**. ++1. On the tasks screen, select **Add task**. ++1. In the **Select tasks** drop-down, select **Run a Custom Task Extension**, and select **Add**. ++1. On the custom task extension page, you can give the task a name and description. You can also choose from a list of configured custom task extensions to use. 
+ :::image type="content" source="media/trigger-custom-task/add-custom-task-extension.png" alt-text="Screenshot showing to add a custom task extension to workflow."::: +1. When finished, select **Save**. ++## Next steps ++- [Lifecycle workflow extensibility (Preview)](lifecycle-workflow-extensibility.md) +- [Manage Workflow Versions](manage-workflow-tasks.md) |
active-directory | Tutorial Offboard Custom Workflow Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-graph.md | + + Title: 'Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)' +description: Tutorial for off-boarding users from an organization using Lifecycle workflows with Microsoft Graph (preview). +++++++ Last updated : 08/18/2022+++++# Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview) ++This tutorial provides a step-by-step guide on how to execute a real-time employee termination with Lifecycle workflows using the Microsoft Graph API. ++This off-boarding scenario will run a workflow on-demand and accomplish the following tasks: ++1. Remove user from all groups +2. Remove user from all Teams +3. Delete user account ++You may learn more about [running a workflow on-demand](on-demand-workflow.md). ++## Before you begin ++As part of the prerequisites for completing this tutorial, you will need an account that has group and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). ++The leaver scenario can be broken down into the following: +- **Prerequisite:** Create a user account that represents an employee leaving your organization +- **Prerequisite:** Prepare the user account with groups and Teams memberships +- Create the lifecycle management workflow +- Run the workflow on-demand +- Verify that the workflow was successfully executed +++## Create a leaver workflow on-demand using Graph API ++Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation. 
++|Parameter |Description | +||| +|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver" and can support multiple strings. Category of workflow must also contain the category of its tasks. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) | +|displayName | A unique string that identifies the workflow. | +|description | A string that describes the purpose of the workflow for administrative use. (Optional) | +|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true" then the workflow will run. | +|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. | +|executionConditions | An argument that contains: <br><br>A time-based attribute and an integer parameter defining when a workflow will run between -60 and 60 <br><br>A scope attribute defining who the workflow runs for. | +|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. <br><br>The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). | ++For the purpose of this tutorial, there are three tasks that will be introduced in this workflow: ++### Remove user from all groups task ++```json +"tasks":[ + { + "continueOnError": true, + "displayName": "Remove user from all groups", + "description": "Remove user from all Azure AD groups memberships", + "isEnabled": true, + "taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc", + "arguments": [] + } + ] +``` ++> [!NOTE] +> The task does not support removing users from Privileged Access Groups, Dynamic Groups, and synchronized Groups. 
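The `taskDefinitionId` values used throughout this tutorial are fixed identifiers published for each built-in task. If you'd rather discover them in your own tenant than copy them from the documentation, a call along these lines should list the available task definitions (an illustrative sketch against the Lifecycle Workflows beta API, not a required tutorial step):

```http
GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/taskDefinitions
```

Each entry in the response should include the task's `displayName`, `description`, supported `category`, and the `id` to use as the `taskDefinitionId`.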
++### Remove user from all Teams task ++```json +"tasks":[ + { + "continueOnError": true, + "description": "Remove user from all Teams", + "displayName": "Remove user from all Teams memberships", + "isEnabled": true, + "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024", + "arguments": [] + } + ] +``` +### Delete user task ++```json +"tasks":[ + { + "continueOnError": true, + "displayName": "Delete user account", + "description": "Delete user account in Azure AD", + "isEnabled": true, + "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff", + "arguments": [] + } + ] +``` +### Leaver workflow on-demand ++The following POST API call will create a leaver workflow that can be executed on-demand for real-time employee terminations. ++ ```http +POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows +Content-type: application/json ++{ + "category": "Leaver", + "displayName": "Real-time employee termination", + "description": "Execute real-time termination tasks for employees on their last day of work", + "isEnabled": true, + "isSchedulingEnabled": false, + "executionConditions":{ + "@odata.type":"#microsoft.graph.identityGovernance.onDemandExecutionOnly" + }, + "tasks": [ + { + "continueOnError": false, + "description": "Remove user from all Azure AD groups memberships", + "displayName": "Remove user from all groups", + "executionSequence": 1, + "isEnabled": true, + "taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc", + "arguments": [] + }, + { + "continueOnError": false, + "description": "Remove user from all Teams memberships", + "displayName": "Remove user from all Teams", + "executionSequence": 2, + "isEnabled": true, + "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024", + "arguments": [] + }, + { + "continueOnError": false, + "description": "Delete user account in Azure AD", + "displayName": "Delete User Account", + "executionSequence": 3, + "isEnabled": true, + "taskDefinitionId": 
"8d18588d-9ad3-4c0f-99d0-ec215f0e3dff", + "arguments": [] + } + ] +} +``` ++## Run the workflow ++Now that the workflow is created, it will run automatically every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature. ++>[!NOTE] +>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature. ++To run a workflow on-demand for users using the Microsoft Graph API, do the following steps: ++1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). +2. Make sure **POST** and **beta** are still selected at the top, and that `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the request box. Replace `<id>` with the ID of the workflow. + 3. Copy the code below into the **Request body**. + 4. Replace `<userid>` in the code below with the value of the user's ID. + 5. Select **Run query**. + ```json + { + "subjects":[ + {"id":"<userid>"} + + ] +} ++``` ++## Check tasks and workflow status ++At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots: users, runs, and tasks, which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user-focused reports. ++To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response code of the POST API call that was used to create the workflow. ++This example will show you how to list the userProcessingResults for the last 7 days. 
++```http +GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults +``` +Furthermore, it is possible to get a summary of the userProcessingResults to get a quicker overview of large amounts of data, but for this a time span must be specified. ++```http +GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z) +``` +You may also check the full details about the tasks of a given userProcessingResults. You will need to provide the workflow ID of the workflow, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above. ++```http +GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults +``` ++## Next steps +- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md) +- [Execute employee offboarding tasks in real-time on their last day of work with Azure portal (preview)](tutorial-offboard-custom-workflow-portal.md) |
active-directory | Tutorial Offboard Custom Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md | + + Title: 'Execute employee offboarding tasks in real-time on their last day of work with Azure portal (preview)' +description: Tutorial for off-boarding users from an organization using Lifecycle workflows with Azure portal (preview). +++++++ Last updated : 08/18/2022++++++# Execute employee offboarding tasks in real-time on their last day of work with Azure portal (preview) ++This tutorial provides a step-by-step guide on how to execute a real-time employee termination with Lifecycle workflows using the Azure portal. ++This off-boarding scenario will run a workflow on-demand and accomplish the following tasks: + +1. Remove user from all groups +2. Remove user from all Teams +3. Delete user account ++You may learn more about [running a workflow on-demand](on-demand-workflow.md). ++## Before you begin ++As part of the prerequisites for completing this tutorial, you will need an account that has group and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). ++The leaver scenario can be broken down into the following: +- **Prerequisite:** Create a user account that represents an employee leaving your organization +- **Prerequisite:** Prepare the user account with groups and Teams memberships +- Create the lifecycle management workflow +- Run the workflow on-demand +- Verify that the workflow was successfully executed ++## Create a workflow using leaver template +Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination with Lifecycle workflows using the Azure portal. ++ 1. 
Sign in to the Azure portal. + 2. On the right, select **Azure Active Directory**. + 3. Select **Identity Governance**. + 4. Select **Lifecycle workflows (Preview)**. + 5. On the **Overview (Preview)** page, select **New workflow**. + :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png"::: ++ 6. From the templates, select **Select** under **Real-time employee termination**. + :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting template leaver workflow." lightbox="media/tutorial-lifecycle-workflows/select-template.png"::: ++ 7. Next, you will configure the basic information about the workflow. Select **Next: Review tasks** when you are done with this step. + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-leaver.png" alt-text="Screenshot of review template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-leaver.png"::: ++ 8. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you are finished. + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-tasks.png" alt-text="Screenshot of template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-tasks.png"::: ++ 9. For the user selection, select **Select users**. This allows you to select users for whom the workflow will be executed immediately after creation. Regardless of the selection, you can run the workflow on-demand later at any time as needed. + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-users.png" alt-text="Screenshot of selecting real-time leaver template users." lightbox="media/tutorial-lifecycle-workflows/real-time-users.png"::: + + 10. Next, select **+Add users** to designate the users for whom this workflow will be executed. 
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-add-users.png" alt-text="Screenshot of real time leaver add users." lightbox="media/tutorial-lifecycle-workflows/real-time-add-users.png"::: + + 11. A panel with the list of available users will appear on the right side of the screen. Select **Select** when you are done with your selection. + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-user-list.png" alt-text="Screenshot of real time leaver template selected users." lightbox="media/tutorial-lifecycle-workflows/real-time-user-list.png"::: ++ 12. Select **Next: Review and create** when you are satisfied with your selection. + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-review-users.png" alt-text="Screenshot of reviewing template users." lightbox="media/tutorial-lifecycle-workflows/real-time-review-users.png"::: ++ 13. On the review blade, verify the information is correct and select **Create**. + :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-create.png" alt-text="Screenshot of creating real time leaver workflow." lightbox="media/tutorial-lifecycle-workflows/real-time-create.png"::: ++## Run the workflow +Now that the workflow is created, it will run automatically every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature. ++>[!NOTE] +>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature. ++To run a workflow on-demand for users using the Azure portal, do the following steps: ++ 1. On the workflow screen, select the specific workflow you want to run. + 2. Select **Run on demand**. + 3. 
On the **select users** tab, select **add users**. + 4. Add a user. + 5. Select **Run workflow**. + +## Check tasks and workflow status ++At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots: users, runs, and tasks, which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user-focused reports. ++ 1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses. + :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-real-time.png" alt-text="Screenshot of real time history overview." lightbox="media/tutorial-lifecycle-workflows/workflow-history-real-time.png"::: ++1. Once the **Workflow history (Preview)** tab has been selected, you will land on the workflow history page as shown. + :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-real-time.png" alt-text="Screenshot of real time workflow history." lightbox="media/tutorial-lifecycle-workflows/user-summary-real-time.png"::: ++1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith. + :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks-real-time.png" alt-text="Screenshot of total tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/total-tasks-real-time.png"::: ++1. To add an extra layer of granularity, you may select **Failed tasks** for the user Wade Warren to view the total number of failed tasks assigned to the user Wade Warren. 
+ :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png" alt-text="Screenshot of failed tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png"::: ++1. Similarly, you may select **Unprocessed tasks** for the user Wade Warren to view the total number of unprocessed or canceled tasks assigned to the user Wade Warren. + :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png" alt-text="Screenshot of unprocessed tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png"::: ++## Next steps +- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md) +- [Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)](tutorial-offboard-custom-workflow-graph.md) |
active-directory | Tutorial Onboard Custom Workflow Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-graph.md | + + Title: 'Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)' +description: Tutorial for onboarding users to an organization using Lifecycle workflows with Microsoft Graph (preview). +++++++ Last updated : 08/18/2022+++++# Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview) ++This tutorial provides a step-by-step guide on how to automate pre-hire tasks with Lifecycle workflows using the Microsoft Graph API. ++This pre-hire scenario will generate a temporary access pass for our new employee and send it via email to the user's new manager. ++## Before you begin ++Two accounts are required for the tutorial, one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set: +- employeeHireDate must be set to today +- department must be set to sales +- manager attribute must be set, and the manager account should have a mailbox to receive an email. ++For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](/azure/active-directory/authentication/howto-authentication-temporary-access-pass#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial. 
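If you want to confirm these attributes are actually populated on the test account before continuing, a Graph call along the following lines is one way to do so (the `<user id>` placeholder is the new hire's object ID; this check is an optional sketch, not a required tutorial step):

```http
GET https://graph.microsoft.com/beta/users/<user id>?$select=displayName,employeeHireDate,department&$expand=manager($select=displayName,mail)
```

The response should show today's date in `employeeHireDate`, `sales` in `department`, and a manager object that includes a `mail` address; if any of these are missing, revisit the preparation tutorial linked above.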
++Detailed breakdown of the relevant attributes: ++ | Attribute | Description |Set on| + |: |::|--| + |mail|Used to notify manager of the new employee's temporary access pass|Both| + |manager|The attribute that is used by the lifecycle workflow|Employee| + |employeeHireDate|Used to trigger the workflow|Both| + |department|Used to provide the scope for the workflow|Both| ++The pre-hire scenario can be broken down into the following: + - **Prerequisite:** Create two user accounts, one to represent an employee and one to represent a manager + - **Prerequisite:** Edit the manager attribute for this scenario using Microsoft Graph Explorer + - **Prerequisite:** Enabling and using Temporary Access Pass (TAP) + - Creating the lifecycle management workflow + - Triggering the workflow + - Verifying the workflow was successfully executed ++## Create a pre-hire workflow using Graph API ++Now that the pre-hire workflow attributes have been updated and correctly populated, a pre-hire workflow can then be created to generate a Temporary Access Pass (TAP) and send it via email to a user's manager. Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation. ++|Parameter |Description | +||| +|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver" and can support multiple strings. Category of workflow must also contain the category of its tasks. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) | +|displayName | A unique string that identifies the workflow. | +|description | A string that describes the purpose of the workflow for administrative use. (Optional) | +|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true" then the workflow will run. | +|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. 
Unlike isEnabled, a workflow can still be run on demand if this value is set to false. | +|executionConditions | An argument that contains: <br><br> A time-based attribute and an integer parameter defining when a workflow will run between -60 and 60 <br><br>A scope attribute defining who the workflow runs for. | +|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). | ++The following POST API call will create a pre-hire workflow that will generate a TAP and send it via email to the user's manager. ++ ```http + POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows +Content-type: application/json ++{ + "displayName":"Onboard pre-hire employee", + "description":"Configure pre-hire tasks for onboarding employees before their first day", + "isEnabled":true, + "isSchedulingEnabled": false, + "executionConditions": { + "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions", + "scope": { + "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet", + "rule": "(department eq 'sales')" + }, + "trigger": { + "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger", + "timeBasedAttribute": "employeeHireDate", + "offsetInDays": -2 + } + }, + "tasks":[ + { + "isEnabled":true, + "category": "Joiner", + "taskDefinitionId":"1b555e50-7f65-41d5-b514-5894a026d10d", + "displayName":"Generate TAP And Send Email", + "description":"Generate Temporary Access Pass and send via email to user's manager", + "arguments":[ + { + "name": "tapLifetimeMinutes", + "value": "480" + }, + { + "name": "tapIsUsableOnce", + "value": "true" + } + ] + } + ] +} +``` ++## Run the workflow +Now that the workflow is created, 
it will run automatically every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature. ++>[!NOTE] +>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature. ++To run a workflow on-demand for users using the Microsoft Graph API, do the following steps: ++1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). +2. Make sure **POST** and **beta** are still selected at the top, and that `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the request box. Replace `<id>` with the ID of the workflow. + 3. Copy the code below into the **Request body**. + 4. Replace `<userid>` in the code below with the value of the user's ID. + 5. Select **Run query**. + ```json + { + "subjects":[ + {"id":"<userid>"} + + ] +} ++``` ++## Check tasks and workflow status ++At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots: users, runs, and tasks, which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user-focused reports. ++To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response code of the POST API call that was used to create the workflow. ++This example will show you how to list the userProcessingResults for the last 7 days. 
++```http +GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults +``` +Furthermore, it is possible to get a summary of the userProcessingResults to get a quicker overview of large amounts of data, but for this a time span must be specified. ++```http +GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z) +``` +You may also check the full details about the tasks of a given userProcessingResults. You will need to provide the workflow ID of the workflow, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above. ++```http +GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults +``` ++## Enable the workflow schedule ++After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may run the following PATCH call. ++```http +PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id> +Content-type: application/json ++{ + "displayName":"Onboard pre-hire employee", + "description":"Configure pre-hire tasks for onboarding employees before their first day", + "isEnabled": true, + "isSchedulingEnabled": true +} ++``` ++## Next steps +- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md) +- [Automate employee onboarding tasks before their first day of work with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md) |
active-directory | Tutorial Onboard Custom Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md | + + Title: 'Automate employee onboarding tasks before their first day of work with Azure portal (preview)' +description: Tutorial for onboarding users to an organization using Lifecycle workflows with Azure portal (preview). +++++++ Last updated : 08/18/2022++++++# Automate employee onboarding tasks before their first day of work with Azure portal (preview) ++This tutorial provides a step-by-step guide on how to automate pre-hire tasks with Lifecycle workflows using the Azure portal. ++This pre-hire scenario will generate a temporary access pass for our new employee and send it via email to the user's new manager. +++## Before you begin ++Two accounts are required for this tutorial, one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set: ++- employeeHireDate must be set to today +- department must be set to sales +- manager attribute must be set, and the manager account should have a mailbox to receive an email ++For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](/azure/active-directory/authentication/howto-authentication-temporary-access-pass#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial. 
++Detailed breakdown of the relevant attributes: ++ | Attribute | Description |Set on| + |: |::|--| + |mail|Used to notify manager of the new employee's temporary access pass|Both| + |manager|The attribute that is used by the lifecycle workflow|Employee| + |employeeHireDate|Used to trigger the workflow|Employee| + |department|Used to provide the scope for the workflow|Employee| ++The pre-hire scenario can be broken down into the following: + - **Prerequisite:** Create two user accounts, one to represent an employee and one to represent a manager + - **Prerequisite:** Editing the attributes required for this scenario in the portal + - **Prerequisite:** Edit the attributes for this scenario using Microsoft Graph Explorer + - **Prerequisite:** Enabling and using Temporary Access Pass (TAP) + - Creating the lifecycle management workflow + - Triggering the workflow + - Verifying the workflow was successfully executed ++## Create a workflow using pre-hire template +Use the following steps to create a pre-hire workflow that will generate a TAP and send it via email to the user's manager using the Azure portal. ++ 1. Sign in to the Azure portal. + 2. On the right, select **Azure Active Directory**. + 3. Select **Identity Governance**. + 4. Select **Lifecycle workflows (Preview)**. + 5. On the **Overview (Preview)** page, select **New workflow**. + :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png"::: ++ 6. From the templates, select **Select** under **Onboard pre-hire employee**. + :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting workflow template." lightbox="media/tutorial-lifecycle-workflows/select-template.png"::: + + 7. Next, you will configure the basic information about the workflow. 
This information includes when the workflow will trigger, known as **Days from event**. So in this case, the workflow will trigger two days before the employee's hire date. On the onboard pre-hire employee screen, add the following settings and then select **Next: Configure Scope**. ++ :::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png"::: ++ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Sales department. On the configure scope screen, under **Rule**, add the following settings and then select **Next: Review tasks**. ++ :::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png"::: ++ 9. On the following page, you may inspect the task if desired but no additional configuration is needed. Select **Next: Review + Create** when you are finished. + :::image type="content" source="media/tutorial-lifecycle-workflows/onboard-review-create.png" alt-text="Screenshot of reviewing an on-board workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-review-create.png"::: ++ 10. On the review blade, verify the information is correct and select **Create**. + :::image type="content" source="media/tutorial-lifecycle-workflows/onboard-create.png" alt-text="Screenshot of creating an onboard workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-create.png"::: ++ +## Run the workflow +Now that the workflow is created, it will run automatically every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. 
To run a workflow immediately, we can use the on-demand feature. ++>[!NOTE] +>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature. ++To run a workflow on demand for users using the Azure portal, do the following steps: ++ 1. On the workflow screen, select the specific workflow you want to run. + 2. Select **Run on demand**. + 3. On the **select users** tab, select **add users**. + 4. Add a user. + 5. Select **Run workflow**. +++## Check tasks and workflow status ++At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots: users, runs, and tasks, which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user-focused reports. ++ 1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses. + :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history.png" alt-text="Screenshot of workflow history status." lightbox="media/tutorial-lifecycle-workflows/workflow-history.png"::: ++1. Once the **Workflow history (Preview)** tab has been selected, you will land on the workflow history page as shown. + :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary.png" alt-text="Screenshot of workflow history overview." lightbox="media/tutorial-lifecycle-workflows/user-summary.png"::: ++1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith. + :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks.png" alt-text="Screenshot of workflow total task summary." 
lightbox="media/tutorial-lifecycle-workflows/total-tasks.png"::: ++1. To add an extra layer of granularity, you may select **Failed tasks** for the user Jeff Smith to view the total number of failed tasks assigned to the user Jeff Smith. + :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks.png" alt-text="Screenshot of workflow failed tasks." lightbox="media/tutorial-lifecycle-workflows/failed-tasks.png"::: ++1. Similarly, you may select **Unprocessed tasks** for the user Jeff Smith to view the total number of unprocessed or canceled tasks assigned to the user Jeff Smith. + :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks.png" alt-text="Screenshot of workflow unprocessed tasks summary." lightbox="media/tutorial-lifecycle-workflows/canceled-tasks.png"::: ++## Enable the workflow schedule ++After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may select the **Enable Schedule** checkbox on the Properties (Preview) page. +++## Next steps +- [Tutorial: Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md) +- [Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md) |
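The trigger timing described in this tutorial (the workflow fires a configurable number of days before `employeeHireDate`, and the service evaluates users roughly every 3 hours) can be modeled with a few lines of code. This is only an illustrative sketch in Python; the helper names are hypothetical and this is not the service's actual implementation.

```python
from datetime import datetime, timedelta

def trigger_time(employee_hire_date: datetime, days_from_event: int) -> datetime:
    """For a pre-hire workflow, 'Days from event' runs the workflow
    days_from_event days *before* employeeHireDate."""
    return employee_hire_date - timedelta(days=days_from_event)

def is_due(now: datetime, employee_hire_date: datetime, days_from_event: int) -> bool:
    """The service checks roughly every 3 hours; a user is picked up once an
    evaluation pass occurs at or after the trigger time."""
    return now >= trigger_time(employee_hire_date, days_from_event)

# employeeHireDate used in this tutorial, with Days from event = 2
hire = datetime(2022, 4, 15, 22, 10)
print(trigger_time(hire, 2))                      # two days before the hire date
print(is_due(datetime(2022, 4, 13, 21, 0), hire, 2))   # before the trigger window
print(is_due(datetime(2022, 4, 14, 0, 0), hire, 2))    # within the trigger window
```

Under these assumptions, a user hired on April 15 at 22:10 UTC would first be picked up by an evaluation pass on or after April 13 at 22:10 UTC.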
active-directory | Tutorial Prepare Azure Ad User Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md | + + Title: 'Tutorial: Preparing user accounts for Lifecycle workflows (preview)' +description: Tutorial for preparing user accounts for Lifecycle workflows (preview). +++++++ Last updated : 06/13/2022+++++# Preparing user accounts for Lifecycle workflows tutorials (Preview) ++For the on-boarding and off-boarding tutorials, you'll need accounts for which the workflows will be executed. The following section will help you prepare these accounts. If you already have test accounts that meet the following requirements, you can proceed directly to the on-boarding and off-boarding tutorials. Two accounts are required for the on-boarding tutorials: one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set: ++- employeeHireDate must be set to today +- department must be set to sales +- manager attribute must be set, and the manager account should have a mailbox to receive an email ++The off-boarding tutorials only require one account that has group and Teams memberships, but the account will be deleted during the tutorial. ++## Prerequisites ++- An Azure AD tenant +- A global administrator account for the Azure AD tenant. This account will be used to create our users and workflows. ++## Before you begin ++In most cases, users are going to be provisioned to Azure AD either from an on-premises solution (Azure AD Connect, Cloud sync, etc.) or with an HR solution. These users will have the attributes and values populated at the time of creation. Setting up the infrastructure to provision users is outside the scope of this tutorial. 
For information, see [Tutorial: Basic Active Directory environment](../cloud-sync/tutorial-basic-ad-azure.md) and [Tutorial: Integrate a single forest with a single Azure AD tenant](../cloud-sync/tutorial-single-forest.md). ++## Create users in Azure AD ++We'll use Graph Explorer to quickly create the two users needed to execute the Lifecycle Workflows in the tutorials. One user will represent our new employee and the second will represent the new employee's manager. ++You'll need to edit the POST request body and replace the <your tenant name here> portion with the name of your tenant. For example: "mprince@<your tenant name here>" becomes "mprince@contoso.onmicrosoft.com". ++>[!NOTE] +>Be aware that a workflow will not trigger when the employee hire date (Days from event) is prior to the workflow creation date. By design, you must set an employeeHireDate in the future. The dates used in this tutorial are a snapshot in time. Therefore, you should change the dates accordingly to accommodate this situation. ++First, we'll create our employee, Melva Prince. ++ 1. Navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). + 2. Sign in to Graph Explorer with the global administrator account for your tenant. + 3. At the top, change **GET** to **POST** and add `https://graph.microsoft.com/v1.0/users/` to the box. + 4. Copy the code below into the **Request body**. + 5. Replace `<your tenant name here>` in the code below with the name of your Azure AD tenant. + 6. Select **Run query**. + 7. Copy the ID that is returned in the results. This will be used later to assign a manager. 
++ ```HTTP + { + "accountEnabled": true, + "displayName": "Melva Prince", + "mailNickname": "mprince", + "department": "sales", + "mail": "mprince@<your tenant name here>", + "employeeHireDate": "2022-04-15T22:10:00Z", + "userPrincipalName": "mprince@<your tenant name here>", + "passwordProfile" : { + "forceChangePasswordNextSignIn": true, + "password": "xWwvJ]6NMw+bWH-d" + } + } + ``` + :::image type="content" source="media/tutorial-lifecycle-workflows/graph-post-user.png" alt-text="Screenshot of POST create Melva in graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-post-user.png"::: ++Next, we'll create Britta Simon. This is the account that will be used as our manager. ++ 1. Still in [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). + 2. Make sure the top is still set to **POST** and `https://graph.microsoft.com/v1.0/users/` is in the box. + 3. Copy the code below into the **Request body**. + 4. Replace `<your tenant name here>` in the code below with the name of your Azure AD tenant. + 5. Select **Run query**. + 6. Copy the ID that is returned in the results. This will be used later to assign a manager. + ```HTTP + { + "accountEnabled": true, + "displayName": "Britta Simon", + "mailNickname": "bsimon", + "department": "sales", + "mail": "bsimon@<your tenant name here>", + "employeeHireDate": "2021-01-15T22:10:00Z", + "userPrincipalName": "bsimon@<your tenant name here>", + "passwordProfile" : { + "forceChangePasswordNextSignIn": true, + "password": "xWwvJ]6NMw+bWH-d" + } + } + ``` ++>[!NOTE] +> You need to change the <your tenant name here> section of the code to match your Azure AD tenant. ++As an alternative, the following PowerShell script may also be used to quickly create the two users needed to execute a lifecycle workflow. One user will represent our new employee and the second will represent the new employee's manager. 
++ +>[!IMPORTANT] +>The following PowerShell script is provided to quickly create the two users required for this tutorial. These users can also be created manually by signing in to the Azure portal as a global administrator and creating them. ++To begin, save the PowerShell script below to a location on a machine that has access to Azure. ++Next, you need to edit the script and replace the <your tenant name here> portion with the name of your tenant. For example: $UPN_manager = "bsimon@<your tenant name here>" becomes $UPN_manager = "bsimon@contoso.onmicrosoft.com". ++You need to perform this action for both $UPN_employee and $UPN_manager. ++After editing the script, save it and follow the steps below. ++ 1. Open a Windows PowerShell command prompt, with administrative privileges, from a machine that has access to the Azure portal. +2. Navigate to the saved PowerShell script location and run it. +3. If prompted, select **Yes to all** when installing the Azure AD module. +4. When prompted, sign in to the Azure portal with a global administrator for your Azure AD tenant. ++```powershell +# +# DISCLAIMER: +# Copyright (c) Microsoft Corporation. All rights reserved. This +# script is made available to you without any express, implied or +# statutory warranty, not even the implied warranty of +# merchantability or fitness for a particular purpose, or the +# warranty of title or non-infringement. The entire risk of the +# use or the results from the use of this script remains with you. 
+# +# +# +# +#Declare variables ++$Displayname_employee = "Melva Prince" +$UPN_employee = "mprince@<your tenant name here>" +$Name_employee = "mprince" +$Password_employee = "Pass1w0rd" +$EmployeeHireDate_employee = "04/10/2022" +$Department_employee = "Sales" +$Displayname_manager = "Britta Simon" +$Name_manager = "bsimon" +$Password_manager = "Pass1w0rd" +$Department = "Sales" +$UPN_manager = "bsimon@<your tenant name here>" ++Install-Module -Name AzureAD +Connect-AzureAD -Confirm ++$PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile +# Both test accounts use the same initial password declared above. +$PasswordProfile.Password = $Password_employee +New-AzureADUser -DisplayName $Displayname_manager -PasswordProfile $PasswordProfile -UserPrincipalName $UPN_manager -AccountEnabled $true -MailNickName $Name_manager -Department $Department +New-AzureADUser -DisplayName $Displayname_employee -PasswordProfile $PasswordProfile -UserPrincipalName $UPN_employee -AccountEnabled $true -MailNickName $Name_employee -Department $Department +``` ++Once your users have been successfully created in Azure AD, you may proceed to follow the Lifecycle workflow tutorials for your workflow creation. ++## Additional steps for pre-hire scenario ++There are some additional steps that you should be aware of when testing either the [On-boarding users to your organization using Lifecycle workflows with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md) tutorial or the [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md) tutorial. ++### Edit the user attributes using the Azure portal +Some of the attributes required for the pre-hire onboarding tutorial are exposed through the Azure portal and can be set there. 
++ These attributes are: ++| Attribute | Description |Set on| +|: |::|--| +|mail|Used to notify the manager of the new employee's temporary access pass|Manager| +|manager|This attribute is used by the lifecycle workflow|Employee| ++For the tutorial, the **mail** attribute only needs to be set on the manager account and the **manager** attribute set on the employee account. Use the following steps. ++ 1. Sign in to the Azure portal. + 2. On the right, select **Azure Active Directory**. + 3. Select **Users**. + 4. Select **Melva Prince**. + 5. At the top, select **Edit**. + 6. Under **Manager**, select **Change** and select **Britta Simon**. + 7. At the top, select **Save**. + 8. Go back to users and select **Britta Simon**. + 9. At the top, select **Edit**. + 10. Under **Email**, enter a valid email address. + 11. Select **Save**. ++### Edit employeeHireDate +The employeeHireDate attribute is new to Azure AD. It isn't exposed through the UI and must be updated using Graph. To edit this attribute, we can use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). ++>[!NOTE] +>Be aware that a workflow will not trigger when the employee hire date (Days from event) is prior to the workflow creation date. By design, you must set an employeeHireDate in the future. The dates used in this tutorial are a snapshot in time. Therefore, you should change the dates accordingly to accommodate this situation. ++To do this, we must get the object ID for our user Melva Prince. ++ 1. Sign in to [Azure portal](https://portal.azure.com). + 2. On the right, select **Azure Active Directory**. + 3. Select **Users**. + 4. Select **Melva Prince**. + 5. Select the copy icon next to the **Object ID**. + 6. Now navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). + 7. Sign in to Graph Explorer with the global administrator account for your tenant. + 8. 
At the top, change **GET** to **PATCH** and add `https://graph.microsoft.com/v1.0/users/<id>` to the box. Replace `<id>` with the value we copied above. + 9. Copy the following into the **Request body** and select **Run query**. + ```Example + { + "employeeHireDate": "2022-04-15T22:10:00Z" + } + ``` + :::image type="content" source="media/tutorial-lifecycle-workflows/update-1.png" alt-text="Screenshot of the PATCH employeeHireDate." lightbox="media/tutorial-lifecycle-workflows/update-1.png"::: + + 10. Verify the change by changing **PATCH** back to **GET** and **v1.0** to **beta**. Select **Run query**. You should see the attributes for Melva set. + :::image type="content" source="media/tutorial-lifecycle-workflows/update-3.png" alt-text="Screenshot of the GET employeeHireDate." lightbox="media/tutorial-lifecycle-workflows/update-3.png"::: ++### Edit the manager attribute on the employee account +The manager attribute is used for email notification tasks. It's used by the lifecycle workflow to email the manager a temporary password for the new employee. Use the following steps to ensure your Azure AD users have a value for the manager attribute. ++1. Still in [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). +2. At the top, change the method to **PUT** and make sure `https://graph.microsoft.com/v1.0/users/<id>/manager/$ref` is in the box. Change `<id>` to the ID of Melva Prince. + 3. Copy the code below into the **Request body**. + 4. Replace `<managerid>` in the code below with the value of Britta Simon's ID. + 5. Select **Run query**. + ```Example + { + "@odata.id": "https://graph.microsoft.com/v1.0/users/<managerid>" + } + ``` ++ :::image type="content" source="media/tutorial-lifecycle-workflows/graph-add-manager.png" alt-text="Screenshot of Adding a manager in Graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-add-manager.png"::: ++ 6. Now, we can verify that the manager has been set correctly by changing the **PUT** to **GET**. + 7. 
Make sure `https://graph.microsoft.com/v1.0/users/<id>/manager` is in the box. The `<id>` is still that of Melva Prince. + 8. Select **Run query**. You should see Britta Simon returned in the response. ++ :::image type="content" source="media/tutorial-lifecycle-workflows/graph-get-manager.png" alt-text="Screenshot of getting a manager in Graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-get-manager.png"::: ++For more information about updating manager information for a user with the Graph API, see the [assign manager](/graph/api/user-post-manager?view=graph-rest-1.0&tabs=http) documentation. You can also set this attribute in the Azure portal. For more information, see [add or change profile information](/azure/active-directory/fundamentals/active-directory-users-profile-azure-portal?context=azure/active-directory/users-groups-roles/context/ugr-context). ++### Enabling the Temporary Access Pass (TAP) +A Temporary Access Pass is a time-limited pass issued by an admin that satisfies strong authentication requirements. ++In this scenario, we'll use this feature of Azure AD to generate a temporary access pass for our new employee. It will then be mailed to the employee's manager. ++To use this feature, it must be enabled on our Azure AD tenant. To do this, use the following steps. ++1. Sign in to the Azure portal as a Global administrator and select **Azure Active Directory** > **Security** > **Authentication methods** > **Temporary Access Pass**. +2. Select **Yes** to enable the policy, add Britta Simon to specify which users have the policy applied, and configure any **General** settings. ++## Additional steps for leaver scenario ++There are some additional steps that you should be aware of when testing either the Off-boarding users from your organization using Lifecycle workflows with Azure portal (preview) tutorial or the Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph (preview) tutorial. 
++### Set up a user with group and Teams memberships ++A user with group and Teams memberships is required before you begin the tutorials for the leaver scenario. +++## Next steps +- [On-boarding users to your organization using Lifecycle workflows with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md) +- [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md) +- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Azure portal (preview)](tutorial-offboard-custom-workflow-portal.md) +- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-offboard-custom-workflow-graph.md) |
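If you prefer to script the Graph calls shown in this article rather than type them into Graph Explorer, the request bodies can be assembled and sanity-checked locally first. The sketch below is an illustration only, in Python: it builds the same JSON bodies used above but does not call Microsoft Graph, and the helper names and the contoso tenant value are this example's own.

```python
import json

def build_user_body(display_name, mail_nickname, upn, mail, hire_date, department="sales"):
    # Mirrors the POST /v1.0/users body used in this tutorial.
    return {
        "accountEnabled": True,
        "displayName": display_name,
        "mailNickname": mail_nickname,
        "department": department,
        "mail": mail,
        "employeeHireDate": hire_date,
        "userPrincipalName": upn,
        "passwordProfile": {
            "forceChangePasswordNextSignIn": True,
            "password": "xWwvJ]6NMw+bWH-d",
        },
    }

def build_manager_ref(manager_id):
    # Body for PUT /v1.0/users/<id>/manager/$ref, as shown above.
    return {"@odata.id": f"https://graph.microsoft.com/v1.0/users/{manager_id}"}

employee = build_user_body(
    "Melva Prince", "mprince",
    "mprince@contoso.onmicrosoft.com", "mprince@contoso.onmicrosoft.com",
    "2022-04-15T22:10:00Z",
)
print(json.dumps(employee, indent=2))
# Placeholder GUID for illustration; use the ID returned when creating Britta Simon.
print(build_manager_ref("aaaaaaaa-0000-0000-0000-000000000000"))
```

Building the bodies this way makes it easy to swap in your own tenant name and hire dates before pasting them into the **Request body** box.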
active-directory | Tutorial Scheduled Leaver Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-graph.md | + + Title: Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview) +description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with Microsoft Graph (preview). +++++++ Last updated : 08/18/2022+++++# Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview) ++This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Graph API. ++This post off-boarding scenario will run a scheduled workflow and accomplish the following tasks: + +1. Remove all licenses for user +2. Remove user from all Teams +3. Delete user account ++## Before you begin ++As part of the prerequisites for completing this tutorial, you will need an account that has licenses and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). ++The scheduled leaver scenario can be broken down into the following: +- **Prerequisite:** Create a user account that represents an employee leaving your organization +- **Prerequisite:** Prepare the user account with licenses and Teams memberships +- Create the lifecycle management workflow +- Run the scheduled workflow after last day of work +- Verify that the workflow was successfully executed ++## Create a scheduled leaver workflow using Graph API ++Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation. 
++|Parameter |Description | +||| +|category | A string that identifies the category of the workflow. Possible values are "joiner", "mover", and "leaver", and multiple values are supported. The category of the workflow must also contain the category of its tasks. For full task definitions, see [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) | +|displayName | A unique string that identifies the workflow. | +|description | A string that describes the purpose of the workflow for administrative use. (Optional) | +|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true", the workflow will run. | +|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. | +|executionConditions | An argument that contains: <br><br>A time-based attribute and an integer parameter (between -60 and 60) defining when a workflow will run. <br><br>A scope attribute defining who the workflow runs for. | +|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). 
| ++For the purpose of this tutorial, there are three tasks that will be introduced in this workflow: ++### Remove all licenses for user ++```Example +"tasks":[ + { + "category": "leaver", + "description": "Remove all licenses assigned to the user", + "displayName": "Remove all licenses for user", + "id": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e", + "version": 1, + "parameters": [] + } + ] +``` +### Remove user from all Teams task ++```Example +"tasks":[ + { + "category": "leaver", + "description": "Remove user from all Teams memberships", + "displayName": "Remove user from all Teams", + "id": "81f7b200-2816-4b3b-8c5d-dc556f07b024", + "version": 1, + "parameters": [] + } + ] +``` +### Delete user account ++```Example +"tasks":[ + { + "category": "leaver", + "description": "Delete user account in Azure AD", + "displayName": "Delete User Account", + "id": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff", + "version": 1, + "parameters": [] + } + ] +``` +### Scheduled leaver workflow ++The following POST API call will create a scheduled leaver workflow to configure off-boarding tasks for employees after their last day of work. 
++```http +POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows +Content-type: application/json ++{ + "category": "leaver", + "displayName": "Post-Offboarding of an employee", + "description": "Configure offboarding tasks for employees after their last day of work", + "isEnabled": true, + "isSchedulingEnabled": false, + "executionConditions": { + "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions", + "scope": { + "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet", + "rule": "department eq 'Marketing'" + }, + "trigger": { + "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger", + "timeBasedAttribute": "employeeLeaveDateTime", + "offsetInDays": 7 + } + }, + "tasks": [ + { + "category": "leaver", + "continueOnError": false, + "description": "Remove all licenses assigned to the user", + "displayName": "Remove all licenses for user", + "executionSequence": 1, + "isEnabled": true, + "taskDefinitionId": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e", + "arguments": [] + }, + { + "category": "leaver", + "continueOnError": false, + "description": "Remove user from all Teams memberships", + "displayName": "Remove user from all Teams", + "executionSequence": 2, + "isEnabled": true, + "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024", + "arguments": [] + }, + { + "category": "leaver", + "continueOnError": false, + "description": "Delete user account in Azure AD", + "displayName": "Delete User Account", + "executionSequence": 3, + "isEnabled": true, + "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff", + "arguments": [] + } + ] +} +``` ++## Run the workflow +Now that the workflow is created, it will automatically run the workflow every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. 
To run a workflow immediately, we can use the on-demand feature. ++>[!NOTE] +>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature. ++To run a workflow on demand for users using the Graph API, do the following steps: ++1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). +2. At the top, make sure the method is set to **POST**, the version to **beta**, and `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the box. Change `<id>` to the ID of your workflow. + 3. Copy the code below into the **Request body**. + 4. Replace `<userid>` in the code below with the value of the user's ID. + 5. Select **Run query**. + ```json + { + "subjects": [ + { "id": "<userid>" } + ] + } + ``` ++## Check tasks and workflow status ++At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots: users, runs, and tasks, which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user-focused reports. ++To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response of the POST API call that was used to create the workflow. ++This example will show you how to list the userProcessingResults for the last 7 days. ++```http +GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults +``` +Furthermore, it is possible to get a summary of the userProcessingResults to get a quicker overview of large amounts of data, but for this a time span must be specified. 
++```http +GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z) +``` +You may also check the full details about the tasks of a given userProcessingResult. You will need to provide the workflow ID, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above. ++```http +GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults +``` +## Enable the workflow schedule ++After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To do so, run the following PATCH call. ++```http +PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id> +Content-type: application/json ++{ + "displayName":"Post-Offboarding of an employee", + "description":"Configure offboarding tasks for employees after their last day of work", + "isEnabled": true, + "isSchedulingEnabled": true +} ++``` ++## Next steps +- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md) +- [Automate employee offboarding tasks after their last day of work with Azure portal (preview)](tutorial-scheduled-leaver-portal.md) |
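The scheduled leaver workflow definition POSTed in this tutorial can also be assembled programmatically, which keeps the three task definition IDs and the execution conditions in one place. The following Python sketch builds the same request body shown above and checks its shape locally; it does not call the API, and the helper name is this example's own.

```python
# Task definition IDs taken from this tutorial's task examples.
LEAVER_TASKS = [
    ("8fa97d28-3e52-4985-b3a9-a1126f9b8b4e", "Remove all licenses for user",
     "Remove all licenses assigned to the user"),
    ("81f7b200-2816-4b3b-8c5d-dc556f07b024", "Remove user from all Teams",
     "Remove user from all Teams memberships"),
    ("8d18588d-9ad3-4c0f-99d0-ec215f0e3dff", "Delete User Account",
     "Delete user account in Azure AD"),
]

def build_scheduled_leaver_workflow(department="Marketing", offset_in_days=7):
    # Mirrors the POST body used to create the workflow in this tutorial.
    return {
        "category": "leaver",
        "displayName": "Post-Offboarding of an employee",
        "description": "Configure offboarding tasks for employees after their last day of work",
        "isEnabled": True,
        "isSchedulingEnabled": False,
        "executionConditions": {
            "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
            "scope": {
                "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
                "rule": f"department eq '{department}'",
            },
            "trigger": {
                "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
                "timeBasedAttribute": "employeeLeaveDateTime",
                "offsetInDays": offset_in_days,
            },
        },
        "tasks": [
            {
                "category": "leaver",
                "continueOnError": False,
                "description": desc,
                "displayName": name,
                "executionSequence": seq,
                "isEnabled": True,
                "taskDefinitionId": task_id,
                "arguments": [],
            }
            for seq, (task_id, name, desc) in enumerate(LEAVER_TASKS, start=1)
        ],
    }

wf = build_scheduled_leaver_workflow()
print(len(wf["tasks"]))  # 3
```

Keeping `isSchedulingEnabled` as `False` at creation time matches the tutorial's flow: run the workflow on demand first, then enable the schedule with the PATCH call shown above.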
active-directory | Tutorial Scheduled Leaver Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md | + + Title: Automate employee offboarding tasks after their last day of work with Azure portal (preview) +description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with Azure portal (preview). +++++++ Last updated : 08/18/2022+++++# Automate employee offboarding tasks after their last day of work with Azure portal (preview) ++This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal. ++This post off-boarding scenario will run a scheduled workflow and accomplish the following tasks: + +1. Remove all licenses for user +2. Remove user from all Teams +3. Delete user account ++## Before you begin ++As part of the prerequisites for completing this tutorial, you will need an account that has licenses and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). ++The scheduled leaver scenario can be broken down into the following: +- **Prerequisite:** Create a user account that represents an employee leaving your organization +- **Prerequisite:** Prepare the user account with licenses and Teams memberships +- Create the lifecycle management workflow +- Run the scheduled workflow after last day of work +- Verify that the workflow was successfully executed ++## Create a workflow using scheduled leaver template +Use the following steps to create a scheduled leaver workflow that will configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal. ++ 1. Sign in to the Azure portal. + 2. 
On the right, select **Azure Active Directory**. + 3. Select **Identity Governance**. + 4. Select **Lifecycle workflows (Preview)**. + 5. On the **Overview (Preview)** page, select **New workflow**. + :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png"::: ++ 6. From the templates, select **Select** under **Post-offboarding of an employee**. + :::image type="content" source="media/tutorial-lifecycle-workflows/select-leaver-template.png" alt-text="Screenshot of selecting a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/select-leaver-template.png"::: ++ 7. Next, you will configure the basic information about the workflow. This information includes when the workflow will trigger, known as **Days from event**. So in this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**. + :::image type="content" source="media/tutorial-lifecycle-workflows/leaver-basics.png" alt-text="Screenshot of leaver template basics information for a workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-basics.png"::: + + 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**. + :::image type="content" source="media/tutorial-lifecycle-workflows/leaver-scope.png" alt-text="Screenshot of reviewing scope details for a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-scope.png"::: ++ 9. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you are finished. 
+ :::image type="content" source="media/tutorial-lifecycle-workflows/review-leaver-tasks.png" alt-text="Screenshot of leaver workflow tasks." lightbox="media/tutorial-lifecycle-workflows/review-leaver-tasks.png"::: ++10. On the review blade, verify the information is correct and select **Create**. + :::image type="content" source="media/tutorial-lifecycle-workflows/create-leaver-workflow.png" alt-text="Screenshot of a leaver workflow being created." lightbox="media/tutorial-lifecycle-workflows/create-leaver-workflow.png"::: ++>[!NOTE] +> Select **Create** with the **Enable schedule** box unchecked to run the workflow on-demand. You may enable this setting later after checking the tasks and workflow status. ++## Run the workflow +Now that the workflow is created, it will run automatically every 3 hours. Lifecycle workflows check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for the tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature. ++>[!NOTE] +>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature. ++To run a workflow on-demand in the Azure portal, follow these steps: ++ 1. On the workflow screen, select the specific workflow you want to run. + 2. Select **Run on demand**. + 3. On the **select users** tab, select **add users**. + 4. Add a user. + 5. Select **Run workflow**. ++ +## Check tasks and workflow status ++At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots: users, runs, and tasks, which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). 
In the course of this tutorial, we will look at the status using the user focused reports. ++ 1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses. + :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-post-offboard.png" alt-text="Screenshot of the workflow history summary." lightbox="media/tutorial-lifecycle-workflows/workflow-history-post-offboard.png"::: ++1. Once the **Workflow history (Preview)** tab has been selected, you will land on the workflow history page as shown. + :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-post-offboard.png" alt-text="Screenshot of the workflow history overview." lightbox="media/tutorial-lifecycle-workflows/user-summary-post-offboard.png"::: ++1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith. + :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks-post-offboard.png" alt-text="Screenshot of workflow's total tasks." lightbox="media/tutorial-lifecycle-workflows/total-tasks-post-offboard.png"::: ++1. To add an extra layer of granularity, you may select **Failed tasks** for the user Wade Warren to view the total number of failed tasks assigned to the user Wade Warren. + :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks-post-offboard.png" alt-text="Screenshot of workflow failed tasks." lightbox="media/tutorial-lifecycle-workflows/failed-tasks-post-offboard.png"::: ++1. Similarly, you may select **Unprocessed tasks** for the user Wade Warren to view the total number of unprocessed or canceled tasks assigned to the user Wade Warren. + :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks-post-offboard.png" alt-text="Screenshot of workflow unprocessed tasks." 
lightbox="media/tutorial-lifecycle-workflows/canceled-tasks-post-offboard.png"::: ++## Enable the workflow schedule ++After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may select the **Enable Schedule** checkbox on the Properties (Preview) page. ++ :::image type="content" source="media/tutorial-lifecycle-workflows/enable-schedule.png" alt-text="Screenshot of workflow enabled schedule." lightbox="media/tutorial-lifecycle-workflows/enable-schedule.png"::: ++## Next steps +- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md) +- [Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview)](tutorial-scheduled-leaver-graph.md) +++++++ |
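The scheduled trigger in this tutorial fires a fixed number of days after the employee's leave date, and the engine then evaluates the condition on its 3-hour cycle. The due-date check can be sketched as follows; the function name and dates are illustrative only, not part of the Lifecycle Workflows API:

```python
from datetime import datetime, timedelta

OFFSET_DAYS = 7  # the "Days from event" value configured in the template

def workflow_is_due(leave_date: datetime, now: datetime,
                    offset_days: int = OFFSET_DAYS) -> bool:
    """Return True once 'now' has reached the trigger date:
    the employee's leave date plus the configured offset."""
    return now >= leave_date + timedelta(days=offset_days)

leave = datetime(2022, 8, 1)
assert not workflow_is_due(leave, datetime(2022, 8, 5))  # only four days after the leave date
assert workflow_is_due(leave, datetime(2022, 8, 8))      # seven days after the leave date
```

Each 3-hour evaluation pass would apply a check like this to every user matched by the scope rule.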
active-directory | Understanding Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md | + + Title: 'Understanding lifecycle workflows - Azure Active Directory' +description: Describes an overview of Lifecycle workflows and the various parts. ++++++ Last updated : 01/20/2022+++++# Understanding lifecycle workflows ++The following reference document provides an overview of a workflow created using Lifecycle Workflows. Lifecycle Workflows allow you to create workflows that automate common tasks associated with user lifecycle in organizations. Lifecycle Workflows automate tasks based on the joiner-mover-leaver cycle of lifecycle management, and split tasks into categories based on where users are in the lifecycle of an organization. These categories extend into templates where they can be quickly customized to suit the needs of users in your organization. For more information, see: [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md). ++ [](media/understanding-lifecycle-workflows/workflow-2.png#lightbox) ++## Licenses and Permissions ++++|Permission |Display String |Description |Admin Consent Required | +||||| +|LifecycleWorkflows.Read.All | Read all Lifecycle workflows, tasks, user states| Allows the app to list and read all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes +|LifecycleWorkflows.ReadWrite.All | Read and write all lifecycle workflows, tasks, user states.| Allows the app to create, update, list, read and delete all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes ++## Parts of a workflow +A workflow can be broken down into the following three main parts. 
++ ++|Workflow part|Description| +|--|--| +|General information|This portion of a workflow covers basic information such as display name and a description of what the workflow does.| +|Tasks|Tasks are the actions that will be taken when a workflow is executed.| +|Execution conditions| The execution condition section of a workflow sets up<br><br>- Who (scope) the workflow runs against <br><br>- When (trigger) the workflow runs| ++## Templates +Creating a workflow via the portal requires the use of a template. A Lifecycle Workflow template is a framework that is used for pre-defined tasks and helps automate the creation of a workflow. ++ [](media/understanding-lifecycle-workflows/workflow-3.png#lightbox) ++The template will define the task that is to be used and then guide you through the creation of the workflow. The template provides input for description information and execution condition information. ++>[!NOTE] +>Depending on the template you select, the options that will be available may vary. This document uses the **Onboarding pre-hire employee** template to illustrate the parts of a workflow. ++For more information, see [Lifecycle workflow templates](lifecycle-workflow-templates.md). ++## Workflow basics ++After selecting a template, on the basics screen: + - Provide the information that will be used in the description portion of the workflow. + - Set the trigger, which defines the **when** of the execution condition. ++ [](media/understanding-lifecycle-workflows/workflow-4.png#lightbox) ++### Workflow details +Under the workflow details section, you can provide the following information: ++ |Name|Description| + |--|--| + |Name|The name of the workflow.| + |Description|A brief description that describes the workflow.| ++### Trigger details +Under the trigger details section, you can provide the following information. 
++ |Name|Description| + |--|--| + |Days for event|The number of days before or after the date specified in the **Event user attribute**.| ++This section defines **when** the workflow will run. Currently, there are two supported types of triggers: + +- Trigger and scope based - runs the task on all users in scope once the workflow is triggered. +- On-demand - can be run immediately. Typically used for real-time employee terminations. ++## Configure scope +After you define the basics tab, on the configure scope screen: + - Provide the information that will be used in the execution condition, to determine who the workflow will run against. + - Add more expressions to create more complex filtering. ++The configure scope section determines **who** the workflow will run against. ++ [](media/understanding-lifecycle-workflows/workflow-5.png#lightbox) ++You can add extra expressions using **And/Or** to create complex conditionals, and apply the workflow more granularly across your organization. ++ [](media/understanding-lifecycle-workflows/workflow-8.png#lightbox) ++For more information, see [Create a lifecycle workflow.](create-lifecycle-workflow.md) +++## Review tasks +After defining the scope the review tasks screen will allow you to: + - Verify that the correct template was selected, and the tasks associated with the workflow are correct. + - Add more tasks other than the ones in the template. ++[](media/understanding-lifecycle-workflows/workflow-6.png#lightbox) ++You can use the **Add task** button to add extra tasks for the workflow. Select the additional tasks from the list provided. ++ [](media/understanding-lifecycle-workflows/workflow-6.png#lightbox) ++For more information, see: [Lifecycle workflow tasks](lifecycle-workflow-tasks.md) ++## Review and create ++After reviewing the tasks on the review and create screen, you: + - Verify all of the information is correct, and create the workflow. 
++ ++ Based on what was defined in the previous sections, our workflow will now show: +- It's named **on-board pre-hire employee**. +- Based on the date in the **EmployeeHireDate** attribute, it will trigger **seven** (7) days prior to the date. +- It will run against users who have **marketing** for the **department** attribute value. +- It will generate a **TAP (temporary access password)**, and send an email to the user in the **manager** attribute of the pre-hire employee. ++ [](media/understanding-lifecycle-workflows/workflow-7.png#lightbox) ++## Scheduling +A workflow isn't scheduled to run by default. To enable the workflow, it needs to be scheduled. ++To verify whether the workflow is scheduled, you can view the **Scheduled** column. ++To enable the workflow, select the **Enable schedule** option for the workflow. ++Once scheduled, the workflow will be evaluated every 3 hours to determine whether or not it should run based on the execution conditions. ++ [](media/understanding-lifecycle-workflows/workflow-10.png#lightbox) +++### On-demand scheduling ++A workflow can be run on-demand for testing or in situations where it's required. ++Use the **Run on demand** feature to execute the workflow immediately. The workflow must be enabled before you can run it on demand. ++>[!NOTE] +> A workflow that is run on demand for any user does not take into account whether or not a user meets the workflow's execution conditions. It will apply the tasks regardless of whether the execution conditions are met. ++For more information, see [Run a workflow on-demand](on-demand-workflow.md) ++## Managing the workflow ++By selecting a workflow you created, you can manage it. ++You can select which portion of the workflow you wish to update or change using the left navigation bar. Select the section you wish to update. 
++[](media/understanding-lifecycle-workflows/workflow-11.png#lightbox) ++For more information, see [Manage lifecycle workflow properties](manage-workflow-properties.md) ++## Versioning ++Workflow versions are separate workflows built from the same information as an original workflow, but with updated parameters so that they're reported differently within logs. Workflow versions can change the actions or even scope of an existing workflow. ++You can view versioning information by selecting **Versions** under **Manage** from the left. ++[](media/understanding-lifecycle-workflows/workflow-12.png#lightbox) ++For more information, see [Lifecycle Workflow versioning](lifecycle-workflow-versioning.md) ++## Developer information +This document covers the parts of a lifecycle workflow. ++For more information, see the [Workflow API Reference](lifecycle-workflows-developer-reference.md) ++## Next steps +- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md) +- [Create a Lifecycle workflow](create-lifecycle-workflow.md) |
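The parts described in this overview — general information, tasks, and execution conditions made up of a scope (who) and a trigger (when) — can be pictured as one object. The sketch below uses field names modeled loosely on the Microsoft Graph Lifecycle Workflows beta schema; treat every name as illustrative rather than the exact API contract:

```python
# Illustrative only: property names approximate the Graph beta schema
# for lifecycle workflows and may not match the API exactly.
workflow = {
    # General information
    "displayName": "Onboard pre-hire employee",
    "description": "Generate a TAP and email it to the new hire's manager.",
    # Tasks: the actions taken when the workflow executes
    "tasks": [
        {"displayName": "Generate TAP and send email to manager"},
    ],
    # Execution conditions: who (scope) and when (trigger)
    "executionConditions": {
        "scope": {"rule": "department eq 'Marketing'"},
        "trigger": {"timeBasedAttribute": "employeeHireDate", "offsetInDays": -7},
    },
}

# The three main parts from the table above are all present:
assert {"displayName", "description", "tasks", "executionConditions"} <= workflow.keys()
```

A negative `offsetInDays` fires before the attribute date; a positive offset fires after it, matching the "Days from event" setting in the portal.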
active-directory | What Are Lifecycle Workflows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md | + + Title: 'What are lifecycle workflows? - Azure Active Directory' +description: Describes overview of Lifecycle workflow feature. ++++++ Last updated : 01/20/2022++++++# What are Lifecycle Workflows? (Public Preview) ++Azure AD Lifecycle Workflows is a new Azure AD Identity Governance service that enables organizations to manage Azure AD users by automating these three basic lifecycle processes: ++- Joiner - When an individual comes into scope of needing access. An example is a new employee joining a company or organization. +- Mover - When an individual moves between boundaries within an organization. This movement may require more access or authorization. An example would be a user who was in marketing and is now a member of the sales organization. +- Leaver - When an individual leaves the scope of needing access, access may need to be removed. Examples would be an employee who is retiring or an employee who has been terminated. ++Workflows contain specific processes, which run automatically against users as they move through their life cycle. Workflows are made up of [Tasks](lifecycle-workflow-tasks.md) and [Execution conditions](understanding-lifecycle-workflows.md#understanding-lifecycle-workflows). ++Tasks are specific actions that run automatically when a workflow is triggered. An Execution condition defines the 'Scope' of "who" and the 'Trigger' of "when" a workflow will be performed. For example, "send a manager an email 7 days before the value in the NewEmployeeHireDate attribute of new employees" can be described as a workflow. 
It consists of: + - Task: send email + - When (trigger): Seven days before the NewEmployeeHireDate attribute value + - Who (scope): new employees ++Automatic workflow schedules [trigger](understanding-lifecycle-workflows.md#trigger-details) based on user attributes. Scoping of automatic workflows is possible using a wide range of user and extended attributes, such as the "department" that a user belongs to. ++Finally, Lifecycle Workflows can even [integrate with Logic Apps](lifecycle-workflow-extensibility.md) to extend workflows for more complex scenarios using your existing Logic Apps. +++ :::image type="content" source="media/what-are-lifecycle-workflows/intro-2.png" alt-text="Lifecycle Workflows diagram." lightbox="media/what-are-lifecycle-workflows/intro-2.png"::: +++## Why use Lifecycle workflows? +Anyone who wants to modernize their identity lifecycle management process for employees needs to ensure: ++ - **New employee on-boarding** - That when a user joins the organization, they're ready to go on day one. They have the correct access to the information, membership to groups, and applications they need. + - **Employee retirement/terminations/off-boarding** - That users who are no longer tied to the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner. + - **Easy to administer in my organization** - That there's a seamless process to accomplish the above tasks, that isn't overly burdensome or time consuming for Administrators. + - **Robust troubleshooting/auditing/compliance** - That there's the ability to easily troubleshoot issues when they arise and that there's sufficient logging to help with this and compliance related issues. ++The following are key reasons to use Lifecycle workflows. +- **Extend** your HR-driven provisioning process with other workflows that simplify and automate tasks. 
+- **Centralize** your workflow process so you can easily create and manage workflows all in one location. +- Easily **troubleshoot** workflow scenarios with the Workflow history and Audit logs +- **Manage** user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles is reduced. +- **Reduce** or remove manual tasks that were done in the past with automated lifecycle workflows +- **Apply** logic apps to extend workflows for more complex scenarios using your existing Logic apps +++All of the above can help ensure a holistic experience by allowing you to remove other dependencies and applications to achieve the same result. This translates into increased on-boarding and off-boarding efficiency. +++## When to use Lifecycle Workflows +You can use Lifecycle workflows to address any of the following conditions. +- **Automating and extending user onboarding/HR provisioning** - Use Lifecycle workflows when you want to extend your HR provisioning scenarios by automating tasks such as generating temporary passwords and emailing managers. If you currently have a manual process for on-boarding, use Lifecycle workflows as part of an automated process. +- **Automate group membership**: When groups in your organization are well-defined, you can automate user membership of these groups. Some of the benefits and differences from dynamic groups include: + - LCW manages static groups, where a dynamic group rule isn't needed + - No need to have one rule per group – the LCW rule determines the set/scope of users to execute workflows against, not which group + - LCW helps manage users' lifecycle beyond attributes supported in dynamic groups – for example, 'X' days before the employeeHireDate + - LCW can perform actions on the group not just the membership. +- **Workflow history and auditing** Use Lifecycle workflows when you need to create an audit trail of user lifecycle processes. 
Using the portal, you can view history and audits for on-boarding and off-boarding scenarios. +- **Automate user account management**: Making sure users who are leaving have their access to resources revoked is a key part of the identity lifecycle process. Lifecycle Workflows allow you to automate the disabling and removal of user accounts. +- **Integrate with Logic Apps**: Apply Logic Apps to extend workflows for more complex scenarios using your existing Logic Apps. +++++## Next steps +- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md) +- [Create a Lifecycle workflow](create-lifecycle-workflow.md) |
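The joiner example in this article ("send a manager an email 7 days before the value in the NewEmployeeHireDate attribute," scoped to a set of users) decomposes into a task, a trigger offset, and a scope rule. A minimal illustration of that decomposition follows; the user records and the `department` filter are hypothetical, not taken from the product:

```python
from datetime import date, timedelta

# Hypothetical user records: 'department' drives the scope,
# 'employeeHireDate' drives the trigger.
users = [
    {"name": "Jane", "department": "Sales",     "employeeHireDate": date(2022, 9, 15)},
    {"name": "Wade", "department": "Marketing", "employeeHireDate": date(2022, 9, 1)},
]

def in_scope(user: dict) -> bool:
    """Who: the scope part of the execution condition."""
    return user["department"] == "Sales"

def trigger_date(user: dict, offset_days: int = -7) -> date:
    """When: negative offsets fire before the event date."""
    return user["employeeHireDate"] + timedelta(days=offset_days)

# The task ("send email to manager") would run for Jane on 2022-09-08,
# seven days before her hire date; Wade is out of scope.
due = [(u["name"], trigger_date(u)) for u in users if in_scope(u)]
assert due == [("Jane", date(2022, 9, 8))]
```

The real service evaluates this combination of scope and trigger for you; the sketch only shows how the two halves of an execution condition fit together.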
active-directory | Workflows Faqs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md | + + Title: 'Lifecycle workflows FAQs - Azure AD (preview)' +description: Frequently asked questions about Lifecycle workflows (preview). +++++++ Last updated : 07/14/2022+++++# Lifecycle workflows - FAQs (preview) ++In this article you will find answers to commonly asked questions about [Lifecycle Workflows](what-are-lifecycle-workflows.md). Please check back to this page frequently, as changes happen often and answers are continually being added. ++## Frequently asked questions ++### Can I create custom workflows for guests? ++Yes, custom workflows can be configured for members or guests in your tenant. Workflows can run for all types of external guests, external members, internal guests and internal members. ++### Do I need to map employeeHireDate in provisioning apps like WorkDay? ++Yes, key user properties like employeeHireDate and employeeType are supported for user provisioning from HR apps like WorkDay. To use these properties in Lifecycle workflows, you will need to map them in the provisioning process to ensure the values are set. The following is an example of the mapping: ++ ++### How do I see more details and parameters of tasks and the attributes that are being updated? ++Some tasks do update existing attributes; however, we don't currently share those specific details. Because these tasks update attributes related to other Azure AD features, you can find that info in those docs. For temporary access pass, we're writing to the appropriate attributes listed [here](/graph/api/temporaryaccesspassauthenticationmethod-post?view=graph-rest-beta&tabs=csharp#request-body). ++### Is it possible for me to create new tasks and how? For example, triggering other graph APIs/web hooks? ++We currently don't support the ability to create new tasks outside of the set of tasks supported in the task templates. 
As an alternative, you may accomplish this by setting up a logic app and then creating a logic apps task in Lifecycle Workflows with the URL. For more information, see [Trigger Logic Apps based on custom task extensions (preview)](trigger-custom-task.md) ++## Next steps ++- [What are Lifecycle workflows? (Preview)](what-are-lifecycle-workflows.md) |
active-directory | How To Connect Install Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md | We recommend that you harden your Azure AD Connect server to decrease the security attack surface - Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to set up alerts to monitor changes to the trust established between your IdP and Azure AD. - Enable Multi-Factor Authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using AADConnect is that if an attacker can get control over the Azure AD Connect server, they can manipulate users in Azure AD. To prevent an attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to, for example, reset a user's password using Azure AD Connect, they still cannot bypass the second factor.+- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud-only objects to Azure AD Connect, but it comes with certain security risks. If you do not require Soft Matching, you should disable it: https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-syncservice-features#blocksoftmatch ### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. 
If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors). |
active-directory | Permissions Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md | In | Can do [Compliance Manager](/office365/securitycompliance/meet-data-protection-and-regulatory-reqs-using-microsoft-cloud) | Track, assign, and verify your organization's regulatory compliance activities [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | Manage data governance<br>Perform legal and data investigation<br>Manage Data Subject Request<br><br>This role has the same permissions as the [Compliance Administrator RoleGroup](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) in Office 365 Security & Compliance Center role-based access control. [Intune](/intune/role-based-access-control) | View all Intune audit data-[Cloud App Security](/cloud-app-security/manage-admins) | Has read-only permissions and can manage alerts<br>Can create and modify file policies and allow file governance actions<br>Can view all the built-in reports under Data Management +[Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Has read-only permissions and can manage alerts<br>Can create and modify file policies and allow file governance actions<br>Can view all the built-in reports under Data Management > [!div class="mx-tableFixed"] > | Actions | Description | In | Can do [Compliance Manager](/office365/securitycompliance/meet-data-protection-and-regulatory-reqs-using-microsoft-cloud) | Track, assign, and verify your organization's regulatory compliance activities [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | Manage data governance<br>Perform legal and data investigation<br>Manage Data Subject 
Request<br><br>This role has the same permissions as the [Compliance Data Administrator RoleGroup](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) in Office 365 Security & Compliance Center role-based access control. [Intune](/intune/role-based-access-control) | View all Intune audit data-[Cloud App Security](/cloud-app-security/manage-admins) | Has read-only permissions and can manage alerts<br>Can create and modify file policies and allow file governance actions<br>Can view all the built-in reports under Data Management +[Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Has read-only permissions and can manage alerts<br>Can create and modify file policies and allow file governance actions<br>Can view all the built-in reports under Data Management > [!div class="mx-tableFixed"] > | Actions | Description | Identity Protection Center | All permissions of the Security Reader role<br>Addi Azure Advanced Threat Protection | Monitor and respond to suspicious security activity [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | Assign roles<br>Manage machine groups<br>Configure endpoint threat detection and automated remediation<br>View, investigate, and respond to alerts<br/>View machines/device inventory [Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information<br>Cannot make changes to Intune-[Cloud App Security](/cloud-app-security/manage-admins) | Add admins, add policies and settings, upload logs and perform governance actions +[Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Add admins, add policies and settings, upload logs and perform governance actions [Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services [Smart 
lockout](../authentication/howto-password-smart-lockout.md) | Define the threshold and duration for lockouts when failed sign-in events happen. [Password Protection](../authentication/concept-password-ban-bad.md) | Configure custom banned password list or on-premises password protection. Users with this role can manage alerts and have global read-only access on secur | [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | | [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | | [Intune](/intune/role-based-access-control) | All permissions of the Security Reader role |-| [Microsoft Defender for Cloud Apps](/cloud-app-security/manage-admins) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | +| [Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | | [Microsoft 365 service health](/microsoft-365/enterprise/view-service-health) | View the health of Microsoft 365 services | > [!div class="mx-tableFixed"] Identity Protection Center | Read all security reports and settings information [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | View security policies<br>View and investigate security threats<br>View reports [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | View and investigate alerts. 
When you turn on role-based access control in Microsoft Defender for Endpoint, users with read-only permissions such as the Azure AD Security Reader role lose access until they are assigned to a Microsoft Defender for Endpoint role. [Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information. Cannot make changes to Intune.-[Microsoft Defender for Cloud Apps](/cloud-app-security/manage-admins) | Has read permissions. +[Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Has read permissions. [Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services > [!div class="mx-tableFixed"] |
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | + + Title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) (Preview) +description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet. +++ Last updated : 08/29/2022+++# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) ++The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every Pod either from a pre-reserved set of IPs on every node or from a separate subnet reserved for pods. This approach requires IP address planning and could lead to address exhaustion and difficulties in scaling your clusters as your application demands grow. ++With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster uses an overlay network, and Network Address Translation (via the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS. ++> [!NOTE] +> Azure CNI overlay is currently only available in the US West Central region. ++## Overview of overlay networking ++In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. 
Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space. ++A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. There's no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet. +++Communication with endpoints outside the cluster, such as on-premises networks and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet. ++Outbound (egress) connectivity to the internet for overlay pods can be provided using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting). ++Ingress connectivity to the cluster can be achieved using an ingress controller such as Nginx or [HTTP application routing](./http-application-routing.md). ++## Difference between Kubenet and Azure CNI Overlay ++Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically different from the VNet, but it has scaling and other limitations. The following table provides a detailed comparison between Kubenet and Azure CNI Overlay. 
If you don't want to assign VNet IP addresses to pods because of an IP address shortage, Azure CNI Overlay is the recommended solution. ++| Area | Azure CNI Overlay | Kubenet | +| -- | :--: | -- | +| Cluster scale | 1000 nodes and 250 pods/node | 400 nodes and 250 pods/node | +| Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking | +| Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency | +| Kubernetes Network Policies | Azure Network Policies, Calico | Calico | +| OS platforms supported | Linux only | Linux only | ++## IP address planning ++* **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so ensure that you have a subnet big enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations). ++* **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion. +The following are additional factors to consider when planning pod address space: + * Pod CIDR space must not overlap with the cluster subnet range. + * Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks. + * The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet. ++* **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. 
This range should also not overlap with the pod CIDR range, the cluster subnet range, or the IP ranges used in peered VNets and on-premises networks. ++* **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that will be used by cluster service discovery. Don't use the first IP address in your address range. The first address in your subnet range is used for the kubernetes.default.svc.cluster.local address. ++## Maximum pods per node ++You can configure the maximum number of pods per node at the time of cluster creation or when you add a new node pool. The default for Azure CNI Overlay is 30. The maximum value that you can specify in Azure CNI Overlay is 250, and the minimum value is 10. The maximum pods per node value configured during creation of a node pool applies to the nodes in that node pool only. ++## Choosing a network model to use ++Azure CNI offers two IP addressing options for pods: the traditional configuration that assigns VNet IPs to pods, and overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate. ++Use overlay networking when: ++* You would like to scale to a large number of pods but have limited IP address space in your VNet. +* Most of the pod communication is within the cluster. +* You don't need advanced AKS features, such as virtual nodes. ++Use the traditional VNet option when: ++* You have available IP address space. +* Most of the pod communication is to resources outside of the cluster. +* Resources outside the cluster need to reach pods directly. +* You need AKS advanced features, such as virtual nodes. ++## Limitations with Azure CNI Overlay ++The overlay solution has the following limitations today: ++* Only available for Linux and not for Windows. +* You can't deploy multiple overlay clusters in the same subnet. 
+* Overlay can be enabled only for new clusters. Existing (already deployed) clusters can't be configured to use overlay. +* You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster. ++## Steps to set up overlay clusters +++The following example walks through the steps to create a new virtual network with a subnet for the cluster nodes and an AKS cluster that uses Azure CNI Overlay. Be sure to replace the variables with your own values. ++First, opt into the feature by running the following command: ++```azurecli-interactive +az feature register --namespace Microsoft.ContainerService --name AzureOverlayPreview +``` ++Create a virtual network with a subnet for the cluster nodes. ++```azurecli-interactive +resourceGroup="myResourceGroup" +vnet="myVirtualNetwork" +location="westcentralus" ++# Create the resource group +az group create --name $resourceGroup --location $location ++# Create a VNet and a subnet for the cluster nodes +az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none +az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefix 10.10.0.0/16 -o none +``` ++Create a cluster with Azure CNI Overlay. Use `--network-plugin-mode` to specify that this is an overlay cluster. If you don't specify a pod CIDR, AKS assigns the default space `10.244.0.0/16`. ++```azurecli-interactive +clusterName="myOverlayCluster" +subscription="aaaaaaa-aaaaa-aaaaaa-aaaa" ++az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet +``` ++## Frequently asked questions ++* *How do pods and cluster nodes communicate with each other?* ++ Pods and nodes talk to each other directly without any SNAT requirements. 
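The IP planning guidance earlier notes that every node consumes one `/24` carved from the pod CIDR, so the pod CIDR you choose bounds how far the cluster can scale. As a quick sanity check of that arithmetic, here's a short sketch using Python's standard `ipaddress` module (the function name is illustrative only, not an AKS tool):

```python
import ipaddress

def max_overlay_nodes(pod_cidr: str) -> int:
    """Count the /24 blocks in the pod CIDR -- each overlay node consumes one."""
    net = ipaddress.ip_network(pod_cidr, strict=True)
    if net.prefixlen > 24:
        return 0  # CIDR too small to carve out even a single /24
    return 2 ** (24 - net.prefixlen)

print(max_overlay_nodes("10.244.0.0/16"))  # default pod CIDR: 256 blocks
print(max_overlay_nodes("10.240.0.0/14"))  # 1024 blocks
```

Note that the default `/16` yields 256 node-sized blocks, so reaching the 1000-node cluster scale listed in the comparison table would require a pod CIDR of at least `/14`.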
+++* *Can I configure the size of the address space assigned to each space?* ++ No, this is fixed at `/24` today and can't be changed. +++* *Can I add more private pod CIDRs to a cluster after the cluster has been created?* ++ No, a private pod CIDR can only be specified at the time of cluster creation. +++* *What are the max nodes and pods per cluster supported by Overlay?* ++ The max scale in terms of nodes and pods per cluster is the same as the max limits supported by AKS today. |
aks | Cluster Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md | Title: Cluster configuration in Azure Kubernetes Services (AKS) description: Learn how to configure a cluster in Azure Kubernetes Service (AKS) Previously updated : 08/05/2022 Last updated : 08/31/2022 As you work with the node resource group, keep in mind that you can't: - Specify names for the managed resources within the node resource group. - Modify or delete Azure-created tags of managed resources within the node resource group. +## Node Restriction (Preview) ++The [Node Restriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller limits the Node and Pod objects a kubelet can modify. Node Restriction is on by default in AKS 1.24+ clusters. If you're using an older version, use the following commands to create a cluster with Node Restriction, or to update an existing cluster to add Node Restriction. +++### Before you begin ++You must have the following resources installed: ++* The Azure CLI +* The `aks-preview` extension version 0.5.95 or later ++#### Install the aks-preview CLI extension ++```azurecli-interactive +# Install the aks-preview extension +az extension add --name aks-preview ++# Update the extension to make sure you have the latest version installed +az extension update --name aks-preview +``` ++### Create an AKS cluster with Node Restriction ++To create a cluster with Node Restriction: ++```azurecli-interactive +az aks create -n aks -g myResourceGroup --enable-node-restriction +``` ++### Update an AKS cluster with Node Restriction ++To update a cluster to use Node Restriction: ++```azurecli-interactive +az aks update -n aks -g myResourceGroup --enable-node-restriction +``` ++### Remove Node Restriction from an AKS cluster ++To remove Node Restriction from a cluster: 
++```azurecli-interactive +az aks update -n aks -g myResourceGroup --disable-node-restriction +``` + ## OIDC Issuer (Preview) This enables an OIDC Issuer URL of the provider, which allows the API server to discover public signing keys. |
api-management | Api Management Howto Aad B2c | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad-b2c.md | In this section, you'll create a user flow in your Azure Active Directory B2C tenant. 1. Enter a unique name for the user flow. 1. In **Identity providers**, select **Email signup**. 1. In **User attributes and token claims**, select the attributes and claims needed for the API Management developer portal (not needed for the legacy developer portal).-  * **Attributes**: Given Name, Surname- * **Claims**: Email Addresses, Given Name, Surname, User's ObjectID + * **Claims**: Given Name, Surname, Email Addresses, User's ObjectID ++  1. Select **Create**. ## Configure identity provider for developer portal |
api-management | Devops Api Development Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md | An API developer writes an API definition by providing a specification, settings There are several tools to assist producing the API definition: * The [Azure API Management DevOps Resource Toolkit][4] includes two tools that provide an Azure Resource Manager (ARM) template. The _extractor_ creates an ARM template by extracting an API definition from an API Management service. The _creator_ produces the ARM template from a YAML specification. The DevOps Resource Toolkit supports SOAP, REST, and GraphQL APIs.-* The [Azure API Ops Toolkit][5] provides a workflow built on top of a [git][21] source code control system (such as [GitHub][22] or [Azure Repos][23]). It uses an _extractor_ similar to the DevOps Resource Toolkit to produce an API definition that is then applied to a target API Management service. API Ops supports REST only at this time. +* The [Azure APIOps Toolkit][5] provides a workflow built on top of a [git][21] source code control system (such as [GitHub][22] or [Azure Repos][23]). It uses an _extractor_ similar to the DevOps Resource Toolkit to produce an API definition that is then applied to a target API Management service. APIOps supports REST only at this time. * The [dotnet-apim][6] tool converts a well-formed YAML definition into an ARM template for later deployment. The tool is focused on REST APIs. * [Terraform][7] is an alternative to Azure Resource Manager to configure resources in Azure. You can create a Terraform configuration (together with policies) to implement the API in the same way that an ARM template is created. Once the automated tools have been run, the API definition is reviewed by a human. The API definition will be published to an API Management service through a release pipeline. 
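The human review step above is usually backed by automated checks in the pull-request pipeline, and one common check is breaking-change detection between the old and new API definitions. Purpose-built tools such as openapi-diff do this properly; the snippet below is only a toy illustration of the underlying idea (comparing the path sets of two specs), using hypothetical data:

```python
def removed_paths(old_spec: dict, new_spec: dict) -> list:
    """Paths present in the old spec but absent from the new one.
    Removing a path breaks existing clients, so any hit should fail the check."""
    return sorted(set(old_spec["paths"]) - set(new_spec["paths"]))

# Hypothetical before/after OpenAPI fragments
old = {"paths": {"/orders": {}, "/orders/{id}": {}}}
new = {"paths": {"/orders": {}}}

print(removed_paths(old, new))  # ['/orders/{id}']
```

A real detector also compares operations, parameters, and response schemas per path, which is why the article recommends a dedicated tool rather than a hand-rolled check.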
The tools used to publish the API definition depend on the tool used to produce the API definition: * If using the [Azure API Management DevOps Resource Toolkit][4] or [dotnet-apim][6], the API definition is represented as an ARM template. Tasks are available for [Azure Pipelines][17] and [GitHub Actions][18] to deploy an ARM template.-* If using the [Azure API Ops Toolkit][5], the toolkit includes a publisher that writes the API definition to the service. +* If using the [Azure APIOps Toolkit][5], the toolkit includes a publisher that writes the API definition to the service. * If using [Terraform][7], CLI tools will deploy the API definition on your service. There are tasks available for [Azure Pipelines][19] and [GitHub Actions][20] > **Can I use other source code control and CI/CD systems?** >-> Yes. The process described works with any source code control system (although API Ops does require that the source code control system is [git][21] based). Similarly, you can use any CI/CD platform as long as it can be triggered by a check-in and run command line tools that communicate with Azure. +> Yes. The process described works with any source code control system (although APIOps does require that the source code control system is [git][21] based). Similarly, you can use any CI/CD platform as long as it can be triggered by a check-in and run command line tools that communicate with Azure. ## Best practices There's no industry standard for setting up a DevOps pipeline for publishing API * [Azure Repos][23] stores the API definitions in a [git][21] repository. * [Azure Pipelines][17] runs the automated API approval and API publication processes.-* [Azure API Ops Toolkit][5] provides tools and workflows for publishing APIs. +* [Azure APIOps Toolkit][5] provides tools and workflows for publishing APIs. 
We've seen the greatest success in customer deployments, and recommend the following practices: * Set up either [GitHub][22] or [Azure Repos][23] for your source code control system. This choice will determine your choice of pipeline runner as well. GitHub can use [Azure Pipelines][17] or [GitHub Actions][18], whereas Azure Repos must use Azure Pipelines. * Set up an Azure API Management service for each API developer so that they can develop API definitions along with the API service. Use the consumption or developer SKU when creating the service. * Use [policy fragments][24] to reduce the new policy that developers need to write for each API.-* Use the [Azure API Ops Toolkit][5] to extract a working API definition from the developer service. +* Use the [Azure APIOps Toolkit][5] to extract a working API definition from the developer service. * Set up an API approval process that runs on each pull request. The API approval process should include breaking change detection, linting, and automated API testing. -* Use the [Azure API Ops Toolkit][5] publisher to publish the API to your production API Management service. +* Use the [Azure APIOps Toolkit][5] publisher to publish the API to your production API Management service. -Review [Automated API deployments with API Ops][28] in the Azure Architecture Center for more details on how to configure and run a CI/CD deployment pipeline with API Ops. +Review [Automated API deployments with APIOps][28] in the Azure Architecture Center for more details on how to configure and run a CI/CD deployment pipeline with APIOps. ## References * [Azure DevOps Services][25] includes [Azure Repos][23] and [Azure Pipelines][17].-* [Azure API Ops Toolkit][5] provides a workflow for API Management DevOps. +* [Azure APIOps Toolkit][5] provides a workflow for API Management DevOps. * [Spectral][12] provides a linter for OpenAPI specifications. * [openapi-diff][13] provides a breaking change detector for OpenAPI v3 definitions. 
* [Newman][15] provides an automated test runner for Postman collections. |
applied-ai-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/language-support.md | This article lists supported human languages for Immersive Reader features. | Language | Tag | |-|--|+| Afrikaans | af | +| Afrikaans (South Africa) | af-ZA | +| Albanian | sq | +| Albanian (Albania) | sq-AL | +| Amharic | am | +| Amharic (Ethiopia) | am-ET | | Arabic (Egyptian) | ar-EG |+| Arabic (Lebanon) | ar-LB | +| Arabic (Oman) | ar-OM | | Arabic (Saudi Arabia) | ar-SA |+| Azerbaijani | az | +| Azerbaijani (Azerbaijan) | az-AZ | +| Bangla | bn | +| Bangla (Bangladesh) | bn-BD | +| Bangla (India) | bn-IN | +| Bosnian | bs | +| Bosnian (Bosnia & Herzegovina) | bs-BA | | Bulgarian | bg | | Bulgarian (Bulgaria) | bg-BG |+| Burmese | my | +| Burmese (Myanmar) | my-MM | | Catalan | ca | | Catalan (Catalan) | ca-ES | | Chinese | zh | This article lists supported human languages for Immersive Reader features. | Chinese Traditional (Hong Kong SAR) | zh-Hant-HK | | Chinese Traditional (Macao SAR) | zh-Hant-MO | | Chinese Traditional (Taiwan) | zh-Hant-TW |+| Chinese (Literary) | lzh | +| Chinese (Literary, China) | lzh-CN | | Croatian | hr | | Croatian (Croatia) | hr-HR | | Czech | cs | This article lists supported human languages for Immersive Reader features. | English (Hong Kong SAR) | en-HK | | English (India) | en-IN | | English (Ireland) | en-IE |+| English (Kenya) | en-KE | | English (New Zealand) | en-NZ |+| English (Nigeria) | en-NG | | English (Philippines) | en-PH |+| English (Singapore) | en-SG | +| English (South Africa) | en-ZA | +| English (Tanzania) | en-TZ | | English (United Kingdom) | en-GB | | English (United States) | en-US | | Estonian | et-EE |+| Filipino | fil | +| Filipino (Philippines) | fil-PH | | Finnish | fi | | Finnish (Finland) | fi-FI | | French | fr | This article lists supported human languages for Immersive Reader features. 
| French (Canada) | fr-CA | | French (France) | fr-FR | | French (Switzerland) | fr-CH |+| Galician | gl | +| Galician (Spain) | gl-ES | +| Georgian | ka | +| Georgian (Georgia) | ka-GE | | German | de | | German (Austria) | de-AT | | German (Germany) | de-DE | | German (Switzerland)| de-CH | | Greek | el | | Greek (Greece) | el-GR |+| Gujarati | gu | +| Gujarati (India) | gu-IN | | Hebrew | he | | Hebrew (Israel) | he-IL | | Hindi | hi | | Hindi (India) | hi-IN | | Hungarian | hu | | Hungarian (Hungary) | hu-HU |+| Icelandic | is | +| Icelandic (Iceland) | is-IS | | Indonesian | id | | Indonesian (Indonesia) | id-ID |-| Irish | ga-IE | +| Irish | ga | +| Irish (Ireland) | ga-IE | | Italian | it | | Italian (Italy) | it-IT | | Japanese | ja | | Japanese (Japan) | ja-JP |+| Javanese | jv | +| Javanese (Indonesia) | jv-ID | +| Kannada | kn | +| Kannada (India) | kn-IN | +| Kazakh | kk | +| Kazakh (Kazakhstan) | kk-KZ | +| Khmer | km | +| Khmer (Cambodia) | km-KH | | Korean | ko | | Korean (Korea) | ko-KR |+| Lao | lo | +| Lao (Laos) | lo-LA | | Latvian | Lv-LV |+| Latvian (Latvia) | lv-LV | +| Lithuanian | lt | | Lithuanian | lt-LT |+| Macedonian | mk | +| Macedonian (North Macedonia) | mk-MK | | Malay | ms | | Malay (Malaysia) | ms-MY |-| Maltese | Mt-MT | +| Malayalam | ml | +| Malayalam (India) | ml-IN | +| Maltese | mt | +| Maltese (Malta) | mt-MT | +| Marathi | mr | +| Marathi (India) | mr-IN | +| Mongolian | mn | +| Mongolian (Mongolia) | mn-MN | +| Nepali | ne | +| Nepali (Nepal) | ne-NP | | Norwegian Bokmal| nb | | Norwegian Bokmal (Norway) | nb-NO |+| Pashto | ps | +| Pashto (Afghanistan) | ps-AF | +| Persian | fa | +| Persian (Iran) | fa-IR | | Polish | pl | | Polish (Poland) | pl-PL | | Portuguese | pt | | Portuguese (Brazil) | pt-BR |-| Portuguese (Portugal) | pt-PT | +| Portuguese (Portugal) | pt-PT | | Romanian | ro | | Romanian (Romania) | ro-RO | | Russian | ru | | Russian (Russia) | ru-RU |+| Serbian (Cyrillic) | sr-Cyrl | +| Serbian (Cyrillic, 
Serbia) | sr-Cyrl-RS | +| Sinhala | si | +| Sinhala (Sri Lanka) | si-LK | | Slovak | sk | | Slovak (Slovakia) | sk-SK | | Slovenian | sl | | Slovenian (Slovenia) | sl-SI |+| Somali | so | +| Somali (Somalia) | so-SO | | Spanish | es |+| Spanish (Argentina) | es-AR | +| Spanish (Colombia) | es-CO | | Spanish (Latin America) | es-419 | | Spanish (Mexico) | es-MX | | Spanish (Spain) | es-ES |+| Spanish (United States) | es-US | +| Sundanese | su | +| Sundanese (Indonesia) | su-ID | +| Swahili | sw | +| Swahili (Kenya) | sw-KE | | Swedish | sv | | Swedish (Sweden) | sv-SE | | Tamil | ta | | Tamil (India) | ta-IN |+| Tamil (Malaysia) | ta-MY | | Telugu | te | | Telugu (India) | te-IN | | Thai | th | | Thai (Thailand) | th-TH | | Turkish | tr | | Turkish (Turkey) | tr-TR |-| Ukrainian | ur-PK | +| Ukrainian | uk | +| Ukrainian (Ukraine) | uk-UA | +| Urdu | ur | +| Urdu (India) | ur-IN | +| Uzbek | uz | +| Uzbek (Uzbekistan) | uz-UZ | | Vietnamese | vi | | Vietnamese (Vietnam) | vi-VN |-| Welsh | Cy-GB | +| Welsh | cy | +| Welsh (United Kingdom) | cy-GB | +| Zulu | zu | +| Zulu (South Africa) | zu-ZA | ## Translation This article lists supported human languages for Immersive Reader features. | Arabic (Egyptian) | ar-EG | | Arabic (Saudi Arabia) | ar-SA | | Armenian | hy |+| Assamese | as | | Azerbaijani | az |-| Afrikaans | af | | Bangla | bn |+| Bashkir | ba | | Bosnian | bs | | Bulgarian | bg | | Bulgarian (Bulgaria) | bg-BG | This article lists supported human languages for Immersive Reader features. | Chinese Traditional (Hong Kong SAR) | zh-Hant-HK | | Chinese Traditional (Macao SAR) | zh-Hant-MO | | Chinese Traditional (Taiwan) | zh-Hant-TW |+| Chinese (Literary) | lzh | | Croatian | hr | | Croatian (Croatia) | hr-HR | | Czech | cs | This article lists supported human languages for Immersive Reader features. 
| Danish | da | | Danish (Denmark) | da-DK | | Dari (Afghanistan) | prs |+| Divehi | dv | | Dutch | nl | | Dutch (Netherlands) | nl-NL | | English | en | This article lists supported human languages for Immersive Reader features. | English (United Kingdom) | en-GB | | English (United States) | en-US | | Estonian | et |+| Faroese | fo | | Fijian | fj | | Filipino | fil | | Finnish | fi | This article lists supported human languages for Immersive Reader features. | French (Canada) | fr-CA | | French (France) | fr-FR | | French (Switzerland) | fr-CH |+| Georgian | ka | | German | de | | German (Austria) | de-AT | | German (Germany) | de-DE | | German (Switzerland)| de-CH |-| Gujarati | gu | | Greek | el | | Greek (Greece) | el-GR |+| Gujarati | gu | | Haitian (Creole) | ht | | Hebrew | he | | Hebrew (Israel) | he-IL | This article lists supported human languages for Immersive Reader features. | Icelandic | is | | Indonesian | id | | Indonesian (Indonesia) | id-ID |+| Inuinnaqtun | ikt | +| Inuktitut | iu | +| Inuktitut (Latin) | iu-Latn | | Irish | ga | | Italian | it | | Italian (Italy) | it-IT | This article lists supported human languages for Immersive Reader features. | Korean (Korea) | ko-KR | | Kurdish (Central) | ku | | Kurdish (Northern) | kmr |+| Kurdish (Central) | ckb | +| Kyrgyz | ky | | Lao | lo | | Latvian | lv | | Lithuanian | lt |+| Macedonian | mk | | Malagasy | mg | | Malay | ms | | Malay (Malaysia) | ms-MY | This article lists supported human languages for Immersive Reader features. | Maltese | mt | | Maori | mi | | Marathi | mr |+| Mongolian (Cyrillic) | mn-Cyrl | +| Mongolian (Traditional) | mn-Mong | | Nepali | ne | | Norwegian Bokmal| nb | | Norwegian Bokmal (Norway) | nb-NO | This article lists supported human languages for Immersive Reader features. 
| Polish (Poland) | pl-PL | | Portuguese | pt | | Portuguese (Brazil) | pt-BR |-| Portuguese (Portugal) | pt-PT | +| Portuguese (Portugal) | pt-PT | | Punjabi | pa | | Querétaro Otomi | otq | | Romanian | ro | This article lists supported human languages for Immersive Reader features. | Russian (Russia) | ru-RU | | Samoan | sm | | Serbian | sr |-| Serbian(Cyrillic) | sr-Cyrl | +| Serbian (Cyrillic) | sr-Cyrl | | Serbian (Latin) | sr-Latn | | Slovak | sk | | Slovak (Slovakia) | sk-SK | | Slovenian | sl | | Slovenian (Slovenia) | sl-SI |+| Somali | so | | Spanish | es | | Spanish (Latin America) | es-419 | | Spanish (Mexico) | es-MX | This article lists supported human languages for Immersive Reader features. | Tahitian | ty | | Tamil | ta | | Tamil (India) | ta-IN |+| Tatar | tt | | Telugu | te | | Telugu (India) | te-IN | | Thai | th | | Thai (Thailand) | th-TH |+| Tibetan | bo | | Tigrinya | ti | | Tongan | to | | Turkish | tr | | Turkish (Turkey) | tr-TR |+| Turkmen | tk | | Ukrainian | uk |+| Upper Sorbian | hsb | | Urdu | ur |+| Uyghur | ug | | Vietnamese | vi | | Vietnamese (Vietnam) | vi-VN | | Welsh | cy | | Yucatec Maya | yua | | Yue Chinese | yue |-+| Zulu | zu | ## Language detection |
applied-ai-services | Tutorial Ios Picture Immersive Reader | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md | In the main project folder, which contains the ViewController.swift file, create Open AppDelegate.swift and replace the file with the following code. +```swift +import UIKit ++@UIApplicationMain +class AppDelegate: UIResponder, UIApplicationDelegate { + + var window: UIWindow? + + var navigationController: UINavigationController? ++ func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool { + // Override point for customization after application launch. + + window = UIWindow(frame: UIScreen.main.bounds) + + // Allow the app run without a storyboard + if let window = window { + let mainViewController = PictureLaunchViewController() + navigationController = UINavigationController(rootViewController: mainViewController) + window.rootViewController = navigationController + window.makeKeyAndVisible() + } + return true + } ++ func applicationWillResignActive(_ application: UIApplication) { + // Sent when the application is about to move from active to inactive state. This can occur for certain types of temporary interruptions (such as an incoming phone call or SMS message) or when the user quits the application and it begins the transition to the background state. + // Use this method to pause ongoing tasks, disable timers, and invalidate graphics rendering callbacks. Games should use this method to pause the game. + } ++ func applicationDidEnterBackground(_ application: UIApplication) { + // Use this method to release shared resources, save user data, invalidate timers, and store enough application state information to restore your application to its current state in case it is terminated later. 
+ // If your application supports background execution, this method is called instead of applicationWillTerminate: when the user quits. + } ++ func applicationWillEnterForeground(_ application: UIApplication) { + // Called as part of the transition from the background to the active state; here you can undo many of the changes made on entering the background. + } ++ func applicationDidBecomeActive(_ application: UIApplication) { + // Restart any tasks that were paused (or not yet started) while the application was inactive. If the application was previously in the background, optionally refresh the user interface. + } ++ func applicationWillTerminate(_ application: UIApplication) { + // Called when the application is about to terminate. Save data if appropriate. See also applicationDidEnterBackground:. + } ++} +``` + ## Add functionality for taking and uploading photos Rename ViewController.swift to PictureLaunchViewController.swift and replace the file with the following code. +```swift +import UIKit +import immersive_reader_sdk ++class PictureLaunchViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate { ++ private var photoButton: UIButton! + private var cameraButton: UIButton! + private var titleText: UILabel! + private var bodyText: UILabel! + private var sampleContent: Content! + private var sampleChunk: Chunk! + private var sampleOptions: Options! + private var imagePicker: UIImagePickerController! + private var spinner: UIActivityIndicatorView! + private var activityIndicatorBackground: UIView! 
+ private var textURL = "vision/v2.0/read/core/asyncBatchAnalyze"; + + override func viewDidLoad() { + super.viewDidLoad() + + view.backgroundColor = .white + + titleText = UILabel() + titleText.text = "Picture to Immersive Reader with OCR" + titleText.font = UIFont.boldSystemFont(ofSize: 32) + titleText.textAlignment = .center + titleText.lineBreakMode = .byWordWrapping + titleText.numberOfLines = 0 + view.addSubview(titleText) + + bodyText = UILabel() + bodyText.text = "Capture or upload a photo of handprinted text on a piece of paper, handwriting, typed text, text on a computer screen, writing on a white board and many more, and watch it be presented to you in the Immersive Reader!" + bodyText.font = UIFont.systemFont(ofSize: 18) + bodyText.lineBreakMode = .byWordWrapping + bodyText.numberOfLines = 0 + let screenSize = self.view.frame.height + if screenSize <= 667 { + // Font size for smaller iPhones. + bodyText.font = bodyText.font.withSize(16) ++ } else if screenSize <= 812.0 { + // Font size for medium iPhones. + bodyText.font = bodyText.font.withSize(18) + + } else if screenSize <= 896 { + // Font size for larger iPhones. + bodyText.font = bodyText.font.withSize(20) + + } else { + // Font size for iPads. 
+ bodyText.font = bodyText.font.withSize(26) + } + view.addSubview(bodyText) + + photoButton = UIButton() + photoButton.backgroundColor = .darkGray + photoButton.contentEdgeInsets = UIEdgeInsets(top: 10, left: 5, bottom: 10, right: 5) + photoButton.layer.cornerRadius = 5 + photoButton.setTitleColor(.white, for: .normal) + photoButton.setTitle("Choose Photo from Library", for: .normal) + photoButton.titleLabel?.font = UIFont.systemFont(ofSize: 18, weight: .bold) + photoButton.addTarget(self, action: #selector(selectPhotoButton(sender:)), for: .touchUpInside) + view.addSubview(photoButton) + + cameraButton = UIButton() + cameraButton.backgroundColor = .darkGray + cameraButton.contentEdgeInsets = UIEdgeInsets(top: 10, left: 5, bottom: 10, right: 5) + cameraButton.layer.cornerRadius = 5 + cameraButton.setTitleColor(.white, for: .normal) + cameraButton.setTitle("Take Photo", for: .normal) + cameraButton.titleLabel?.font = UIFont.systemFont(ofSize: 18, weight: .bold) + cameraButton.addTarget(self, action: #selector(takePhotoButton(sender:)), for: .touchUpInside) + view.addSubview(cameraButton) + + activityIndicatorBackground = UIView() + activityIndicatorBackground.backgroundColor = UIColor.black + activityIndicatorBackground.alpha = 0 + view.addSubview(activityIndicatorBackground) + view.bringSubviewToFront(_: activityIndicatorBackground) + + spinner = UIActivityIndicatorView(style: .whiteLarge) + view.addSubview(spinner) + + let layoutGuide = view.safeAreaLayoutGuide + + titleText.translatesAutoresizingMaskIntoConstraints = false + titleText.topAnchor.constraint(equalTo: layoutGuide.topAnchor, constant: 25).isActive = true + titleText.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 20).isActive = true + titleText.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -20).isActive = true + + bodyText.translatesAutoresizingMaskIntoConstraints = false + bodyText.topAnchor.constraint(equalTo: titleText.bottomAnchor, constant: 
35).isActive = true + bodyText.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 20).isActive = true + bodyText.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -20).isActive = true ++ cameraButton.translatesAutoresizingMaskIntoConstraints = false + if screenSize > 896 { + // Constraints for iPads. + cameraButton.heightAnchor.constraint(equalToConstant: 150).isActive = true + cameraButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 60).isActive = true + cameraButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -60).isActive = true + cameraButton.topAnchor.constraint(equalTo: bodyText.bottomAnchor, constant: 150).isActive = true + } else { + // Constraints for iPhones. + cameraButton.heightAnchor.constraint(equalToConstant: 100).isActive = true + cameraButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 30).isActive = true + cameraButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -30).isActive = true + cameraButton.topAnchor.constraint(equalTo: bodyText.bottomAnchor, constant: 100).isActive = true + } + cameraButton.bottomAnchor.constraint(equalTo: photoButton.topAnchor, constant: -40).isActive = true + + photoButton.translatesAutoresizingMaskIntoConstraints = false + if screenSize > 896 { + // Constraints for iPads. + photoButton.heightAnchor.constraint(equalToConstant: 150).isActive = true + photoButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 60).isActive = true + photoButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -60).isActive = true + } else { + // Constraints for iPhones. 
+ photoButton.heightAnchor.constraint(equalToConstant: 100).isActive = true + photoButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 30).isActive = true + photoButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -30).isActive = true + } + + spinner.translatesAutoresizingMaskIntoConstraints = false + spinner.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true + spinner.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true + + activityIndicatorBackground.translatesAutoresizingMaskIntoConstraints = false + activityIndicatorBackground.topAnchor.constraint(equalTo: layoutGuide.topAnchor).isActive = true + activityIndicatorBackground.bottomAnchor.constraint(equalTo: layoutGuide.bottomAnchor).isActive = true + activityIndicatorBackground.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor).isActive = true + activityIndicatorBackground.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor).isActive = true + + // Create content and options. + sampleChunk = Chunk(content: bodyText.text!, lang: nil, mimeType: nil) + sampleContent = Content( Title: titleText.text!, chunks: [sampleChunk]) + sampleOptions = Options(uiLang: nil, timeout: nil, uiZIndex: nil) + } + + @IBAction func selectPhotoButton(sender: AnyObject) { + // Launch the photo picker. + imagePicker = UIImagePickerController() + imagePicker.delegate = self + self.imagePicker.sourceType = .photoLibrary + self.imagePicker.allowsEditing = true + self.present(self.imagePicker, animated: true, completion: nil) + self.photoButton.isEnabled = true + } + + @IBAction func takePhotoButton(sender: AnyObject) { + if !UIImagePickerController.isSourceTypeAvailable(.camera) { + // If there is no camera on the device, disable the button + self.cameraButton.backgroundColor = .gray + self.cameraButton.isEnabled = true + + } else { + // Launch the camera. 
+ imagePicker = UIImagePickerController() + imagePicker.delegate = self + self.imagePicker.sourceType = .camera + self.present(self.imagePicker, animated: true, completion: nil) + self.cameraButton.isEnabled = true + } + } ++ func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) { + imagePicker.dismiss(animated: true, completion: nil) + photoButton.isEnabled = false + cameraButton.isEnabled = false + self.spinner.startAnimating() + activityIndicatorBackground.alpha = 0.6 + + // Retrieve the image. + let image = (info[.originalImage] as? UIImage)! + + // Retrieve the byte array from image. + let imageByteArray = image.jpegData(compressionQuality: 1.0) + + // Call the getTextFromImage function passing in the image the user takes or chooses. + getTextFromImage(subscriptionKey: Constants.computerVisionSubscriptionKey, getTextUrl: Constants.computerVisionEndPoint + textURL, pngImage: imageByteArray!, onSuccess: { cognitiveText in + print("cognitive text is: \(cognitiveText)") + DispatchQueue.main.async { + self.photoButton.isEnabled = true + self.cameraButton.isEnabled = true + } + + // Create content and options with the text from the image. + let sampleImageChunk = Chunk(content: cognitiveText, lang: nil, mimeType: nil) + let sampleImageContent = Content( Title: "Text from image", chunks: [sampleImageChunk]) + let sampleImageOptions = Options(uiLang: nil, timeout: nil, uiZIndex: nil) + + // Callback to get token for Immersive Reader. 
+ self.getToken(onSuccess: {cognitiveToken in + + DispatchQueue.main.async { + + launchImmersiveReader(navController: self.navigationController!, token: cognitiveToken, subdomain: Constants.subdomain, content: sampleImageContent, options: sampleImageOptions, onSuccess: { + self.spinner.stopAnimating() + self.activityIndicatorBackground.alpha = 0 + self.photoButton.isEnabled = true + self.cameraButton.isEnabled = true + + }, onFailure: { error in + print("An error occurred launching the Immersive Reader: \(error)") + self.spinner.stopAnimating() + self.activityIndicatorBackground.alpha = 0 + self.photoButton.isEnabled = true + self.cameraButton.isEnabled = true + + }) + } ++ }, onFailure: { error in + DispatchQueue.main.async { + self.photoButton.isEnabled = true + self.cameraButton.isEnabled = true + + } + print("An error occurred retrieving the token: \(error)") + }) + + }, onFailure: { error in + DispatchQueue.main.async { + self.photoButton.isEnabled = true + self.cameraButton.isEnabled = true + } + + }) + } + + /// Retrieves the token for the Immersive Reader using Azure Active Directory authentication + /// + /// - Parameters: + /// - onSuccess: A closure that gets called when the token is successfully received using Azure Active Directory authentication. + /// - theToken: The token for the Immersive Reader received using Azure Active Directory authentication. + /// - onFailure: A closure that gets called when the token fails to be obtained from Azure Active Directory authentication. + /// - theError: The error that occurred when the token fails to be obtained from Azure Active Directory authentication. 
+ func getToken(onSuccess: @escaping (_ theToken: String) -> Void, onFailure: @escaping ( _ theError: String) -> Void) { + + let tokenForm = "grant_type=client_credentials&resource=https://cognitiveservices.azure.com/&client_id=" + Constants.clientId + "&client_secret=" + Constants.clientSecret + let tokenUrl = "https://login.windows.net/" + Constants.tenantId + "/oauth2/token" + + var responseTokenString: String = "0" + + let url = URL(string: tokenUrl)! + var request = URLRequest(url: url) + request.httpBody = tokenForm.data(using: .utf8) + request.httpMethod = "POST" + + let task = URLSession.shared.dataTask(with: request) { data, response, error in + guard let data = data, + let response = response as? HTTPURLResponse, + // Check for networking errors. + error == nil else { + print("error", error ?? "Unknown error") + onFailure("Error") + return + } + + // Check for http errors. + guard (200 ... 299) ~= response.statusCode else { + print("statusCode should be 2xx, but is \(response.statusCode)") + print("response = \(response)") + onFailure(String(response.statusCode)) + return + } + + let responseString = String(data: data, encoding: .utf8) + print("responseString = \(String(describing: responseString!))") + + let jsonResponse = try? JSONSerialization.jsonObject(with: data, options: []) + guard let jsonDictionary = jsonResponse as? [String: Any] else { + onFailure("Error parsing JSON response.") + return + } + guard let responseToken = jsonDictionary["access_token"] as? String else { + onFailure("Error retrieving token from JSON response.") + return + } + responseTokenString = responseToken + onSuccess(responseTokenString) + } + + task.resume() + } + + /// Returns the text string after it has been extracted from an image input. + /// + /// - Parameters: + /// - subscriptionKey: The Azure subscription key. + /// - pngImage: Image data in PNG format. 
+ /// - Returns: a string of text representing the text extracted from the image, delivered through the `onSuccess` closure. + func getTextFromImage(subscriptionKey: String, getTextUrl: String, pngImage: Data, onSuccess: @escaping (_ theToken: String) -> Void, onFailure: @escaping ( _ theError: String) -> Void) { + + let url = URL(string: getTextUrl)! + var request = URLRequest(url: url) + request.setValue(subscriptionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key") + request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type") + + // Two REST API calls are required to extract text. The first call is to submit the image for processing, and the next call is to retrieve the text found in the image. + + // Set the body to the image in byte array format. + request.httpBody = pngImage + + request.httpMethod = "POST" + + let task = URLSession.shared.dataTask(with: request) { data, response, error in + guard let data = data, + let response = response as? HTTPURLResponse, + // Check for networking errors. + error == nil else { + print("error", error ?? "Unknown error") + onFailure("Error") + return + } + + // Check for http errors. + guard (200 ... 299) ~= response.statusCode else { + print("statusCode should be 2xx, but is \(response.statusCode)") + print("response = \(response)") + onFailure(String(response.statusCode)) + return + } + + let responseString = String(data: data, encoding: .utf8) + print("responseString = \(String(describing: responseString!))") + + // Send the second call to the API. The first API call returns operationLocation which stores the URI for the second REST API call. + let operationLocation = response.allHeaderFields["Operation-Location"] as? String + + if (operationLocation == nil) { + print("Error retrieving operation location") + onFailure("Error retrieving operation location") + return + } + + // Wait 10 seconds for text recognition to be available as suggested by the Text API documentation. + print("Text submitted. 
Waiting 10 seconds to retrieve the recognized text.") + sleep(10) + + // HTTP GET request with the operationLocation url to retrieve the text. + let getTextUrl = URL(string: operationLocation!)! + var getTextRequest = URLRequest(url: getTextUrl) + getTextRequest.setValue(subscriptionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key") + getTextRequest.httpMethod = "GET" + + // Send the GET request to retrieve the text. + let taskGetText = URLSession.shared.dataTask(with: getTextRequest) { data, response, error in + guard let data = data, + let response = response as? HTTPURLResponse, + // Check for networking errors. + error == nil else { + print("error", error ?? "Unknown error") + onFailure("Error") + return + } + + // Check for http errors. + guard (200 ... 299) ~= response.statusCode else { + print("statusCode should be 2xx, but is \(response.statusCode)") + print("response = \(response)") + onFailure(String(response.statusCode)) + return + } + + // Decode the JSON data into an object. + let customDecoding = try! JSONDecoder().decode(TextApiResponse.self, from: data) + + // Loop through the lines to get all lines of text and concatenate them together. + var textFromImage = "" + for textLine in customDecoding.recognitionResults[0].lines { + textFromImage = textFromImage + textLine.text + " " + } + + onSuccess(textFromImage) + } + taskGetText.resume() ++ } + + task.resume() + } + + // Structs used for decoding the Text API JSON response. + struct TextApiResponse: Codable { + let status: String + let recognitionResults: [RecognitionResult] + } ++ struct RecognitionResult: Codable { + let page: Int + let clockwiseOrientation: Double + let width, height: Int + let unit: String + let lines: [Line] + } ++ struct Line: Codable { + let boundingBox: [Int] + let text: String + let words: [Word] + } ++ struct Word: Codable { + let boundingBox: [Int] + let text: String + let confidence: String? 
+ } + +} +``` + ## Build and run the app Set the archive scheme in Xcode by selecting a simulator or device target. |
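The `getToken` function in the Swift sample above posts an OAuth 2.0 client-credentials form to the Azure AD token endpoint. As a language-neutral illustration, the same request can be sketched in Python (the tenant ID, client ID, and secret below are placeholders; URL-encoding the form is an improvement over the raw string concatenation in the Swift code, which breaks if the secret contains reserved characters):

```python
from urllib import parse, request


def build_token_request(tenant_id: str, client_id: str, client_secret: str) -> request.Request:
    """Builds the client-credentials POST that getToken() performs.

    All argument values are placeholders; substitute your own AAD
    tenant and app registration details.
    """
    token_url = "https://login.windows.net/" + tenant_id + "/oauth2/token"
    # urlencode percent-escapes reserved characters in the secret,
    # unlike plain string concatenation.
    form = parse.urlencode({
        "grant_type": "client_credentials",
        "resource": "https://cognitiveservices.azure.com/",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return request.Request(token_url, data=form.encode("utf-8"), method="POST")
```

Sending the request with `urllib.request.urlopen` would return a JSON body whose `access_token` field corresponds to the token the Swift code extracts.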
azure-app-configuration | Howto Import Export Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md | description: Learn how to import or export configuration data to or from Azure A - Previously updated : 04/06/2022+ Last updated : 08/24/2022 You can import or export data using either the [Azure portal](https://portal.azu ## Import data -Import brings configuration data into an App Configuration store from an existing source. Use the import function to migrate data into an App Configuration store or aggregate data from multiple sources. App Configuration supports importing from another App Configuration store, an App Service resource or a configuration file in JSON, YAML or .properties. +Import brings configuration data into an App Configuration store from an existing source. Use the import function to migrate data into an App Configuration store or aggregate data from multiple sources. -### [Portal](#tab/azure-portal) +This guide shows how to import App Configuration data: ++- [from a configuration file in Json, Yaml or Properties](#import-data-from-a-configuration-file) +- [from an App Configuration store](#import-data-from-an-app-configuration-store) +- [from Azure App Service](#import-data-from-azure-app-service) ++### Import data from a configuration file ++Follow the steps below to import key-values from a file. ++> [!NOTE] +> Importing feature flags from a file is not supported. If a configuration file contains feature flags, they will be imported as regular key-values automatically. ++#### [Portal](#tab/azure-portal) From the Azure portal, follow these steps: 1. Browse to your App Configuration store, and select **Import/export** from the **Operations** menu. 
- :::image type="content" source="./media/import-file.png" alt-text="Screenshot of the Azure portal, importing a file."::: + :::image type="content" source="./media/import-export/import-file.png" alt-text="Screenshot of the Azure portal, importing a file."::: -1. On the **Import** tab, select **Configuration file** under **Source service**. Other options are **App Configuration** and **App Services**. +1. On the **Import** tab, select **Configuration file** under **Source service**. 1. Fill out the form with the following parameters: - | Parameter | Description | Examples | - |--||| - | For language | Choose the language of the file you're importing between .NET, Java (Spring) and Other. | .NET | - | File type | Select the type of file you're importing between YAML, properties or JSON. | JSON | + | Parameter | Description | Example | + |--|--|-| + | For language | Choose the language of the file you're importing between .NET, Java (Spring) and Other. | *.NET* | + | File type | Select the type of file you're importing between Yaml, Properties and Json. | *Json* | 1. Select the **Folder** icon, and browse to the file to import. + > [!NOTE] + > A message is displayed on screen, indicating that the file was fetched successfully. + 1. Fill out the next part of the form: - | Parameter | Description | Example | - |--|--|-| - | Separator | The separator is the character parsed in your imported configuration file to separate key-values which will be added to your configuration store. Select one of the following options: `.`, `,`,`:`, `;`, `/`, `-`. | : | - | Prefix | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. | TestApp:Settings:Backgroundcolor | - | Label | Optional. Select an existing label or enter a new label that will be assigned to your imported key-values. | prod | - | Content type | Optional. Indicate if the file you're importing is a Key Vault reference or a JSON file. 
For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md). | JSON (application/json) | + | Parameter | Description | Example | + |--|--|| + | Separator | The separator is the character parsed in your imported configuration file to separate key-values that will be added to your configuration store. Select one of the following options: *.*, *,*, *:*, *;*, */*, *-*, *_*, *—*. | *;* | + | Prefix | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. The entered prefix will be appended to the beginning of every key you import from this file. | *TestApp:* | + | Label | Optional. Select an existing label or enter a new label that will be assigned to your imported key-values. | *prod* | + | Content type | Optional. Indicate if you're importing a JSON file or Key Vault references. For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md). | *JSON (application/json)* | 1. Select **Apply** to proceed with the import. -### [Azure CLI](#tab/azure-cli) +You've imported key-values from a JSON file, have assigned them the label "prod" and the prefix "TestApp:". The separator ";" is used and all the keys you've imported have content type set as "JSON". ++#### [Azure CLI](#tab/azure-cli) ++From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). -Use the Azure CLI as explained below to import App Configuration data. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). Specify the source of the data: `appconfig`, `appservice` or `file`. Optionally specify a source label with `--src-label` and a label to apply with `--label`. +1. 
Enter the import command `az appconfig kv import` and add the following parameters: -Import all keys and feature flags from a file and apply test label. + | Parameter | Description | Example | + ||-|-| + | `--name` | Enter the name of the App Configuration store you want to import data to. | `my-app-config-store` | + | `--source` | Enter `file` to indicate that you're importing app configuration data from a file. | `file` | + | `--path` | Enter the local path to the file containing the data you want to import. | `C:/Users/john/Downloads/data.json` | + | `--format` | Enter yaml, properties or json to indicate the format of the file you're importing. | `json` | -```azurecli -az appconfig kv import --name <your-app-config-store-name> --label test --source file --path D:/abc.json --format json -``` +1. Optionally also add the following parameters: -Import all keys with label test and apply test2 label. + | Parameter | Description | Example | + ||-|--| + | `--separator` | Optional. The separator is the delimiter used to map the hierarchical structure of a Json/Yaml file to flattened key names. It's required for importing a hierarchical structure and is ignored for property files and feature flags. Select one of the following options: `.`, `,`, `:`, `;`, `/`, `-`, `_`, `—`. | `;` | + | `--prefix` | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. This prefix will be appended to the front of imported keys. | `TestApp:` | + | `--label` | Optional. Enter a label that will be assigned to your imported key-values. | `prod` | + | `--content-type` | Optional. Enter `appconfig/kvset` or `application/json` to state that the imported content consists of a Key Vault reference or a JSON file. 
| `application/json` | -```azurecli -az appconfig kv import --name <your-app-config-store-name> --source appconfig --src-label test --label test2 --src-name <another-app-config-store-name> -``` + Example: import all keys and feature flags from a JSON file, apply the label "prod", and append the prefix "TestApp:". Add the "application/json" content type. -Import all keys and apply null label from an App Service application. + ```azurecli + az appconfig kv import --name my-app-config-store --source file --path D:/abc.json --format json --separator ';' --prefix TestApp: --label prod --content-type application/json + ``` -For `--appservice-account` use the ARM ID for AppService or use the name of the AppService, assuming it's in the same subscription and resource group as the App Configuration. +1. The command line displays a list of the coming changes. Confirm the import by selecting `y`. -```python -az appconfig kv import --name <your-app-config-store-name> --source appservice --appservice-account <your-app-service> -``` + :::image type="content" source="./media/import-export/continue-import-file-prompt.png" alt-text="Screenshot of the CLI. Import from file confirmation prompt."::: -For more details and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true). +You've imported key-values from a JSON file, have assigned them the label "prod" and the prefix "TestApp:". The separator ";" is used and all keys that you have imported have content type set as "JSON". ++For more optional parameters and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true). 
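The mapping the import performs on a hierarchical JSON file can be sketched as follows. This is an illustrative Python sketch of separator-based flattening under stated assumptions, not the CLI's actual implementation, and the sample keys are hypothetical:

```python
def flatten(obj: dict, separator: str = ":", prefix: str = "") -> dict:
    """Recursively flattens a nested JSON object into key-value pairs,
    joining nesting levels with the chosen separator, the way a
    hierarchical configuration file maps to App Configuration keys."""
    items = {}
    for key, value in obj.items():
        full_key = prefix + key
        if isinstance(value, dict):
            # Descend one level, extending the key with the separator.
            items.update(flatten(value, separator, full_key + separator))
        else:
            items[full_key] = value
    return items


# A hypothetical configuration file with nested sections.
config = {"TestApp": {"Settings": {"BackgroundColor": "#FFF"}}, "Version": "1"}
print(flatten(config))
# → {"TestApp:Settings:BackgroundColor": "#FFF", "Version": "1"}
```

With `--separator ':'`, the nested `TestApp.Settings.BackgroundColor` value becomes the single key `TestApp:Settings:BackgroundColor`; a `--prefix` would then be prepended to each resulting key.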
++++### Import data from an App Configuration store ++You can import values from one App Configuration store into another, or back into the same App Configuration store to duplicate its values and apply different parameters, such as a new label or content type. ++Follow the steps below to import key-values and feature flags from an Azure App Configuration store. ++#### [Portal](#tab/azure-portal) ++From the Azure portal, follow these steps: ++1. Browse to your App Configuration store, and select **Import/export** from the **Operations** menu. ++ :::image type="content" source="./media/import-export/import-app-configuration.png" alt-text="Screenshot of the Azure portal, importing from an App Configuration store."::: ++1. On the **Import** tab, select **App Configuration** under **Source service**. ++1. Select **Select resource**, fill out the form with the following parameters, and select **Apply**: ++ | Parameter | Description | Example | + |-|-|--| + | Subscription | Your current subscription is selected by default. | *my-subscription* | + | Resource group | Select a resource group that contains the App Configuration store with configuration to import. Your current resource group is selected by default. | *my-resource-group* | + | Resource | Select the App Configuration store that contains the configuration you want to import. | *my-other-app-config-store* | ++ > [!NOTE] + > The message "Access keys fetched successfully" indicates that the connection with the App Configuration store was successful. ++1. Fill out the next part of the form: ++ | Parameter | Description | Example | + |--|-|| + | From label | Select at least one label to import values with the corresponding labels. **Select all** will import keys with any label, and **(No label)** will restrict the import to keys with no label. | *prod* | + | At a specific time | Optional. 
Fill out to import key-values from a specific point in time. This is the point in time of the resource you selected in the previous step. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* | + | Override default key-value labels | Optional. By default, imported items use their current label. Check the box and enter a label to override these defaults with a custom label. | *new* | + | Override default key-value content type | Optional. By default, imported items use their current content type. Check the box and select **Key Vault Reference** or **JSON (application/json)** under **Content type** to state that the imported content consists of a Key Vault reference or a JSON file. Content type can only be overridden for imported key-values. Default content type for feature flags is "application/vnd.microsoft.appconfig.ff+json;charset=utf-8" and isn't updated by this parameter. | *JSON (application/json)* | ++1. Select **Apply** to proceed with the import. ++You've imported keys and feature flags with the "prod" label from an App Configuration store on July 28, 2022 at 12 AM, and have assigned them the label "new". All keys that you have imported have content type set as "JSON". ++#### [Azure CLI](#tab/azure-cli) ++From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). ++1. Enter the import command `az appconfig kv import` and enter the following parameters: ++ | Parameter | Description | Example | + |--|-|-| + | `--name` | Enter the name of the App Configuration store you want to import data into. | `my-app-config-store` | + | `--source` | Enter `appconfig` to indicate that you're importing data from an App Configuration store. | `appconfig` | + | `--src-name` | Enter the name of the App Configuration store you want to import data from. | `my-source-app-config` | + | `--src-label` | Restrict your import to keys with a specific label. 
If you don't use this parameter, only keys with a null label will be imported. Supports the asterisk (`*`) as a wildcard: enter `*` for all labels, or `abc*` for all labels that start with abc. | `prod` | ++1. Optionally add the following parameters: ++ | Parameter | Description | Example | + ||--|| + | `--label` | Optional. Enter a label that will be assigned to your imported key-values. | `new` | + | `--content-type` | Optional. Enter `appconfig/kvset` or `application/json` to state that the imported content consists of a Key Vault reference or a JSON file. Content type can only be overridden for imported key-values. Default content type for feature flags is "application/vnd.microsoft.appconfig.ff+json;charset=utf-8" and isn't updated by this parameter. | `application/json` | ++ Example: import key-values and feature flags with the label "prod" from another App Configuration store, and assign them the label "new". Add the "application/json" content type. ++ ```azurecli + az appconfig kv import --name my-app-config-store --source appconfig --src-name my-source-app-config --src-label prod --label new --content-type application/json + ``` ++1. The command line displays a list of the coming changes. Confirm the import by selecting `y`. ++ :::image type="content" source="./media/import-export/continue-import-app-configuration-prompt.png" alt-text="Screenshot of the CLI. Import from App Configuration confirmation prompt."::: ++You've imported keys with the label "prod" from an App Configuration store and have assigned them the label "new". All keys that you have imported have content type set as "JSON". ++For more optional parameters and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true). ++++### Import data from Azure App Service ++Follow the steps below to import key-values from Azure App Service. ++> [!NOTE] +> App Service doesn't currently support feature flags. 
All feature flags imported to App Service are converted to key-values automatically. Your App Service resources can only contain key-values. ++#### [Portal](#tab/azure-portal) ++From the Azure portal: ++1. Browse to your App Configuration store, and select **Import/export** from the **Operations** menu. ++ :::image type="content" source="./media/import-export/import-app-service.png" alt-text="Screenshot of the Azure portal, importing from App Service."::: ++1. On the **Import** tab, select **App Services** under **Source service**. ++1. Select **Select resource**, fill out the form with the following parameters, and select **Apply**: ++ | Parameter | Description | Example | + |-|-|-| + | Subscription | Your current subscription is selected by default. | *my-subscription* | + | Resource group | Select a resource group that contains the App Service with configuration to import. | *my-resource-group* | + | Resource | Select the App Service that contains the configuration you want to import. | *my-app-service* | ++ > [!NOTE] + > A message is displayed, indicating the number of keys that were successfully fetched from the source App Service resource. ++1. Fill out the next part of the form: ++ | Parameter | Description | Example | + |--||| + | Prefix | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. This prefix will be appended to the front of imported keys. | *TestApp:* | + | Label | Optional. Select an existing label or enter a new label that will be assigned to your imported key-values. | *prod* | + | Content type | Optional. Indicate if the file you're importing is a Key Vault reference or a JSON file. For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md). | *JSON (application/json)* | ++1. Select **Apply** to proceed with the import. 
++You've imported all application settings from an App Service as key-values and assigned them the label "prod" and the prefix "TestApp". All keys that you have imported have content type set as "JSON". ++#### [Azure CLI](#tab/azure-cli) ++From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). ++1. Enter the import command `az appconfig kv import` and add the following parameters: ++ | Parameter | Description | Example | + ||--|| + | `--name` | Enter the name of the App Configuration store you want to import data to. | `my-app-config-store` | + | `--source` | Enter `appservice` to indicate that you're importing app configuration data from Azure App Service. | `appservice` | + | `--appservice-account` | Enter the App Service's ARM ID or use the name of the App Service, if it's in the same subscription and resource group as the App Configuration. | `/subscriptions/123/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-app-service` or `my-app-service` | ++1. Optionally also add the following parameters: ++ | Parameter | Description | Example | + |||| + | `--prefix` | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. This prefix will be appended to the front of imported keys. | `TestApp:` | + | `--label` | Optional. Enter a label that will be assigned to your imported key-values. If you don't specify a label, the null label will be assigned to your key-values. | `prod` | + | `--content-type` | Optional. Enter appconfig/kvset or application/json to state that the imported content consists of a Key Vault reference or a JSON file. | `application/json` | ++ To get the value for `--appservice-account`, use the command `az webapp show --resource-group <resource-group> --name <resource-name>`. 
++ Example: import all application settings from your App Service as key-values with the label "prod", to your App Configuration store, and add a "TestApp:" prefix. ++ ```azurecli + az appconfig kv import --name my-app-config-store --source appservice --appservice-account /subscriptions/123/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-app-service --label prod --prefix TestApp: + ``` ++1. The command line displays a list of the coming changes. Confirm the import by selecting `y`. ++ :::image type="content" source="./media/import-export/continue-import-app-service-prompt.png" alt-text="Screenshot of the CLI. Import from App Service confirmation prompt."::: ++You've imported all application settings from your App Service as key-values, have assigned them the label "prod", and have added a "TestApp:" prefix. All keys that you have imported have content type set as "JSON". ++For more optional parameters and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true). ## Export data -Export writes configuration data stored in App Configuration to another destination. Use the export function, for example, to save data from an App Configuration store to a file that can be embedded in your application code during deployment. You can export data from an App Configuration store, an App Service resource or a configuration file in JSON, YAML or .properties. +Export writes configuration data stored in App Configuration to another destination. Use the export function, for example, to save data from an App Configuration store to a file that can be embedded in your application code during deployment. 
++This guide shows how to export App Configuration data: ++- [to a configuration file in JSON, YAML or properties](#export-data-to-a-configuration-file) +- [to an App Configuration store](#export-data-to-an-app-configuration-store) +- [to an Azure App Service resource](#export-data-to-azure-app-service) ++### Export data to a configuration file ++Follow the steps below to export configuration data from an App Configuration store to a JSON, YAML or properties file. ++> [!NOTE] +> Exporting feature flags from an App Configuration store to a configuration file is currently only supported in the CLI. ### [Portal](#tab/azure-portal) From the [Azure portal](https://portal.azure.com), follow these steps: 1. Browse to your App Configuration store, and select **Import/export**. -1. On the **Export** tab, select **Target service** > **Configuration file**. + :::image type="content" source="./media/import-export/export-file.png" alt-text="Screenshot of the Azure portal, exporting a file"::: ++1. On the **Export** tab, select **Configuration file** under **Target service**. 1. Fill out the form with the following parameters: - | Parameter | Description | Example | - ||--|-| - | Prefix | Optional. A key prefix is the beginning part of a key. Enter a prefix to restrict your export to key-values with the specified prefix. | TestApp:Settings:Backgroundcolor | - | From label | Optional. Select an existing label to restrict your export to key-values with a specific label. If you don't select a label, only key-values without a label will be exported. See note below. | prod | - | At a specific time | Optional. Fill out to export key-values from a specific point in time. | 01/28/2021 12:00:00 AM | - | File type | Select the type of file you're importing between YAML, properties or JSON. | JSON | - | Separator | The separator is the character that will be used in the configuration file to separate the exported key-values from one another.
Select one of the following options: `.`, `,`,`:`, `;`, `/`, `-`. | ; | + | Parameter | Description | Example | + |--|--|--| + | Prefix | Optional. This prefix will be trimmed from the keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. | *TestApp:* | + | From label | Select an existing label to restrict your export to key-values with a specific label. If you don't select a label, by default only key-values with the "No Label" label will be exported. See note below. | *prod* | + | At a specific time | Optional. Fill out to export key-values from a specific point in time. This is the point in time of the resource you selected in the previous step. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* | + | File type | Select the type of file you're exporting: YAML, properties, or JSON. | *JSON* | + | Separator | The separator is the delimiter for flattening the key-values to JSON/YAML. It supports the configuration's hierarchical structure and doesn't apply to property files and feature flags. Select one of the following options: *.*, *,*, *:*, *;*, */*, *-*, *_*, *—*, or *(No separator)*. | *;* | > [!IMPORTANT]- > If you don't select a label, only keys without labels will be exported. To export a key-value with a label, you must select its label. Note that you can only select one label per export, so to export keys with multiple labels, you may need to export multiple times, once per label you select. + > If you don't select a *From label*, only keys without labels will be exported. To export a key-value with a label, you must select its label. Note that you can only select one label per export in the portal. To export key-values with all their labels, use the CLI. 1. Select **Export** to finish the export.
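The separator determines how exported key names are expanded into hierarchy in the file. As an illustration (the key names and values here are hypothetical), exporting the key-values `TestApp:Settings:BackgroundColor = green` and `TestApp:Settings:FontSize = 24` with the separator `:` would produce JSON nested at each separator boundary, along the lines of:

```json
{
  "TestApp": {
    "Settings": {
      "BackgroundColor": "green",
      "FontSize": "24"
    }
  }
}
```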
- :::image type="content" source="./media/export-file-complete.png" alt-text="Screenshot of the Azure portal, exporting a file"::: +You've exported key-values that have the "prod" label to a configuration file, at their state from 07/28/2022 12:00:00 AM, and have trimmed the prefix "TestApp". Values are separated by ";" in the file. ### [Azure CLI](#tab/azure-cli) -Use the Azure CLI as explained below to export configurations from App Configuration to another place. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). Specify the destination of the data: `appconfig`, `appservice` or `file`. Specify a label for the data you want to export with `--label` or export data with no label by not entering a label. +From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). ++1. Enter the export command `az appconfig kv export` and add the following parameters: ++ | Parameter | Description | Example | + |--|-|-| + | `--name` | Enter the name of the App Configuration store that contains the key-values you want to export. | `my-app-config-store` | + | `--destination` | Enter `file` to indicate that you're exporting data to a file. | `file` | + | `--path` | Enter the path where you want to save the file. | `C:/Users/john/Downloads/data.json` | + | `--format` | Enter `yaml`, `properties` or `json` to indicate the format of the file you want to export. | `json` | + | `--label` | Enter a label to export keys and feature flags with this label. If you don't specify a label, by default, you will only export keys and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels into account. | `prod` | ++ > [!IMPORTANT] + > If you don't select a label, only keys without labels will be exported.
To export a key-value with a label, you must select its label. ++1. Optionally also add the following parameters: ++ | Parameter | Description | Example | + |||| + | `--separator` | Optional. The separator is the delimiter for flattening the key-values to JSON/YAML. It's required for exporting hierarchical structure and will be ignored for property files and feature flags. Select one of the following options: `.`, `,`, `:`, `;`, `/`, `-`, `_`, `—`. | `;` | + | `--prefix` | Optional. Prefix to be trimmed from keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. Prefix will be ignored for feature flags. | `TestApp:` | ++ Example: export all keys and feature flags with label "prod" to a JSON file. ++ ```azurecli + az appconfig kv export --name my-app-config-store --label prod --destination file --path D:/abc.json --format json --separator ; --prefix TestApp: + ``` ++1. The command line displays a list of key-values being exported to the file. Confirm the export by selecting `y`. ++ :::image type="content" source="./media/import-export/continue-export-file-prompt.png" alt-text="Screenshot of the CLI. Export to a file confirmation prompt."::: ++You've exported key-values and feature flags that have the "prod" label to a configuration file, and have trimmed the prefix "TestApp". Values are separated by ";" in the file. ++For more optional parameters and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export&preserve-view=true). ++++### Export data to an App Configuration store ++Follow the steps below to export key-values and feature flags to an Azure App Configuration store. -> [!IMPORTANT] -> If the keys you want to export have labels, do select the corresponding labels. If you don't select a label, only keys without labels will be exported.
+You can export values from one App Configuration store to another App Configuration store, or you can export values from one App Configuration store to the same App Configuration store in order to duplicate its values and apply different parameters, such as a new label or content type. -Export all keys and feature flags with label test to a json file. +#### [Portal](#tab/azure-portal) ++From the Azure portal, follow these steps: ++1. Browse to the App Configuration store that contains the data you want to export, and select **Import/export** from the **Operations** menu. ++ :::image type="content" source="./media/import-export/export-app-configuration.png" alt-text="Screenshot of the Azure portal, exporting from an App Configuration store."::: ++1. On the **Export** tab, select **App Configuration** under **Target service**. ++1. Select **Select resource**, fill out the form with the following parameters, and select **Apply**: ++ | Parameter | Description | Example | + |-|-|--| + | Subscription | Your current subscription is selected by default. | *my-subscription* | + | Resource group | Select a resource group that contains the App Configuration store you want to export data to. | *my-resource-group* | + | Resource | Select the App Configuration store you want to export data to. | *my-app-config-store* | ++1. The page now displays the selected **Target service** and resource ID. The **Select resource** action lets you switch to another target App Configuration store. ++ > [!NOTE] + > A message is displayed on screen, indicating that the keys were fetched successfully. ++1. Fill out the next part of the form: ++ | Parameter | Description | Example | + |--|-|| + | From label | Select at least one label to export values with the corresponding labels. **Select all** will export keys with any label, and **(No label)** will restrict the export to keys with no label. | *prod* | + | At a specific time | Optional.
Fill out to export key-values from a specific point in time. This is the point in time of the resource you selected in the previous step. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* | + | Override default key-value labels | Optional. By default, exported items keep their current label. Check the box and enter a label to override these defaults with a custom label. | *new* | ++1. Select **Apply** to proceed with the export. ++You've exported key-values and feature flags that have the label "prod" from an App Configuration store, at their state from 07/28/2022 12:00:00 AM, and have assigned them the label "new". ++#### [Azure CLI](#tab/azure-cli) ++From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). ++1. Enter the export command `az appconfig kv export` and enter the following parameters: ++ | Parameter | Description | Example | + ||--|--| + | `--name` | Enter the name of the App Configuration store that contains the key-values you want to export. | `my-app-config-store` | + | `--destination` | Enter `appconfig` to indicate that you're exporting data to an App Configuration store. | `appconfig` | + | `--dest-name` | Enter the name of the App Configuration store you want to export data to. | `my-other-app-config-store` | + | `--label` | Enter a label to export keys and feature flags with this label. If you don't specify a label, by default, you will only export keys and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels into account. | `prod` | ++ > [!IMPORTANT] + > If the keys you want to export have labels, you must use the `--label` parameter and enter the corresponding labels. If you don't select a label, only keys without labels will be exported.
Use a comma (`,`) to select several labels or use `*` to include all labels, including the null label (no label). -```python -az appconfig kv export --name <your-app-config-store-name> --label test --destination file --path D:/abc.json --format json -``` +1. Optionally also add the following parameter: -Export all keys with null label excluding feature flags to a json file. + | Parameter | Description | Example | + ||--|--| + | `--dest-label` | Optional. Enter a destination label to assign to the exported key-values. | `new` | -```python -az appconfig kv export --name <your-app-config-store-name> --destination file --path D:/abc.json --format json --skip-features -``` + Example: export keys and feature flags with the label "prod" to another App Configuration store and add the destination label "new". -Export all keys and feature flags with all labels to another App Configuration. + ```azurecli + az appconfig kv export --name my-app-config-store --destination appconfig --dest-name my-other-app-config-store --dest-label new --label prod + ``` -```python -az appconfig kv export --name <your-app-config-store-name> --destination appconfig --dest-name <another-app-config-store-name> --key * --label * --preserve-labels -``` +1. The command line displays a list of key-values being exported. Confirm the export by selecting `y`. -Export all keys and feature flags with all labels to another App Configuration and overwrite destination labels. + :::image type="content" source="./media/import-export/continue-export-app-configuration-prompt.png" alt-text="Screenshot of the CLI.
Export to App Configuration confirmation prompt."::: -```python -az appconfig kv export --name <your-app-config-store-name> --destination appconfig --dest-name <another-app-config-store-name> --key * --label * --dest-label ExportedKeys -``` +You've exported key-values and feature flags that have the label "prod" from an App Configuration store and have assigned them the label "new". -For more details and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export&preserve-view=true). +For more optional parameters and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export&preserve-view=true). +### Export data to Azure App Service ++Follow the steps below to export key-values to Azure App Service. ++> [!NOTE] +> Exporting feature flags to App Service is currently not supported. ++#### [Portal](#tab/azure-portal) ++From the Azure portal, follow these steps: ++1. Browse to your App Configuration store, and select **Import/export** from the **Operations** menu. ++ :::image type="content" source="./media/import-export/export-app-service.png" alt-text="Screenshot of the Azure portal, exporting from App Service."::: ++1. On the **Export** tab, select **App Services** under **Target service**. ++1. Select **Select resource**, fill out the form with the following parameters, and select **Apply**: ++ | Parameter | Description | Example | + |-|-|--| + | Subscription | Your current subscription is selected by default. | *my-subscription* | + | Resource group | Select a resource group that contains the App Service you want to export data to. | *my-resource-group* | + | Resource | Select the App Service you want to export data to. | *my-app-service* | ++1. The page now displays the selected **Target service** and resource ID. The **Select resource** action lets you switch to another target App Service. ++1.
Optionally fill out the next part of the form: ++ | Parameter | Description | Example | + |--|-|| + | Prefix | Optional. This prefix will be trimmed from the exported keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. Prefix will be ignored for feature flags. | *TestApp:* | + | From label | Select an existing label to restrict your export to key-values with a specific label. If you don't select a label, only key-values with the "No label" label will be exported. See note below. | *prod* | + | At a specific time | Optional. Fill out to export key-values from a specific point in time. This is the point in time of the resource you selected in the previous step. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* | + | Content type | Optional. Check the box **Override default key-value content types** and select **Key Vault Reference** or **JSON** under **Content type** to state that the exported content consists of a Key Vault reference or a JSON file. | *JSON (application/json)* | ++ > [!IMPORTANT] + > If the keys you want to export have labels, select the corresponding label under **From label**. If you don't select a label, only keys without labels will be exported. ++1. Select **Apply** to proceed with the export. ++You've exported key-values that have the "prod" label to an App Service resource, at their state from 07/28/2022 12:00:00 AM, and have trimmed the prefix "TestApp". The exported keys have the JSON content type. ++#### [Azure CLI](#tab/azure-cli) ++From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). ++1.
Enter the export command `az appconfig kv export` and enter the following parameters: ++ | Parameter | Description | Example | + ||--|| + | `--name` | Enter the name of the App Configuration store that contains the key-values you want to export. | `my-app-config-store` | + | `--destination` | Enter `appservice` to indicate that you're exporting data to App Service. | `appservice` | + | `--appservice-account` | Enter the App Service's ARM ID or use the name of the App Service, if it's in the same subscription and resource group as the App Configuration store. | `/subscriptions/123/resourceGroups/my-as-resource-group/providers/Microsoft.Web/sites/my-app-service` or `my-app-service` | + | `--label` | Enter a label to export keys and feature flags with this label. If you don't specify a label, by default, you will only export keys and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels into account. | `prod` | ++ > [!IMPORTANT] + > If the keys you want to export have labels, you must use the `--label` parameter and enter the corresponding labels. If you don't select a label, only keys without labels will be exported. Use a comma (`,`) to select several labels or use `*` to include all labels, including the null label (no label). ++ To get the value for `--appservice-account`, use the command `az webapp show --resource-group <resource-group> --name <resource-name>`. ++1. Optionally also add a prefix: ++ | Parameter | Description | Example | + ||-|--| + | `--prefix` | Optional. Prefix to be trimmed from keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. | `TestApp:` | ++ Example: export all key-values with the label "prod" to an App Service application and trim the prefix "TestApp".
++ ```azurecli + az appconfig kv export --name my-app-config-store --destination appservice --appservice-account /subscriptions/123/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-app-service/config/web --label prod --prefix TestApp: + ``` ++1. The command line displays a list of key-values being exported. Confirm the export by selecting `y`. ++ :::image type="content" source="./media/import-export/continue-export-app-service-prompt.png" alt-text="Screenshot of the CLI. Export to App Service confirmation prompt."::: ++You've exported all keys with the label "prod" to an Azure App Service resource and have trimmed the prefix "TestApp:". ++For more optional parameters and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export&preserve-view=true). ++++## Error messages ++You may encounter the following error messages when importing or exporting App Configuration keys: ++- **Selected file must be between 1 and 2097152 bytes.**: your file doesn't meet the size requirements. Select a file between 1 byte and 2 MB (2,097,152 bytes). +- **Public access is disabled for your store or you are accessing from a private endpoint that is not in the store's private endpoint configurations**. To import keys from an App Configuration store, you need to have access to that store. If necessary, enable public access for the source store or access it from an approved private endpoint. If you just enabled public access, wait up to 5 minutes for the cache to refresh. + ## Next steps > [!div class="nextstepaction"]-> [Create an ASP.NET Core web app](./quickstart-aspnet-core-app.md) +> [Back up App Configuration stores automatically](./howto-backup-config-store.md) |
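The file-size error above reflects a hard limit: import files must be between 1 byte and 2,097,152 bytes. As a rough pre-flight check (a sketch, not part of the Azure CLI; the sample file below is hypothetical), you can verify a file's size in the shell before importing it:

```shell
#!/bin/sh
# Hypothetical sample file -- substitute the file you plan to import.
file="./settings.json"
printf '{"TestApp:Settings:BackgroundColor":"green"}' > "$file"

# App Configuration rejects import files outside the 1..2097152-byte range.
size=$(wc -c < "$file" | tr -d ' ')
if [ "$size" -ge 1 ] && [ "$size" -le 2097152 ]; then
  result="ok to import ($size bytes)"
else
  result="file must be between 1 and 2097152 bytes"
fi
echo "$result"
```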
azure-arc | Cluster Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md | A conceptual overview of this feature is available in [Cluster connect - Azure A ``` ```console- TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g') + TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed 's/$/\\\n/g') ``` 1. Get the token to output to console |
azure-arc | Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md | This topic describes the basic requirements for installing the Connected Machine Azure Arc-enabled servers support the installation of the Connected Machine agent on physical servers and virtual machines hosted outside of Azure. This includes support for virtual machines running on platforms like: -* VMware +* VMware (including Azure VMware Solution) * Azure Stack HCI * Other cloud environments |
azure-cache-for-redis | Cache Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md | Azure Cache for Redis is available in these tiers: | Basic | An OSS Redis cache running on a single VM. This tier has no service-level agreement (SLA) and is ideal for development/test and non-critical workloads. | | Standard | An OSS Redis cache running on two VMs in a replicated configuration. | | Premium | High-performance OSS Redis caches. This tier offers higher throughput, lower latency, better availability, and more features. Premium caches are deployed on more powerful VMs compared to the VMs for Basic or Standard caches. |-| Enterprise | High-performance caches powered by Redis Inc.'s Redis Enterprise software. This tier supports Redis modules including RediSearch, RedisBloom, and RedisTimeSeries. Also, it offers even higher availability than the Premium tier. | +| Enterprise | High-performance caches powered by Redis Inc.'s Redis Enterprise software. This tier supports Redis modules including RediSearch, RedisBloom, RedisJSON, and RedisTimeSeries. Also, it offers even higher availability than the Premium tier. | | Enterprise Flash | Cost-effective large caches powered by Redis Inc.'s Redis Enterprise software. This tier extends Redis data storage to non-volatile memory, which is cheaper than DRAM, on a VM. It reduces the overall per-GB memory cost. 
| ### Feature comparison The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/ | [Data persistence](cache-how-to-premium-persistence.md) |-|-|✔|Preview|Preview| | [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|✔|✔|✔| | [Geo-replication](cache-how-to-geo-replication.md) |-|-|✔|✔|✔|-| [Redis Modules](#choosing-the-right-tier) |-|-|-|✔|-| +| [Redis Modules](cache-redis-modules.md) |-|-|-|✔|Preview| | [Import/Export](cache-how-to-import-export-data.md) |-|-|✔|✔|✔| | [Reboot](cache-administration.md#reboot) |✔|✔|✔|-|-| | [Scheduled updates](cache-administration.md#schedule-updates) |✔|✔|✔|-|-| +> [!NOTE] +> The Enterprise Flash tier currently supports only the RedisJSON and RediSearch modules in preview. + ### Choosing the right tier Consider the following options when choosing an Azure Cache for Redis tier: - **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss. - **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Data persistence can be enabled through the Azure portal and CLI.
For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md). - **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).-- **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/) and [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/). These modules add new data types and functionality to Redis.+- **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/), [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/), and [RedisJSON](https://docs.redis.com/latest/modules/redisjson/) (preview). These modules add new data types and functionality to Redis. You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation). |
azure-fluid-relay | Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/resources/support.md | -If you have an issue or question involving Azure Fluid Relay, the following options are available. +If you have an issue or question involving Azure Fluid Relay, the following options are available: -## Check out frequently asked questions --You can see if your question is already answered on our Frequently Asked Questions [page](faq.md). +> [!IMPORTANT] +> For ongoing service issues that are time sensitive, creating an Azure support request is the preferred option. ## Create an Azure support request -With Azure, there are many [support options and plans](https://azure.microsoft.com/support/plans/) available, which you can explore and review. You can create a support ticket in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). +If you're running into an ongoing service issue that impacts your end users, creating a support request is the best way to obtain live-site support. Depending on the degree of impact, set the appropriate severity level for the support case to get the technical support you need. With Azure, there are many [support options and plans](https://azure.microsoft.com/support/plans/) available, which you can explore and review. You can create a support ticket in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). ++## Check out frequently asked questions ++You can see if your question is already answered on our Frequently Asked Questions [page](faq.md). ## Post a question to Microsoft Q&A |
azure-fluid-relay | Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-glossary-cloud-terminology.md | |
azure-government | Documentation Government Overview Itar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-itar.md | The EAR is applicable to dual-use items that have both commercial and military a Both Azure and Azure Government can help you meet your EAR compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation. -Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** +Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md).
The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys) You're responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you're responsible for designing your applications to apply end-to-end data encryption that meets EAR requirements. Microsoft doesn't inspect, approve, or monitor your applications deployed on Azure or Azure Government. The US Department of State has export control authority over defense articles, s DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the US Department of Commerce adopted in 2016 for the EAR.
Specifically, the revised ITAR rules state that activities that don't constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country proscribed in § 126.1](https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M/part-126?toc=1) or the Russian Federation, and 5) not sent from a country proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet isn't deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party. -There's no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation. Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM.
**Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** +There's no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation. Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys) You're responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you're responsible for designing your applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft doesn't inspect, approve, or monitor your applications deployed on Azure or Azure Government. 
The [Office of Foreign Assets Control](https://home.treasury.gov/policy-issues/o The OFAC defines prohibited transactions as trade or financial transactions and other dealings in which US persons may not engage unless authorized by OFAC or expressly exempt by statute. For web-based interactions, see [FAQ No. 73](https://home.treasury.gov/policy-issues/financial-sanctions/faqs/73) for general guidance released by OFAC, which specifies, for example, that “Firms that facilitate or engage in e-commerce should do their best to know their customers directly.” -As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), “Microsoft doesn't control or limit the regions from which customer or customer's end users may access or move customer data.” For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries. For example, a sanctions target isn't allowed to provision Azure services. OFAC hasn't issued guidance, like the guidance provided by BIS for the EAR, that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications, including web sites, deployed on Azure. Microsoft doesn't block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based on IP table ranges, they also acknowledge that this approach doesn't fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms should know their customers directly. Microsoft isn't responsible for and doesn't have the means to know directly the end users that interact with your applications deployed on Azure. 
+As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), “Microsoft doesn't control or limit the regions from which customer or customer's end users may access or move customer data.” For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries. For example, a sanctions target isn't allowed to provision Azure services. OFAC hasn't issued guidance, like the guidance provided by BIS for the EAR, that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications, including web sites, deployed on Azure. Microsoft doesn't block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based on IP table ranges, they also acknowledge that this approach doesn't fully address an internet firm's compliance risks. Therefore, OFAC recommends that e-commerce firms should know their customers directly. Microsoft isn't responsible for and doesn't have the means to know directly the end users that interact with your applications deployed on Azure. OFAC sanctions are in place to prevent “conducting business with a sanctions target”, that is, preventing transactions involving trade, payments, financial instruments, and so on. OFAC sanctions aren't intended to prevent a resident of a proscribed country from viewing a public web site. You should assess carefully how your use of Azure may implicate US export controls, and determine whether any of the data you want to store or process in the cloud may be subject to export controls. 
Microsoft provides you with contractual commitments, operational processes, and technical features to help you meet your export control obligations when using Azure. The following Azure features are available to help you manage potential export control risks: - **Ability to control data location** – You have visibility as to where your [data is stored](https://azure.microsoft.com/global-infrastructure/data-residency/), and robust tools to restrict data storage to a single geography, region, or country. For example, you may therefore ensure that data is stored in the United States or your country of choice and minimize transfer of controlled technology/technical data outside the target country. Your data isn't *intentionally stored* in a non-conforming location, consistent with the EAR and ITAR rules.-- **End-to-end encryption** – Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party. Azure relies on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provides you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.+- **End-to-end encryption** – Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party. 
Azure relies on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provides you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents [don't see or extract your cryptographic keys](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys). - **Control over access to data** – You can know and control who can access your data and on what terms. Microsoft technical support personnel don't need and don't have default access to your data. For those rare instances where resolving your support requests requires elevated access to your data, [Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) puts you in charge of approving or denying data access requests.-- **Tools and protocols to prevent unauthorized deemed export/re-export** – Apart from the EAR and ITAR *end-to-end encryption* safe harbor for physical storage locations, the use of encryption also helps protect against a potential deemed export, or deemed re-export, because even if a non-US person has access to the encrypted data, nothing is revealed to a non-US person who can't read or understand the data while it's encrypted and thus there is no release of any controlled data. However, ITAR requires some authorization before granting foreign persons access to information that would enable them to decrypt ITAR technical data. 
Azure offers a wide range of encryption capabilities and solutions, flexibility to choose among encryption options, and robust tools for managing encryption.+- **Tools and protocols to prevent unauthorized deemed export/re-export** – Apart from the EAR and ITAR *end-to-end encryption* safe harbor for physical storage locations, the use of encryption also helps protect against a potential deemed export, or deemed re-export, because even if a non-US person has access to the encrypted data, nothing is revealed to a non-US person who can't read or understand the data while it's encrypted and thus there's no release of any controlled data. However, ITAR requires some authorization before granting foreign persons access to information that would enable them to decrypt ITAR technical data. Azure offers a wide range of encryption capabilities and solutions, flexibility to choose among encryption options, and robust tools for managing encryption. ## Location of customer data |
azure-government | Documentation Government Overview Jps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-jps.md | While the current CMVP FIPS 140 implementation guidance precludes a FIPS 140 val Proper protection and management of encryption keys is essential for data security. [Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets. Key Vault enables you to store your encryption keys in hardware security modules (HSMs) that are FIPS 140 validated. For more information, see [Data encryption key management](./azure-secure-isolation-guidance.md#data-encryption-key-management). -With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. Keys generated inside the Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your cryptographic keys.** +With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. Keys generated inside the Key Vault HSMs aren't exportable – there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys) Therefore, if you use CMK stored in Azure Key Vault HSMs, you effectively maintain sole ownership of encryption keys. 
### Data encryption in transit |
azure-monitor | Alerts Create New Alert Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md | description: Learn how to create a new alert rule. Previously updated : 08/03/2022 Last updated : 08/23/2022 # Create a new alert rule And then defining these elements for the resulting alert actions using: 1. In the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, **resource location**, or do a search. - You can see the **Available signal types** for your selected resource(s) at the bottom right of the pane. The available signal types change based on the selected resource. + The **Available signal types** for your selected resource(s) are at the bottom right of the pane. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot showing the select resource pane for creating new alert rule."::: 1. Select **Include all future resources** to include any future resources added to the selected scope. 1. Select **Done**. 1. Select **Next: Condition>** at the bottom of the page.-1. In the **Select a signal** pane, the **Signal type**, **Monitor service**, and **Signal name** fields are pre-populated with the available values for your selected scope. You can narrow the signal list using these fields. The **Signal type** determines which [type of alert](alerts-overview.md#types-of-alerts) rule you're creating. -1. Select the **Signal name**, and follow the steps below depending on the type of alert you're creating. +1. In the **Select a signal** pane, filter the list of signals using the **Signal type** and **Monitor service**. + - **Signal Type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating. + - **Monitor service**: The service sending the signal. This list is pre-populated based on the type of alert rule you selected. 
++ This table describes the services available for each type of alert rule: ++ |Signal type |Monitor service |Description | + |||| + |Metrics|Platform |For metric signals, the monitor service is the metric namespace. 'Platform' means the metrics are provided by the resource provider, namely 'Azure'.| + | |Azure.ApplicationInsights|Customer-reported metrics, sent by the Application Insights SDK. | + | |Azure.VM.Windows.GuestMetrics |VM guest metrics, collected by an extension running on the VM. Can include built-in operating system perf counters, and custom perf counters. | + | |\<your custom namespace\>|A custom metric namespace, containing custom metrics sent with the Azure Monitor Metrics API. | + |Log |Log Analytics|The service that provides the 'Custom log search' and 'Log (saved query)' signals. | + |Activity log|Activity Log – Administrative|The service that provides the 'Administrative' activity log events. | + | |Activity Log – Policy|The service that provides the 'Policy' activity log events. | + | |Activity Log – Autoscale|The service that provides the 'Autoscale' activity log events. | + | |Activity Log – Security|The service that provides the 'Security' activity log events. | + |Resource health|Resource Health|The service that provides the resource-level health status. | + |Service health|Service health|The service that provides the subscription-level health status. | ++ +1. Select the **Signal name**, and follow the steps in the tab below that corresponds to the type of alert you're creating. ### [Metric alert](#tab/metric) 1. In the **Configure signal logic** pane, you can preview the results of the selected metric signal. Select values for the following fields. And then defining these elements for the resulting alert actions using: From this point on, you can select the **Review + create** button at any time. 1. In the **Actions** tab, select or create the required [action groups](./action-groups.md).+1. 
(Optional) If you want to make sure that the data processing for the action group takes place within a specific region, you can select one of these regions in which to process the action group: + - Sweden Central + - Germany West Central ++ > [!NOTE] + > We are continually adding more regions for regional data processing. :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot of the actions tab when creating a new alert rule."::: -1. In the **Details** tab, define the **Project details** by selecting the **Subscription** and **Resource group**. +1. In the **Details** tab, define the **Project details**. + - Select the **Subscription**. + - Select the **Resource group**. + - (Optional) If you want to make sure that the data processing for the alert rule takes place within a specific region, and you're creating a metric alert rule that monitors a custom metric, you can select to process the alert rule in one of these regions. + - North Europe + - West Europe + - Sweden Central + - Germany West Central + 1. Define the **Alert rule details**. ### [Metric alert](#tab/metric) The *sampleActivityLogAlert.parameters.json* file contains the values provided f ## Changes to log alert rule creation experience -If you're creating a new log alert rule, note that current alert rule wizard is a little different from the earlier experience: +If you're creating a new log alert rule, note that the current alert rule wizard is a little different from the earlier experience: - Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action: - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). 
Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue. |
azure-monitor | Alerts Troubleshoot Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md | description: Common issues with Azure Monitor metric alerts and possible solutio Previously updated : 6/23/2022 Last updated : 8/31/2022 ms.reviewer: harelbr # Troubleshooting problems in Azure Monitor metric alerts The table below lists the metrics that aren't supported by dynamic thresholds. | Microsoft.Network/expressRouteGateways | ExpressRouteGatewayPacketsPerSecond | | Microsoft.Network/expressRouteGateways | ExpressRouteGatewayNumberOfVmInVnet | | Microsoft.Network/expressRouteGateways | ExpressRouteGatewayFrequencyOfRoutesChanged |+| Microsoft.Network/virtualNetworkGateways | ExpressRouteGatewayBitsPerSecond | | Microsoft.Network/virtualNetworkGateways | ExpressRouteGatewayPacketsPerSecond | | Microsoft.Network/virtualNetworkGateways | ExpressRouteGatewayNumberOfVmInVnet | | Microsoft.Network/virtualNetworkGateways | ExpressRouteGatewayFrequencyOfRoutesChanged | |
azure-monitor | Java In Process Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md | Add `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` to your applicati - You can set an environment variable: ```console- APPLICATIONINSIGHTS_CONNECTION_STRING = <Copy connection string from Application Insights Resource Overview> + APPLICATIONINSIGHTS_CONNECTION_STRING=<Copy connection string from Application Insights Resource Overview> ``` - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.3.1.jar` with the following content: In the `applicationinsights.json` file, you can also configure these settings: For more information, see [Configuration options](./java-standalone-config.md). -## Instrumentation libraries +## Auto-Instrumentation -Java 3.x includes the following instrumentation libraries. +Java 3.x includes the following auto-instrumentation. ### Autocollected requests This section explains how to modify telemetry. ### Add spans -You can use `opentelemetry-api` to create [tracers](https://opentelemetry.io/docs/instrumentation/java/manual/#tracing) and spans. Spans populate the dependencies table in Application Insights. The string passed in for the span's name is saved to the _target_ field within the dependency. +You can use `opentelemetry-api` to create [tracers](https://opentelemetry.io/docs/instrumentation/java/manual/#tracing) and spans. +Spans populate the `requests` and `dependencies` tables in Application Insights. > [!NOTE] > This feature is only in 3.2.0 and later. You can use `opentelemetry-api` to create span events, which populate the traces You can use `opentelemetry-api` to add attributes to spans. These attributes can include adding a custom business dimension to your telemetry. You can also use attributes to set optional fields in the Application Insights schema, such as User ID or Client IP. 
-#### Add a custom dimension --Adding one or more custom dimensions populates the _customDimensions_ field in the requests, dependencies, traces, or exceptions table. +Adding one or more span attributes populates the _customDimensions_ field in the requests, dependencies, traces, or exceptions table. > [!NOTE] > This feature is only in 3.2.0 and later. The following table represents currently supported custom telemetry types that y - Custom metrics are supported through micrometer. - Custom exceptions and traces are supported through logging frameworks.-- Custom requests, dependencies, and exceptions are supported through `opentelemetry-api`.-- Any type of the custom telemetry is supported through the [Application Insights Java 2.x SDK](#send-custom-telemetry-by-using-the-2x-sdk).+- Custom requests, dependencies, metrics, and exceptions are supported through `opentelemetry-api`. +- All types of custom telemetry are supported through the [Application Insights Java 2.x SDK](#send-custom-telemetry-by-using-the-2x-sdk). | Custom telemetry type | Micrometer | Log4j, logback, JUL | 2.x SDK | opentelemetry-api | |--||||-| | Exceptions | | Yes | Yes | Yes | | Page views | | | Yes | | | Requests | | | Yes | Yes |-| Traces | | Yes | Yes | Yes | +| Traces | | Yes | Yes | | Currently, we're not planning to release an SDK with Application Insights 3.x. |
azure-monitor | Java Standalone Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md | Or you can set the cloud role instance using the Java system property `applicati ## Sampling -Sampling is helpful if you need to reduce cost. -Sampling is performed as a function on the operation ID (also known as trace ID), so that the same operation ID will always result in the same sampling decision. This ensures that you won't get parts of a distributed transaction sampled in while other parts of it are sampled out. +> [!NOTE] +> Sampling can be a great way to reduce the cost of Application Insights. Make sure to set up your sampling +> configuration appropriately for your use case. ++Sampling is request-based, meaning if a request is captured (sampled), then so are its dependencies, logs and +exceptions. ++Furthermore, sampling is trace ID based, to help ensure consistent sampling decisions across different services. ++### Rate-Limited Sampling ++Starting from 3.4.0-BETA, rate-limited sampling is available, and is now the default. ++If no sampling has been configured, the default is now rate-limited sampling configured to capture at most +(approximately) 5 requests per second. This replaces the prior default which was to capture all requests. +If you still wish to capture all requests, use [fixed-percentage sampling](#fixed-percentage-sampling) and set the +sampling percentage to 100. ++> [!NOTE] +> The rate-limited sampling is approximate, because internally it must adapt a "fixed" sampling percentage over +> time in order to emit accurate item counts on each telemetry record. Internally, the rate-limited sampling is +> tuned to adapt quickly (0.1 seconds) to new application loads, so you should not see it exceed the configured rate by +> much, or for very long. 
++Here is an example of how to set the sampling to capture at most (approximately) 1 request per second: ++```json +{ + "sampling": { + "limitPerSecond": 1.0 + } +} +``` ++Note that `limitPerSecond` can be a decimal, so you can configure it to capture less than one request per second if you +wish. ++You can also set the sampling limit using the environment variable `APPLICATIONINSIGHTS_SAMPLING_LIMIT_PER_SECOND` +(which will then take precedence over the rate limit specified in the json configuration). -For example, if you set sampling to 10%, you will only see 10% of your transactions, but each one of those 10% will have full end-to-end transaction details. ### Fixed-Percentage Sampling -Here is an example how to set the sampling to capture approximately **1/3 of all transactions** - make sure you set the sampling rate that is correct for your use case: +Here is an example of how to set the sampling to capture approximately a third of all requests: ```json { You can also set the sampling percentage using the environment variable `APPLICA
++```json +{ + "preview": { + "connectionStringOverrides": [ + { + "httpPathPrefix": "/myapp1", + "connectionString": "12345678-0000-0000-0000-0FEEDDADBEEF" + }, + { + "httpPathPrefix": "/myapp2", + "connectionString": "87654321-0000-0000-0000-0FEEDDADBEEF" + } + ] + } +} +``` + ## Instrumentation key overrides (preview) This feature is in preview, starting from 3.2.3. These are the valid `level` values that you can specify in the `applicationinsig ### LoggingLevel -Starting from version 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is aleady captured in the `SeverityLevel` field. +Starting from version 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. If needed, you can re-enable the previous behavior: To disable auto-collection of Micrometer metrics (including Spring Boot Actuator } ``` +## JDBC query masking ++Literal values in JDBC queries are masked by default in order to avoid accidentally capturing sensitive data. ++Starting from 3.4.0-BETA, this behavior can be disabled if desired, e.g. ++```json +{ + "instrumentation": { + "jdbc": { + "masking": { + "enabled": false + } + } + } +} +``` + ## HTTP headers Starting from version 3.3.0, you can capture request and response headers on your server (request) telemetry: |
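The `APPLICATIONINSIGHTS_SAMPLING_LIMIT_PER_SECOND` environment variable described in the row above can be sketched as follows; the agent jar path in the comment is a placeholder, not confirmed by this changelog:

```shell
# Cap rate-limited sampling at ~1 request per second; this value takes
# precedence over any "limitPerSecond" set in applicationinsights.json.
export APPLICATIONINSIGHTS_SAMPLING_LIMIT_PER_SECOND=1.0

# The Java agent would then read the variable at JVM startup, for example:
#   java -javaagent:path/to/applicationinsights-agent.jar -jar app.jar
echo "$APPLICATIONINSIGHTS_SAMPLING_LIMIT_PER_SECOND"
```

Because the variable is read at startup, changing it requires restarting the JVM.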
azure-monitor | Java Standalone Sampling Overrides | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md | To begin, create a configuration file named *applicationinsights.json*. Save it ```json {- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000", + "connectionString": "...", "sampling": { "percentage": 10 }, To begin, create a configuration file named *applicationinsights.json*. Save it } ``` +> [!NOTE] +> Starting from 3.4.0-BETA, `telemetryKind` of `request`, `dependency`, `trace` (log), or `exception` is supported +> (and should be set) on all sampling overrides, e.g. +> ```json +> { +> "connectionString": "...", +> "sampling": { +> "percentage": 10 +> }, +> "preview": { +> "sampling": { +> "overrides": [ +> { +> "telemetryKind": "request", +> "attributes": [ +> ... +> ], +> "percentage": 0 +> }, +> { +> "telemetryKind": "request", +> "attributes": [ +> ... +> ], +> "percentage": 100 +> } +> ] +> } +> } +> } +> ``` + ## How it works When a span is started, the attributes present on the span at that time are used to check if any of the sampling overrides match. Matches can be either `strict` or `regexp`. Regular expression matches are performed against the entire attribute value, so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.+A sampling override can specify multiple attribute criteria, in which case all of them must match for the sampling +override to match. -If one of the sampling overrides match, then its sampling percentage is used to decide whether to sample the span or +If one of the sampling overrides matches, then its sampling percentage is used to decide whether to sample the span or not. Only the first sampling override that matches is used. If no sampling overrides match: -* If this is the first span in the trace, then the [default sampling percentage](./java-standalone-config.md#sampling) - is used. 
+* If this is the first span in the trace, then the + [top-level sampling configuration](./java-standalone-config.md#sampling) is used. * If this is not the first span in the trace, then the parent sampling decision is used. -> [!WARNING] -> When a decision has been made to not collect a span, then all downstream spans will also not be collected, -> even if there are sampling overrides that match the downstream span. -> This behavior is necessary because otherwise broken traces would result, with downstream spans being collected -> but being parented to spans that were not collected. - > [!NOTE]-> The sampling decision is based on hashing the traceId (also known as the operationId) to a number between 0 and 100, -> and that hash is then compared to the sampling percentage. -> Since all spans in a given trace will have the same traceId, they will have the same hash, -> and so the sampling decision will be consistent across the whole trace. +> Starting from 3.4.0-BETA, sampling overrides do not apply to "standalone" telemetry by default. Standalone telemetry +> is any telemetry that is not associated with a request, e.g. startup logs. +> You can make a sampling override apply to standalone telemetry by including the attribute +> `includingStandaloneTelemetry` in the sampling override, e.g. +> ```json +> { +> "connectionString": "...", +> "preview": { +> "sampling": { +> "overrides": [ +> { +> "telemetryKind": "dependency", +> "includingStandaloneTelemetry": true, +> "attributes": [ +> ... +> ], +> "percentage": 0 +> } +> ] +> } +> } +> } +> ``` ## Example: Suppress collecting telemetry for health checks This will also suppress collecting any downstream spans (dependencies) that woul ```json {- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000", + "connectionString": "...", "preview": { "sampling": { "overrides": [ This will suppress collecting telemetry for all `GET my-noisy-key` redis calls. 
```json {- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000", + "connectionString": "...", "preview": { "sampling": { "overrides": [ This will suppress collecting telemetry for all `GET my-noisy-key` redis calls. } ``` +> [!NOTE] +> Starting from 3.4.0-BETA, `telemetryKind` is supported (and recommended) on all sampling overrides, e.g. + ## Example: Collect 100% of telemetry for an important request type This will collect 100% of telemetry for `/login`. those will also be collected for all '/login' requests. ```json {- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000", + "connectionString": "...", "sampling": { "percentage": 10 }, |
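The matching rules described in the "How it works" section above (all of an override's attribute criteria must match; `strict` compares for equality while `regexp` must match the entire attribute value; the first matching override wins, otherwise the default percentage applies) can be sketched in Python. This is an illustrative model of the documented behavior, not the agent's actual implementation; the criterion field names mirror the JSON configuration:

```python
import re

def override_matches(override, span_attributes):
    """True if EVERY attribute criterion in the sampling override matches
    the span's attributes (a single non-matching criterion disqualifies it)."""
    for criterion in override.get("attributes", []):
        value = span_attributes.get(criterion["key"])
        if value is None:
            return False
        if criterion.get("matchType") == "regexp":
            # Regular expressions are matched against the ENTIRE attribute value.
            if not re.fullmatch(criterion["value"], value):
                return False
        else:  # strict match
            if value != criterion["value"]:
                return False
    return True

def effective_percentage(overrides, span_attributes, default_percentage):
    """Only the first matching override is used; if none match,
    fall back to the top-level sampling percentage."""
    for override in overrides:
        if override_matches(override, span_attributes):
            return override["percentage"]
    return default_percentage

overrides = [
    # Suppress health-check spans; note the .* needed to match the full URL.
    {"attributes": [{"key": "http.url", "value": ".*/health-check",
                     "matchType": "regexp"}], "percentage": 0},
    # Always collect login requests.
    {"attributes": [{"key": "http.url", "value": "https://example.com/login",
                     "matchType": "strict"}], "percentage": 100},
]

print(effective_percentage(overrides, {"http.url": "https://example.com/health-check"}, 10))  # 0
print(effective_percentage(overrides, {"http.url": "https://example.com/login"}, 10))         # 100
print(effective_percentage(overrides, {"http.url": "https://example.com/other"}, 10))         # 10
```

The last call shows the fall-through case: no override matches, so the top-level `"percentage": 10` from the configuration file applies.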
azure-resource-manager | Linter Rule No Loc Expr Outside Params | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-loc-expr-outside-params.md | Title: Linter rule - no location expressions outside of parameter default values description: Linter rule - no location expressions outside of parameter default values Previously updated : 1/6/2022 Last updated : 8/30/2022 # Linter rule - no location expressions outside of parameter default values You can fix the failure by turning the variable into a parameter: param location string = resourceGroup().location ``` +If you're using Azure PowerShell to deploy to a subscription, management group, or tenant, you should use a parameter name other than `location`. The [New-AzDeployment](/powershell/module/az.resources/new-azdeployment), [New-AzManagementGroupDeployment](/powershell/module/az.resources/new-azmanagementgroupdeployment), and [New-AzTenantDeployment](/powershell/module/az.resources/new-aztenantdeployment) commands have a parameter named `location`. This command parameter conflicts with the parameter in your Bicep file. You can avoid this conflict by using a name such as `rgLocation`. ++You can use `location` for a parameter name when deploying to a resource group, because [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) doesn't have a parameter named `location`. + ## Next steps For more information about the linter, see [Use Bicep linter](./linter.md). |
azure-resource-manager | Extension Resource Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md | Title: Extension resource types description: Lists the Azure resource types that are used to extend the capabilities of other resource types. Previously updated : 08/24/2022 Last updated : 08/31/2022 # Resource types that extend capabilities of other resources An extension resource is a resource that adds to another resource's capabilities * cloudServiceSlots * networkManagerConnections +## Microsoft.OperationalInsights ++* storageInsightConfigs + ## Microsoft.PolicyInsights * attestations |
azure-resource-manager | Resources Without Resource Group Limit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md | Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 08/10/2022 Last updated : 08/31/2022 # Resources not limited to 800 instances per resource group Some resources have a limit on the number of instances per region. This limit is di ## Microsoft.BotService -* botServices - By default, limited to 800 instances. That limit can be increased by contacting support. +* botServices - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit ## Microsoft.Compute Some resources have a limit on the number of instances per region. This limit is di * snapshots * virtualMachines * virtualMachines/extensions-* virtualMachineScaleSets - By default, limited to 800 instances. That limit can be increased by contacting support. +* virtualMachineScaleSets - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.Resources/ARMDisableResourcesPerRGLimit ## Microsoft.ContainerInstance Some resources have a limit on the number of instances per region. This limit is di ## Microsoft.DevTestLab -* labs/virtualMachines - By default, limited to 800 instances. That limit can be increased by contacting support. +* labs/virtualMachines - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.DevTestLab/DisableLabVirtualMachineQuota * schedules ## Microsoft.EdgeOrder Some resources have a limit on the number of instances per region. 
This limit is di ## Microsoft.NotificationHubs -* namespaces - By default, limited to 800 instances. That limit can be increased by contacting support. -* namespaces/notificationHubs - By default, limited to 800 instances. That limit can be increased by contacting support. +* namespaces - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit +* namespaces/notificationHubs - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit ## Microsoft.PowerBI -* workspaceCollections - By default, limited to 800 instances. That limit can be increased by contacting support. +* workspaceCollections - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.PowerBI/UnlimitedQuota ## Microsoft.PowerBIDedicated -* autoScaleVCores - By default, limited to 800 instances. That limit can be increased by contacting support. -* capacities - By default, limited to 800 instances. That limit can be increased by contacting support. +* autoScaleVCores - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.PowerBIDedicated/UnlimitedResourceGroupQuota +* capacities - By default, limited to 800 instances. That limit can be increased by [registering the following features](preview-features.md) - Microsoft.PowerBIDedicated/UnlimitedResourceGroupQuota ## Microsoft.Relay Some resources have a limit on the number of instances per region. This limit is di ## Microsoft.StreamAnalytics -* streamingjobs - By default, limited to 800 instances. 
That limit can be increased by [registering the following features](preview-features.md) - Microsoft.StreamAnalytics/ASADisableARMResourcesPerRGLimit ## Microsoft.Web |
azure-resource-manager | Tag Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md | Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 08/10/2022 Last updated : 08/31/2022 # Tag support for Azure resources To get the same data as a file of comma-separated values, download [tag-support. > | configurationProfiles / versions | Yes | Yes | > | patchJobConfigurations | Yes | Yes | > | patchJobConfigurations / patchJobs | No | No |+> | patchSchedules | Yes | Yes | +> | patchSchedules / associations | Yes | Yes | > | patchTiers | Yes | Yes | > | servicePrincipals | No | No | To get the same data as a file of comma-separated values, download [tag-support. > | accounts | Yes | Yes | > | accounts / datapools | No | No | > | workspaces | Yes | Yes |+> | workspaces / eventgridfilters | No | No | ## Microsoft.AutonomousSystems To get the same data as a file of comma-separated values, download [tag-support. > | catalogs / deviceRegistrations | Yes | Yes | > | catalogs / provisioningPackages | Yes | Yes | +## Microsoft.AzureSphereV2 ++> [!div class="mx-tableFixed"] +> | Resource type | Supports tags | Tag in cost report | +> | - | -- | -- | +> | catalogs | Yes | Yes | +> | catalogs / certificates | No | No | +> | catalogs / deviceRegistrations | Yes | Yes | +> | catalogs / provisioningPackages | Yes | Yes | + ## Microsoft.AzureStack > [!div class="mx-tableFixed"] To get the same data as a file of comma-separated values, download [tag-support. 
> | clusters / offers | No | No | > | clusters / publishers | No | No | > | clusters / publishers / offers | No | No |-> | galleryimages | Yes | Yes | -> | marketplacegalleryimages | Yes | Yes | +> | clusters / updates | No | No | +> | clusters / updates / updateRuns | No | No | +> | clusters / updateSummaries | No | No | +> | galleryImages | Yes | Yes | +> | marketplaceGalleryImages | Yes | Yes | > | networkinterfaces | Yes | Yes |-> | storagecontainers | Yes | Yes | +> | storageContainers | Yes | Yes | > | virtualharddisks | Yes | Yes | > | virtualmachines | Yes | Yes |-> | virtualmachines / extensions | Yes | Yes | +> | virtualMachines / extensions | Yes | Yes | > | virtualmachines / hybrididentitymetadata | No | No | > | virtualnetworks | Yes | Yes | To get the same data as a file of comma-separated values, download [tag-support. > | billingAccounts / createBillingRoleAssignment | No | No | > | billingAccounts / customers | No | No | > | billingAccounts / customers / billingPermissions | No | No |+> | billingAccounts / customers / billingRoleAssignments | No | No | +> | billingAccounts / customers / billingRoleDefinitions | No | No | > | billingAccounts / customers / billingSubscriptions | No | No |+> | billingAccounts / customers / createBillingRoleAssignment | No | No | > | billingAccounts / customers / initiateTransfer | No | No | > | billingAccounts / customers / policies | No | No | > | billingAccounts / customers / products | No | No | To get the same data as a file of comma-separated values, download [tag-support. 
> | snapshots | Yes | Yes | > | sshPublicKeys | Yes | Yes | > | virtualMachines | Yes | Yes |+> | virtualMachines / applications | Yes | Yes | > | virtualMachines / extensions | Yes | Yes | > | virtualMachines / metricDefinitions | No | No | > | virtualMachines / runCommands | Yes | Yes | > | virtualMachineScaleSets | Yes | Yes |+> | virtualMachineScaleSets / applications | No | No | > | virtualMachineScaleSets / extensions | No | No | > | virtualMachineScaleSets / networkInterfaces | No | No | > | virtualMachineScaleSets / publicIPAddresses | No | No | To get the same data as a file of comma-separated values, download [tag-support. > | VirtualMachines / GuestAgents | No | No | > | VirtualMachines / HybridIdentityMetadata | No | No | > | VirtualMachines / InstallPatches | No | No |+> | VirtualMachines / UpgradeExtensions | No | No | > | VirtualMachineTemplates | Yes | Yes | > | VirtualNetworks | Yes | Yes | To get the same data as a file of comma-separated values, download [tag-support. > | devcenters / images | No | No | > | networkconnections | Yes | Yes | > | projects | Yes | Yes |+> | projects / allowedEnvironmentTypes | No | No | > | projects / attachednetworks | No | No | > | projects / devboxdefinitions | No | No | > | projects / environmentTypes | No | No | To get the same data as a file of comma-separated values, download [tag-support. > | cassandraClusters | Yes | Yes | > | databaseAccountNames | No | No | > | databaseAccounts | Yes | Yes |+> | databaseAccounts / encryptionScopes | No | No | > | restorableDatabaseAccounts | No | No | ## Microsoft.DomainRegistration To get the same data as a file of comma-separated values, download [tag-support. 
> | networkFunctionPublishers / networkFunctionDefinitionGroups | No | No | > | networkFunctionPublishers / networkFunctionDefinitionGroups / publisherNetworkFunctionDefinitionVersions | No | No | > | networkfunctions | Yes | Yes |+> | networkfunctions / components | No | No | > | networkFunctionVendors | No | No | > | publishers | Yes | Yes | > | publishers / artifactStores | Yes | Yes | To get the same data as a file of comma-separated values, download [tag-support. > | mobileNetworks / simPolicies | Yes | Yes | > | mobileNetworks / sites | Yes | Yes | > | mobileNetworks / slices | Yes | Yes |-> | networks | Yes | Yes | -> | networks / sites | Yes | Yes | > | packetCoreControlPlanes | Yes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes | Yes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | Yes | Yes | > | packetCoreControlPlaneVersions | No | No |-> | packetCores | Yes | Yes | > | simGroups | Yes | Yes | > | simGroups / sims | No | No | > | sims | Yes | Yes |-> | sims / simProfiles | Yes | Yes | ## Microsoft.Monitor To get the same data as a file of comma-separated values, download [tag-support. > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | energyServices | Yes | Yes |+> | energyServices / privateEndpointConnectionProxies | No | No | +> | energyServices / privateEndpointConnections | No | No | +> | energyServices / privateLinkResources | No | No | ## Microsoft.OpenLogisticsPlatform To get the same data as a file of comma-separated values, download [tag-support. 
> | workspaces / shares | No | No | > | workspaces / shareSubscriptions | No | No | +## Microsoft.OperationalInsights ++> [!div class="mx-tableFixed"] +> | Resource type | Supports tags | Tag in cost report | +> | - | -- | -- | +> | clusters | Yes | Yes | +> | deletedWorkspaces | No | No | +> | linkTargets | No | No | +> | querypacks | Yes | Yes | +> | storageInsightConfigs | No | No | +> | workspaces | Yes | Yes | +> | workspaces / dataExports | No | No | +> | workspaces / dataSources | No | No | +> | workspaces / linkedServices | No | No | +> | workspaces / linkedStorageAccounts | No | No | +> | workspaces / metadata | No | No | +> | workspaces / networkSecurityPerimeterAssociationProxies | No | No | +> | workspaces / networkSecurityPerimeterConfigurations | No | No | +> | workspaces / query | No | No | +> | workspaces / scopedPrivateLinkProxies | No | No | +> | workspaces / storageInsightConfigs | No | No | +> | workspaces / tables | No | No | + ## Microsoft.Orbital > [!div class="mx-tableFixed"] To get the same data as a file of comma-separated values, download [tag-support. > | - | -- | -- | > | playerAccountPools | Yes | Yes | > | titles | Yes | Yes |+> | titles / automationRules | No | No | > | titles / segments | No | No | > | titles / titleDataSets | No | No | > | titles / titleInternalDataKeyValues | No | No | To get the same data as a file of comma-separated values, download [tag-support. > | testBaseAccounts / emailEvents | No | No | > | testBaseAccounts / externalTestTools | No | No | > | testBaseAccounts / externalTestTools / testCases | No | No |+> | testBaseAccounts / featureUpdateSupportedOses | No | No | > | testBaseAccounts / flightingRings | No | No | > | testBaseAccounts / packages | Yes | Yes | > | testBaseAccounts / packages / favoriteProcesses | No | No | To get the same data as a file of comma-separated values, download [tag-support. 
> | environments / privateLinkResources | No | No | > | environments / referenceDataSets | Yes | No | +## Microsoft.UsageBilling ++> [!div class="mx-tableFixed"] +> | Resource type | Supports tags | Tag in cost report | +> | - | -- | -- | +> | accounts | Yes | Yes | + ## Microsoft.VideoIndexer > [!div class="mx-tableFixed"] |
azure-resource-manager | Deployment Complete Mode Deletion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md | Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 08/10/2022 Last updated : 08/31/2022 # Deletion of Azure resources for complete mode deployments The resources are listed by resource provider namespace. To match a resource pro > | configurationProfiles / versions | Yes | > | patchJobConfigurations | Yes | > | patchJobConfigurations / patchJobs | No |+> | patchSchedules | Yes | +> | patchSchedules / associations | Yes | > | patchTiers | Yes | > | servicePrincipals | No | The resources are listed by resource provider namespace. To match a resource pro > | accounts | Yes | > | accounts / datapools | No | > | workspaces | Yes |+> | workspaces / eventgridfilters | No | ## Microsoft.AutonomousSystems The resources are listed by resource provider namespace. To match a resource pro > | catalogs / deviceRegistrations | Yes | > | catalogs / provisioningPackages | Yes | +## Microsoft.AzureSphereV2 ++> [!div class="mx-tableFixed"] +> | Resource type | Complete mode deletion | +> | - | -- | +> | catalogs | Yes | +> | catalogs / certificates | No | +> | catalogs / deviceRegistrations | Yes | +> | catalogs / provisioningPackages | Yes | + ## Microsoft.AzureStack > [!div class="mx-tableFixed"] The resources are listed by resource provider namespace. 
To match a resource pro > | clusters / offers | No | > | clusters / publishers | No | > | clusters / publishers / offers | No |-> | galleryimages | Yes | -> | marketplacegalleryimages | Yes | +> | clusters / updates | No | +> | clusters / updates / updateRuns | No | +> | clusters / updateSummaries | No | +> | galleryImages | Yes | +> | marketplaceGalleryImages | Yes | > | networkinterfaces | Yes |-> | storagecontainers | Yes | +> | storageContainers | Yes | > | virtualharddisks | Yes | > | virtualmachines | Yes |-> | virtualmachines / extensions | Yes | +> | virtualMachines / extensions | Yes | > | virtualmachines / hybrididentitymetadata | No | > | virtualnetworks | Yes | The resources are listed by resource provider namespace. To match a resource pro > | billingAccounts / createBillingRoleAssignment | No | > | billingAccounts / customers | No | > | billingAccounts / customers / billingPermissions | No |+> | billingAccounts / customers / billingRoleAssignments | No | +> | billingAccounts / customers / billingRoleDefinitions | No | > | billingAccounts / customers / billingSubscriptions | No |+> | billingAccounts / customers / createBillingRoleAssignment | No | > | billingAccounts / customers / initiateTransfer | No | > | billingAccounts / customers / policies | No | > | billingAccounts / customers / products | No | The resources are listed by resource provider namespace. To match a resource pro > | snapshots | Yes | > | sshPublicKeys | Yes | > | virtualMachines | Yes |+> | virtualMachines / applications | Yes | > | virtualMachines / extensions | Yes | > | virtualMachines / metricDefinitions | No | > | virtualMachines / runCommands | Yes | > | virtualMachineScaleSets | Yes |+> | virtualMachineScaleSets / applications | No | > | virtualMachineScaleSets / extensions | No | > | virtualMachineScaleSets / networkInterfaces | No | > | virtualMachineScaleSets / publicIPAddresses | No | The resources are listed by resource provider namespace. 
To match a resource pro > | VirtualMachines / GuestAgents | No | > | VirtualMachines / HybridIdentityMetadata | No | > | VirtualMachines / InstallPatches | No |+> | VirtualMachines / UpgradeExtensions | No | > | VirtualMachineTemplates | Yes | > | VirtualNetworks | Yes | The resources are listed by resource provider namespace. To match a resource pro > | devcenters / images | No | > | networkconnections | Yes | > | projects | Yes |+> | projects / allowedEnvironmentTypes | No | > | projects / attachednetworks | No | > | projects / devboxdefinitions | No | > | projects / environmentTypes | No | The resources are listed by resource provider namespace. To match a resource pro > | cassandraClusters | Yes | > | databaseAccountNames | No | > | databaseAccounts | Yes |+> | databaseAccounts / encryptionScopes | No | > | restorableDatabaseAccounts | No | ## Microsoft.DomainRegistration The resources are listed by resource provider namespace. To match a resource pro > | networkFunctionPublishers / networkFunctionDefinitionGroups | No | > | networkFunctionPublishers / networkFunctionDefinitionGroups / publisherNetworkFunctionDefinitionVersions | No | > | networkfunctions | Yes |+> | networkfunctions / components | No | > | networkFunctionVendors | No | > | publishers | Yes | > | publishers / artifactStores | Yes | The resources are listed by resource provider namespace. To match a resource pro > | mobileNetworks / simPolicies | Yes | > | mobileNetworks / sites | Yes | > | mobileNetworks / slices | Yes |-> | networks | Yes | -> | networks / sites | Yes | > | packetCoreControlPlanes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | Yes | > | packetCoreControlPlaneVersions | No |-> | packetCores | Yes | > | simGroups | Yes | > | simGroups / sims | No | > | sims | Yes |-> | sims / simProfiles | Yes | ## Microsoft.Monitor The resources are listed by resource provider namespace. 
To match a resource pro > | Resource type | Complete mode deletion | > | - | -- | > | energyServices | Yes |+> | energyServices / privateEndpointConnectionProxies | No | +> | energyServices / privateEndpointConnections | No | +> | energyServices / privateLinkResources | No | ## Microsoft.OpenLogisticsPlatform The resources are listed by resource provider namespace. To match a resource pro > | workspaces / shares | No | > | workspaces / shareSubscriptions | No | +## Microsoft.OperationalInsights ++> [!div class="mx-tableFixed"] +> | Resource type | Complete mode deletion | +> | - | -- | +> | clusters | Yes | +> | deletedWorkspaces | No | +> | linkTargets | No | +> | querypacks | Yes | +> | storageInsightConfigs | No | +> | workspaces | Yes | +> | workspaces / dataExports | No | +> | workspaces / dataSources | No | +> | workspaces / linkedServices | No | +> | workspaces / linkedStorageAccounts | No | +> | workspaces / metadata | No | +> | workspaces / networkSecurityPerimeterAssociationProxies | No | +> | workspaces / networkSecurityPerimeterConfigurations | No | +> | workspaces / query | No | +> | workspaces / scopedPrivateLinkProxies | No | +> | workspaces / storageInsightConfigs | No | +> | workspaces / tables | No | + ## Microsoft.Orbital > [!div class="mx-tableFixed"] The resources are listed by resource provider namespace. To match a resource pro > | - | -- | > | playerAccountPools | Yes | > | titles | Yes |+> | titles / automationRules | No | > | titles / segments | No | > | titles / titleDataSets | No | > | titles / titleInternalDataKeyValues | No | The resources are listed by resource provider namespace. 
To match a resource pro > | testBaseAccounts / emailEvents | No | > | testBaseAccounts / externalTestTools | No | > | testBaseAccounts / externalTestTools / testCases | No |+> | testBaseAccounts / featureUpdateSupportedOses | No | > | testBaseAccounts / flightingRings | No | > | testBaseAccounts / packages | Yes | > | testBaseAccounts / packages / favoriteProcesses | No | The resources are listed by resource provider namespace. To match a resource pro > | environments / privateLinkResources | No | > | environments / referenceDataSets | Yes | +## Microsoft.UsageBilling ++> [!div class="mx-tableFixed"] +> | Resource type | Complete mode deletion | +> | - | -- | +> | accounts | Yes | + ## Microsoft.VideoIndexer > [!div class="mx-tableFixed"] |
azure-video-indexer | Accounts Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md | -## Overview +## A trial account -The first time you visit the [www.videoindexer.ai/](https://www.videoindexer.ai/) website, a trial account is automatically created. A trial Azure Video Indexer account has limitation on number of indexing minutes, support, and SLA. +The first time you visit the [Azure Video Indexer](https://www.videoindexer.ai/) website, a trial account is automatically created. The trial Azure Video Indexer account has limitations on the number of indexing minutes, support, and SLA. -With a trial, account Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal). +With a trial account, Azure Video Indexer provides: -The trial account is not available on the Azure Government cloud. For other Azure Government limitations, see [Limitations of Azure Video Indexer on Azure Government](connect-to-azure.md#limitations-of-azure-video-indexer-on-azure-government). +* up to 600 minutes of free indexing to [Azure Video Indexer](https://www.videoindexer.ai/) website users, and +* up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal). ++When using the trial account, you don't have to set up an Azure subscription. ++The trial account option is not available on the Azure Government cloud. For other Azure Government limitations, see [Limitations of Azure Video Indexer on Azure Government](connect-to-azure.md#limitations-of-azure-video-indexer-on-azure-government). ++## A paid (unlimited) account You can later create a paid account where you're not limited by the quota. 
Two types of paid accounts are available to you: Azure Resource Manager (ARM) (currently in preview) and classic (generally available). The main difference between the two is the account management platform. While classic accounts are built on API Management, ARM-based account management is built on Azure, which enables you to apply access control to all services with role-based access control (Azure RBAC) natively. -Make sure to review [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/). +With the paid option, you pay for indexed minutes. For more information, see [Azure Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/). -## Connecting to Azure subscription +When creating a new paid account, you need to connect the Azure Video Indexer account to your Azure subscription and an Azure Media Services account. -With a trial account, you don't have to set up an Azure subscription. When creating a paid account, you need to connect Azure Video Indexer [to your Azure subscription and an Azure Media Services account](connect-to-azure.md). +**The recommended paid account type is the ARM-based account**. ## To get access to your account With a trial account, you don't have to set up an Azure subscription. When creat ## Create accounts -* ARM accounts: **The recommended paid account type is the ARM-based account**. +* Creating ARM accounts. Make sure you are signed in with the correct domain to the [Azure Video Indexer website](https://www.videoindexer.ai/). For details, see [Switch tenants](switch-tenants-portal.md). - * You can create an Azure Video Indexer **ARM-based** account through one of the following: + * You can create an Azure Video Indexer ARM-based account through one of the following: - 1. [Azure Video Indexer portal](https://aka.ms/vi-portal-link) - 2. [Azure portal](https://portal.azure.com/#home) + 1. The [Azure Video Indexer website](https://aka.ms/vi-portal-link) + 2. 
The [Azure portal](https://portal.azure.com/#home) For a detailed description, see [Get started with Azure Video Indexer in Azure portal](create-account-portal.md). * Upgrade a trial account to an ARM-based account and [import your content for free](import-content-from-trial.md). -* Classic accounts: [Create classic accounts using API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account). -* Connect a classic account to ARM: [Connect an existing classic paid Azure Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md). +* [Create classic accounts using API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account). +* [Connect an existing classic paid Azure Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md). ## Limited access features For more information, see [Azure Video Indexer limited access features](limited- ## Next steps -[Pricing](https://azure.microsoft.com/pricing/details/video-indexer/) +Make sure to review [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/). |
azure-video-indexer | Logic Apps Connector Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md | Also, add a new "Shared Access Protocol" parameter. Choose HttpsOnly for the val  -Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#account-id) to get the Azure Video Indexer account token. +Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure Video Indexer account token.  Create the second flow separate from the first one. To set up this flow, you will need to provide your Azure Video Indexer API Key and Azure Storage credentials again. You will need to update the same parameters as you did for the first flow. -For your trigger, you will see a HTTP POST URL field. The URL won’t be generated until after you save your flow; however, you will need the URL eventually. We will come back to this. +For your trigger, you will see an HTTP POST URL field. The URL won’t be generated until after you save your flow; however, you will need the URL eventually. We will come back to this. -Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#account-id) to get the Azure Video Indexer account token. +Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure Video Indexer account token. Go to the “Get Video Index” action and fill out the required parameters. For Video ID, put in the following expression: triggerOutputs()['queries']['id'] |
azure-video-indexer | Video Indexer Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md | -This quickstart shows you how to sign in to the Azure Video Indexer [website](https://www.videoindexer.ai/) and how to upload your first video. When visiting the Azure Video Indexer website for the first time, the free trial account is automatically created for you. With the free trial account, you get a certain number of free indexing minutes. When creating an unlimited/paid account, you aren't limited by the quota. +This quickstart shows you how to sign in to the Azure Video Indexer [website](https://www.videoindexer.ai/) and how to upload your first video. When visiting the Azure Video Indexer website for the first time, a trial account is automatically created for you. With the trial account, you get a certain number of free indexing minutes. You can later add a paid (ARM-based or classic) account. With the paid option, you pay for indexed minutes. -With free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With paid option, you create an Azure Video Indexer account that is [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed, for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/). --For details about available accounts, see [Azure Video Indexer account types](accounts-overview.md). +For details about available accounts (trial and paid options), see [Azure Video Indexer account types](accounts-overview.md). 
## Sign up for Azure Video Indexer Once you start using Azure Video Indexer, all your stored data and uploaded cont ## Upload a video using the Azure Video Indexer website +### Supported browsers ++For more information, see [supported browsers](video-indexer-overview.md#supported-browsers). + ### Supported file formats for Azure Video Indexer See the [input container/file formats](/azure/media-services/latest/encode-media-encoder-standard-formats-reference) article for a list of file formats that you can use with Azure Video Indexer. See the [input container/file formats](/azure/media-services/latest/encode-media > [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/progress.png" alt-text="Progress of the upload"::: - The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility). + The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility). + 1. Once Azure Video Indexer is done analyzing, you'll get an email with a link to your video and a short description of what was found in your video. For example: people, spoken and written words, topics, and named entities. 1. You can later find your video in the library list and perform different operations. For example: search, reindex, edit. For more details, see [Upload and index videos](upload-index-videos.md). To start using the APIs, see [use APIs](video-indexer-use-apis.md) -## Supported browsers --For more information, see [supported browsers](video-indexer-overview.md#supported-browsers). - ## Next steps For a detailed introduction, please visit our [introduction lab](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/IntroToVideoIndexer.md). |
azure-video-indexer | Video Indexer Use Apis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md | -When creating an Azure Video Indexer account, you can choose a trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a trial, account, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With a paid option, you create an Azure Video Indexer account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed, for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/). +When visiting the [Azure Video Indexer](https://www.videoindexer.ai/) website for the first time, a trial account is automatically created for you. With the trial account, you get a certain number of free indexing minutes. You can later add a paid (ARM-based or classic) account. With the paid option, you pay for indexed minutes. ++For details about available accounts (trial and paid options), see [Azure Video Indexer account types](accounts-overview.md). This article shows how the developers can take advantage of the [Azure Video Indexer API](https://api-portal.videoindexer.ai/). This article shows how the developers can take advantage of the [Azure Video Ind Select the [Products](https://api-portal.videoindexer.ai/products) tab. Then, select Authorization and subscribe. -  +  > [!NOTE] > New users are automatically subscribed to Authorization. Access tokens expire after 1 hour. Make sure your access token is valid before u You're ready to start integrating with the API. Find [the detailed description of each Azure Video Indexer REST API](https://api-portal.videoindexer.ai/). 
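As a sketch of the flow described above — subscribe to the Authorization product, then request a short-lived (about one hour) account access token — the helper below builds the Get Account Access Token request. The helper name and all parameter values are placeholders; the URL shape follows the Get Account Access Token operation in the API portal.

```python
from urllib.parse import urlencode

def access_token_request(location, account_id, api_key, allow_edit=True):
    """Build the URL and headers for the Get Account Access Token call.

    The returned token expires after about an hour, so request a new one
    when it does. All argument values here are placeholders.
    """
    query = urlencode({"allowEdit": str(allow_edit).lower()})
    url = (f"https://api.videoindexer.ai/Auth/{location}"
           f"/Accounts/{account_id}/AccessToken?{query}")
    headers = {"Ocp-Apim-Subscription-Key": api_key}
    return url, headers

# Example with placeholder values ("trial" or an Azure region for location):
url, headers = access_token_request("trial", "<account-guid>", "<api-key>")
```

A GET on that URL with those headers (for example, with `urllib.request` or `requests`) returns the token as a JSON string.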
-## Account ID +## Recommendations ++This section lists some recommendations when using the Azure Video Indexer API. ++- If you're planning to upload a video, it's recommended to place the file in some public network location (for example, an Azure Blob Storage account). Get the link to the video and provide the URL as the upload file param. ++ The URL provided to Azure Video Indexer must point to a media (audio or video) file. An easy verification for the URL (or SAS URL) is to paste it into a browser; if the file starts playing or downloading, it's likely a good URL. If the browser is rendering some visualization, it's likely not a link to a file but to an HTML page. +- When you call the API that gets video insights for the specified video, you get a detailed JSON output as the response content. [See details about the returned JSON in this topic](video-indexer-output-json-v2.md). +- The JSON output produced by the API contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility). +- We do not recommend that you use data directly from the artifacts folder for production purposes. Artifacts are intermediate outputs of the indexing process. They are essentially raw outputs of the various AI engines that analyze the videos; the artifacts schema may change over time. ++ It is recommended that you use the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, as described in [Get insights and artifacts produced by the API](video-indexer-output-json-v2.md#get-insights-produced-by-the-api) and **not** [Get-Video-Artifact-Download-Url](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url). ++## Operational API calls The Account ID parameter is required in all operational API calls.
Account ID is a GUID that can be obtained in one of the following ways: The Account ID parameter is required in all operational API calls. Account ID is https://www.videoindexer.ai/accounts/00000000-f324-4385-b142-f77dacb0a368/videos/d45bf160b5/ ``` -## Recommendations --This section lists some recommendations when using Azure Video Indexer API. --- If you're planning to upload a video, it's recommended to place the file in some public network location (for example, an Azure Blob Storage account). Get the link to the video and provide the URL as the upload file param.-- The URL provided to Azure Video Indexer must point to a media (audio or video) file. An easy verification for the URL (or SAS URL) is to paste it into a browser, if the file starts playing/downloading, it's likely a good URL. If the browser is rendering some visualization, it's likely not a link to a file but to an HTML page. --- When you call the API that gets video insights for the specified video, you get a detailed JSON output as the response content. [See details about the returned JSON in this topic](video-indexer-output-json-v2.md).- ## Code sample The following C# code snippet demonstrates the usage of all the Azure Video Indexer APIs together. Debug.WriteLine(playerWidgetLink); After you are done with this tutorial, delete resources that you are not planning to use. -## Considerations --* The JSON output produced by the API contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility). -* We do not recommend that you use data directly from the artifacts folder for production purposes. Artifacts are intermediate outputs of the indexing process. They are essentially raw outputs of the various AI engines that analyze the videos; the artifacts schema may change over time. 
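One of the ways listed above to obtain the account ID is to copy it out of the website URL. A small sketch of that (the helper name is ours, not part of the API):

```python
import re

# Matches a GUID such as 00000000-f324-4385-b142-f77dacb0a368
GUID_PATTERN = r"[0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}"

def account_id_from_url(url):
    """Extract the account ID (a GUID) from an Azure Video Indexer website URL."""
    match = re.search(rf"/accounts/({GUID_PATTERN})/", url)
    return match.group(1) if match else None

# Using the example URL shown in this article:
account_id = account_id_from_url(
    "https://www.videoindexer.ai/accounts/"
    "00000000-f324-4385-b142-f77dacb0a368/videos/d45bf160b5/")
# account_id == '00000000-f324-4385-b142-f77dacb0a368'
```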
-- It is recommended that you use the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, as described in [Get insights and artifacts produced by the API](video-indexer-output-json-v2.md#get-insights-produced-by-the-api) and **not** [Get-Video-Artifact-Download-Url](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url). - ## See also - [Azure Video Indexer overview](video-indexer-overview.md) |
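Following the recommendation above to upload from a public URL, the sketch below builds an Upload Video request. The video URL must point directly at a media file (for example, a blob SAS URL), not an HTML page; the helper name and values are placeholders, and the query shape follows the Upload Video operation in the API portal.

```python
from urllib.parse import urlencode

def build_upload_request(location, account_id, access_token, name, video_url):
    """Build the Upload Video request URL (sent as a POST with an empty body)."""
    query = urlencode({
        "name": name,                 # display name for the video
        "videoUrl": video_url,        # must point at a media file, not an HTML page
        "accessToken": access_token,  # account access token with edit permission
    })
    return (f"https://api.videoindexer.ai/{location}"
            f"/Accounts/{account_id}/Videos?{query}")
```

`urlencode` percent-escapes the video URL, so a SAS URL with query parameters of its own can be passed through safely.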
azure-vmware | Concepts Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md | The CloudAdmin role in Azure VMware Solution has the following privileges on vCe | Privilege | Description | | | -- | | **Alarms** | Acknowledge alarm<br />Create alarm<br />Disable alarm action<br />Modify alarm<br />Remove alarm<br />Set alarm status |-| **Content Library** | Add library item<br />Create a subscription for a published library<br />Create local library<br />Create subscribed library<br />Delete library item<br />Delete local library<br />Delete subscribed library<br />Delete subscription of a published library<br />Download files<br />Evict library items<br />Evict subscribed library<br />Import storage<br />Probe subscription information<br />Publish a library item to its subscribers<br />Publish a library to its subscribers<br />Read storage<br />Sync library item<br />Sync subscribed library<br />Type introspection<br />Update configuration settings<br />Update files<br />Update library<br />Update library item<br />Update local library<br />Update subscribed library<br />Update subscription of a published library<br />View configuration settings | +| **Content Library** | Add library item<br />Add root certificate to trust store<br />Check in a template<br />Check out a template<br />Create a subscription for a published library<br />Create local library<br />Create or delete a Harbor registry<br />Create subscribed library<br />Create, delete or purge a Harbor registry project<br />Delete library item<br />Delete local library<br />Delete root certificate from trust store<br />Delete subscribed library<br />Delete subscription of a published library<br />Download files<br />Evict library items<br />Evict subscribed library<br />Import storage<br />Manage Harbor registry resources on specified compute resource<br />Probe subscription information<br />Publish a library item to its subscribers<br />Publish a 
library to its subscribers<br />Read storage<br />Sync library item<br />Sync subscribed library<br />Type introspection<br />Update configuration settings<br />Update files<br />Update library<br />Update library item<br />Update local library<br />Update subscribed library<br />Update subscription of a published library<br />View configuration settings | | **Cryptographic operations** | Direct access | | **Datastore** | Allocate space<br />Browse datastore<br />Configure datastore<br />Low-level file operations<br />Remove files<br />Update virtual machine metadata | | **Folder** | Create folder<br />Delete folder<br />Move folder<br />Rename folder | The CloudAdmin role in Azure VMware Solution has the following privileges on vCe ### Create custom roles on vCenter Server -Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role. +Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role. You'll use the CloudAdmin role to create, modify, or delete custom roles with privileges lesser than or equal to their current role. -You'll use the CloudAdmin role to create, modify, or delete custom roles with privileges lesser than or equal to their current role. You can create roles with privileges greater than CloudAdmin. You can't assign the role to any users or groups or delete the role. + >[!NOTE] + >You can create roles with privileges greater than CloudAdmin. However, you can't assign the role to any users or groups or delete the role. Roles that have privileges greater than that of CloudAdmin are unsupported. To prevent creating roles that can't be assigned or deleted, clone the CloudAdmin role as the basis for creating new custom roles. To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi 1. Provide the name you want for the cloned role. -1. Add or remove privileges for the role and select **OK**.
The cloned role is visible in the **Roles** list. +1. Remove privileges for the role and select **OK**. The cloned role is visible in the **Roles** list. #### Apply a custom role To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi 1. Search for the user or group after selecting the Identity Source under the **User** section. 1. Select the role that you want to apply to the user or group.+ >[!NOTE] + >Attempting to apply a user or group to a role that has privileges greater than that of CloudAdmin will result in errors. 1. Check the **Propagate to children** if needed, and select **OK**. The added permission displays in the **Permissions** section. + ## NSX-T Manager access and identity When a private cloud is provisioned using Azure portal, software-defined data center (SDDC) management components like vCenter Server and NSX-T Manager are provisioned for customers. |
cognitive-services | Rest Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md | Title: Text-to-speech API reference (REST) - Speech service -description: Learn how to use the REST API to convert text into synthesized speech. +description: Learn how to use the REST API to convert text into synthesized speech. This response has been truncated to illustrate the structure of a response. ], "Status": "Preview" },- + ...- + { "Name": "Microsoft Server Speech Text to Speech Voice (ga-IE, OrlaNeural)", "DisplayName": "Orla", If the HTTP status is `200 OK`, the body of the response contains an audio file This is a list of supported audio formats that are sent in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Prebuilt neural voices are created from samples that use a 24-kHz sample rate. All voices can upsample or downsample to other sample rates when synthesizing.
-|Streaming |Non-Streaming | -|-|-| -|audio-16khz-16bit-32kbps-mono-opus|riff-8khz-8bit-mono-alaw | -|audio-16khz-32kbitrate-mono-mp3 |riff-8khz-8bit-mono-mulaw| -|audio-16khz-64kbitrate-mono-mp3 |riff-8khz-16bit-mono-pcm | -|audio-16khz-128kbitrate-mono-mp3 |riff-24khz-16bit-mono-pcm| -|audio-24khz-16bit-24kbps-mono-opus|riff-48khz-16bit-mono-pcm| -|audio-24khz-16bit-48kbps-mono-opus| | -|audio-24khz-48kbitrate-mono-mp3 | | -|audio-24khz-96kbitrate-mono-mp3 | | -|audio-24khz-160kbitrate-mono-mp3 | | -|audio-48khz-96kbitrate-mono-mp3 | | -|audio-48khz-192kbitrate-mono-mp3 | | -|ogg-16khz-16bit-mono-opus | | -|ogg-24khz-16bit-mono-opus | | -|ogg-48khz-16bit-mono-opus | | -|raw-8khz-8bit-mono-alaw | | -|raw-8khz-8bit-mono-mulaw | | -|raw-8khz-16bit-mono-pcm | | -|raw-16khz-16bit-mono-pcm | | -|raw-16khz-16bit-mono-truesilk | | -|raw-24khz-16bit-mono-pcm | | -|raw-24khz-16bit-mono-truesilk | | -|raw-48khz-16bit-mono-pcm | | -|webm-16khz-16bit-mono-opus | | -|webm-24khz-16bit-24kbps-mono-opus | | -|webm-24khz-16bit-mono-opus | | +| Streaming | Non-Streaming | +| - | | +| audio-16khz-16bit-32kbps-mono-opus | riff-8khz-8bit-mono-alaw | +| audio-16khz-32kbitrate-mono-mp3 | riff-8khz-8bit-mono-mulaw | +| audio-16khz-64kbitrate-mono-mp3 | riff-8khz-16bit-mono-pcm | +| audio-16khz-128kbitrate-mono-mp3 | riff-22050hz-16bit-mono-pcm | +| audio-24khz-16bit-24kbps-mono-opus | riff-24khz-16bit-mono-pcm | +| audio-24khz-16bit-48kbps-mono-opus | riff-44100hz-16bit-mono-pcm | +| audio-24khz-48kbitrate-mono-mp3 | riff-48khz-16bit-mono-pcm | +| audio-24khz-96kbitrate-mono-mp3 | | +| audio-24khz-160kbitrate-mono-mp3 | | +| audio-48khz-96kbitrate-mono-mp3 | | +| audio-48khz-192kbitrate-mono-mp3 | | +| ogg-16khz-16bit-mono-opus | | +| ogg-24khz-16bit-mono-opus | | +| ogg-48khz-16bit-mono-opus | | +| raw-8khz-8bit-mono-alaw | | +| raw-8khz-8bit-mono-mulaw | | +| raw-8khz-16bit-mono-pcm | | +| raw-16khz-16bit-mono-pcm | | +| raw-16khz-16bit-mono-truesilk | | +| raw-22050hz-16bit-mono-pcm | 
| +| raw-24khz-16bit-mono-pcm | | +| raw-24khz-16bit-mono-truesilk | | +| raw-44100hz-16bit-mono-pcm | | +| raw-48khz-16bit-mono-pcm | | +| webm-16khz-16bit-mono-opus | | +| webm-24khz-16bit-24kbps-mono-opus | | +| webm-24khz-16bit-mono-opus | | > [!NOTE] > en-US-AriaNeural, en-US-JennyNeural, and zh-CN-XiaoxiaoNeural are available in public preview in 48-kHz output. Other voices support 24-kHz output upsampled to 48 kHz. |
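To put the table to use, a request must name one of these formats in the `X-Microsoft-OutputFormat` header. The sketch below assembles the headers for a synthesis POST to the regional endpoint (`https://<region>.tts.speech.microsoft.com/cognitiveservices/v1`, with SSML as the request body); the helper name, default format choice, and application name are our placeholders.

```python
def tts_headers(subscription_key, output_format="audio-24khz-48kbitrate-mono-mp3"):
    """Headers for a text-to-speech POST; the body of the request is SSML.

    output_format must be one of the values from the table above.
    """
    return {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": output_format,
        "User-Agent": "my-tts-app",  # placeholder application name
    }
```

On success the response body is the audio itself, encoded as requested, so picking a streaming format versus a RIFF container is just a matter of which table column the header value comes from.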
cognitive-services | Cognitive Services Environment Variables | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-environment-variables.md | + + Title: Use environment variables with Cognitive Services ++description: "This guide shows you how to set and retrieve environment variables to handle your Cognitive Services subscription credentials in a more secure way when you test out applications." +++++ Last updated : 08/15/2022++++# Use environment variables with Cognitive Services ++This guide shows you how to set and retrieve environment variables to handle your Cognitive Services subscription credentials in a more secure way when you test out applications. ++## Set an environment variable ++To set environment variables, use one of the following commands, where the `ENVIRONMENT_VARIABLE_KEY` is the named key and `value` is the value stored in the environment variable. ++# [Command Line](#tab/command-line) ++Use the following command to create and assign a persisted environment variable, given the input value. ++```CMD +:: Assigns the env var to the value +setx ENVIRONMENT_VARIABLE_KEY "value" +``` ++In a new instance of the Command Prompt, use the following command to read the environment variable. ++```CMD +:: Prints the env var value +echo %ENVIRONMENT_VARIABLE_KEY% +``` ++# [PowerShell](#tab/powershell) ++Use the following command to create and assign a persisted environment variable, given the input value. ++```powershell +# Assigns the env var to the value +[System.Environment]::SetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY', 'value', 'User') +``` ++In a new instance of Windows PowerShell, use the following command to read the environment variable. ++```powershell +# Prints the env var value +[System.Environment]::GetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY') +``` ++# [Bash](#tab/bash) ++Use the following command to create and assign a persisted environment variable, given the input value.
++```Bash +# Assigns the env var to the value +echo export ENVIRONMENT_VARIABLE_KEY="value" >> /etc/environment && source /etc/environment +``` ++In a new instance of **Bash**, use the following command to read the environment variable. ++```Bash +# Prints the env var value +echo "${ENVIRONMENT_VARIABLE_KEY}" ++# Or use printenv: +# printenv ENVIRONMENT_VARIABLE_KEY +``` ++++> [!TIP] +> After you set an environment variable, restart your integrated development environment (IDE) to ensure that the newly added environment variables are available. ++## Retrieve an environment variable ++To use an environment variable in your code, it must be read into memory. Use one of the following code snippets, depending on which language you're using. These code snippets demonstrate how to get an environment variable given the `ENVIRONMENT_VARIABLE_KEY` and assign the value to a program variable named `value`. ++# [C#](#tab/csharp) ++For more information, see <a href="/dotnet/api/system.environment.getenvironmentvariable" target="_blank">`Environment.GetEnvironmentVariable` </a>. ++```csharp +using static System.Environment; ++class Program +{ + static void Main() + { + // Get the named env var, and assign it to the value variable + var value = + GetEnvironmentVariable( + "ENVIRONMENT_VARIABLE_KEY"); + } +} +``` ++# [C++](#tab/cpp) ++For more information, see <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv` </a>. ++```cpp +#include <stdlib.h> ++int main() +{ + // Get the named env var, and assign it to the value variable + auto value = + getenv("ENVIRONMENT_VARIABLE_KEY"); +} +``` ++# [Java](#tab/java) ++For more information, see <a href="https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#getenv(java.lang.String)" target="_blank">`System.getenv` </a>.
++```java +import java.lang.*; ++public class Program { + public static void main(String[] args) throws Exception { + // Get the named env var, and assign it to the value variable + String value = + System.getenv( + "ENVIRONMENT_VARIABLE_KEY"); + } +} +``` ++# [Node.js](#tab/node-js) ++For more information, see <a href="https://nodejs.org/api/process.html#process_process_env" target="_blank">`process.env` </a>. ++```javascript +// Get the named env var, and assign it to the value variable +const value = + process.env.ENVIRONMENT_VARIABLE_KEY; +``` ++# [Python](#tab/python) ++For more information, see <a href="https://docs.python.org/3/library/os.html#os.environ" target="_blank">`os.environ` </a>. ++```python +import os ++# Get the named env var, and assign it to the value variable +value = os.environ['ENVIRONMENT_VARIABLE_KEY'] +``` ++# [Objective-C](#tab/objective-c) ++For more information, see <a href="https://developer.apple.com/documentation/foundation/nsprocessinfo/1417911-environment?language=objc" target="_blank">`environment` </a>. ++```objectivec +// Get the named env var, and assign it to the value variable +NSString* value = + [[[NSProcessInfo processInfo]environment]objectForKey:@"ENVIRONMENT_VARIABLE_KEY"]; +``` ++++## Next steps ++* Explore [Cognitive Services](./what-are-cognitive-services.md) and choose a service to get started. |
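The per-language snippets above read a variable straight out of the environment; in practice it helps to fail fast with a clear message when the variable was never set (for example, when the IDE wasn't restarted after setting it). A minimal Python sketch of that pattern, with a helper name of our choosing:

```python
import os

def require_env(name):
    """Return the named environment variable, or raise a clear error if unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"The {name} environment variable is not set; "
            "set it and restart your IDE or shell.")
    return value
```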
cognitive-services | Cognitive Services Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-security.md | - Title: Azure Cognitive Services security- -description: Learn about the various security considerations for Cognitive Services usage. ----- Previously updated : 08/28/2020-----# Azure Cognitive Services security --Security should be considered a top priority when developing any and all applications. With the onset of artificial intelligence enabled applications, security is even more important. In this article various aspects of Azure Cognitive Services security are outlined, such as the use of transport layer security, authentication, securely configuring sensitive data, and Customer Lockbox for customer data access. --## Transport Layer Security (TLS) --All of the Cognitive Services endpoints exposed over HTTP enforce TLS 1.2. With an enforced security protocol, consumers attempting to call a Cognitive Services endpoint should adhere to these guidelines: --* The client Operating System (OS) needs to support TLS 1.2 -* The language (and platform) used to make the HTTP call need to specify TLS 1.2 as part of the request - * Depending on the language and platform, specifying TLS is done either implicitly or explicitly --For .NET users, consider the <a href="/dotnet/framework/network-programming/tls" target="_blank">Transport Layer Security best practices </a>. --## Authentication --When discussing authentication, there are several common misconceptions. Authentication and authorization are often confused for one another. Identity is also a major component in security. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal </a>. Identity providers (IdP) provide identities to authentication services. Authentication is the act of verifying a user's identity. 
Authorization is the specification of access rights and privileges to resources for a given identity. Several of the Cognitive Services offerings, include Azure role-based access control (Azure RBAC). Azure RBAC could be used to simplify some of the ceremony involved with manually managing principals. For more details, see [Azure role-based access control for Azure resources](../role-based-access-control/overview.md). --For more information on authentication with subscription keys, access tokens and Azure Active Directory (AAD), see <a href="/azure/cognitive-services/authentication" target="_blank">authenticate requests to Azure Cognitive Services</a>. --## Environment variables and application configuration --Environment variables are name-value pairs, stored within a specific environment. A more secure alternative to using hardcoded values for sensitive data, is to use environment variables. Hardcoded values are insecure and should be avoided. --> [!CAUTION] -> Do **not** use hardcoded values for sensitive data, doing so is a major security vulnerability. --> [!NOTE] -> While environment variables are stored in plain text, they are isolated to an environment. If an environment is compromised, so too are the variables with the environment. --### Set environment variable --To set environment variables, use one the following commands - where the `ENVIRONMENT_VARIABLE_KEY` is the named key and `value` is the value stored in the environment variable. --# [Command Line](#tab/command-line) --Create and assign persisted environment variable, given the value. --```CMD -:: Assigns the env var to the value -setx ENVIRONMENT_VARIABLE_KEY="value" -``` --In a new instance of the **Command Prompt**, read the environment variable. --```CMD -:: Prints the env var value -echo %ENVIRONMENT_VARIABLE_KEY% -``` --# [PowerShell](#tab/powershell) --Create and assign persisted environment variable, given the value. 
--```powershell -# Assigns the env var to the value -[System.Environment]::SetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY', 'value', 'User') -``` --In a new instance of the **Windows PowerShell**, read the environment variable. --```powershell -# Prints the env var value -[System.Environment]::GetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY') -``` --# [Bash](#tab/bash) --Create and assign persisted environment variable, given the value. --```Bash -# Assigns the env var to the value -echo export ENVIRONMENT_VARIABLE_KEY="value" >> /etc/environment && source /etc/environment -``` --In a new instance of the **Bash**, read the environment variable. --```Bash -# Prints the env var value -echo "${ENVIRONMENT_VARIABLE_KEY}" --# Or use printenv: -# printenv ENVIRONMENT_VARIABLE_KEY -``` ----> [!TIP] -> After setting an environment variable, restart your integrated development environment (IDE) to ensure that newly added environment variables are available. --### Get environment variable --To get an environment variable, it must be read into memory. Depending on the language you're using, consider the following code snippets. These code snippets demonstrate how to get environment variable given the `ENVIRONMENT_VARIABLE_KEY` and assign to a variable named `value`. --# [C#](#tab/csharp) --For more information, see <a href="/dotnet/api/system.environment.getenvironmentvariable" target="_blank">`Environment.GetEnvironmentVariable` </a>. --```csharp -using static System.Environment; --class Program -{ - static void Main() - { - // Get the named env var, and assign it to the value variable - var value = - GetEnvironmentVariable( - "ENVIRONMENT_VARIABLE_KEY"); - } -} -``` --# [C++](#tab/cpp) --For more information, see <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv` </a>. 
--```cpp -#include <stdlib.h> --int main() -{ - // Get the named env var, and assign it to the value variable - auto value = - getenv("ENVIRONMENT_VARIABLE_KEY"); -} -``` --# [Java](#tab/java) --For more information, see <a href="https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#getenv(java.lang.String)" target="_blank">`System.getenv` </a>. --```java -import java.lang.*; --public class Program { - public static void main(String[] args) throws Exception { - // Get the named env var, and assign it to the value variable - String value = - System.getenv( - "ENVIRONMENT_VARIABLE_KEY") - } -} -``` --# [Node.js](#tab/node-js) --For more information, see <a href="https://nodejs.org/api/process.html#process_process_env" target="_blank">`process.env` </a>. --```javascript -// Get the named env var, and assign it to the value variable -const value = - process.env.ENVIRONMENT_VARIABLE_KEY; -``` --# [Python](#tab/python) --For more information, see <a href="https://docs.python.org/2/library/os.html#os.environ" target="_blank">`os.environ` </a>. --```python -import os --# Get the named env var, and assign it to the value variable -value = os.environ['ENVIRONMENT_VARIABLE_KEY'] -``` --# [Objective-C](#tab/objective-c) --For more information, see <a href="https://developer.apple.com/documentation/foundation/nsprocessinfo/1417911-environment?language=objc" target="_blank">`environment` </a>. --```objectivec -// Get the named env var, and assign it to the value variable -NSString* value = - [[[NSProcessInfo processInfo]environment]objectForKey:@"ENVIRONMENT_VARIABLE_KEY"]; -``` ----## Customer Lockbox --[Customer Lockbox for Microsoft Azure](../security/fundamentals/customer-lockbox-overview.md) provides an interface for customers to review, and approve or reject customer data access requests. It is used in cases where a Microsoft engineer needs to access customer data during a support request. 
For information on how Customer Lockbox requests are initiated, tracked, and stored for later reviews and audits, see [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md). --Customer Lockbox is available for this service: --* Translator -* Conversational language understanding -* Custom text classification -* Custom named entity recognition -* Orchestration workflow --For the following services, Microsoft engineers will not access any customer data in the E0 tier: --* Language Understanding -* Face -* Content Moderator -* Personalizer --To request the ability to use the E0 SKU, fill out and submit thisΓÇ»[request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using the E0 SKU with LUIS, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. Users won't be able to upgrade from the F0 to the new E0 SKU. --The Speech service doesn't currently support Customer Lockbox. However, customer data can be stored using bring your own storage (BYOS), allowing you to achieve similar data controls to Customer Lockbox. Keep in mind that Speech service data stays and is processed in the region where the Speech resource was created. This applies to any data at rest and data in transit. When using customization features, like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where your BYOS (if used) and Speech service resource reside. --> [!IMPORTANT] -> Microsoft **does not** use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored. 
--## Next steps --* Explore the various [Cognitive Services](./what-are-cognitive-services.md) -* Learn more about [Cognitive Services Virtual Networks](cognitive-services-virtual-networks.md) |
cognitive-services | Security Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-features.md | + + Title: Azure Cognitive Services security ++description: Learn about the security considerations for Cognitive Services usage. +++++ Last updated : 08/09/2022+++++# Azure Cognitive Services security ++Security should be considered a top priority in the development of all applications, and with the growth of artificial intelligence-enabled applications, security is even more important. This article outlines various security features available for Azure Cognitive Services. Each feature addresses a specific liability, so multiple features can be used in the same workflow. ++For a comprehensive list of Azure service security recommendations, see the [Cognitive Services security baseline](/security/benchmark/azure/baselines/cognitive-services-security-baseline?toc=%2Fazure%2Fcognitive-services%2FTOC.json) article. ++## Security features ++|Feature | Description | +|:|:| +| [Transport Layer Security (TLS)](/dotnet/framework/network-programming/tls) | All of the Cognitive Services endpoints exposed over HTTP enforce the TLS 1.2 protocol. With an enforced security protocol, consumers attempting to call a Cognitive Services endpoint should follow these guidelines: </br>- The client operating system (OS) needs to support TLS 1.2.</br>- The language (and platform) used to make the HTTP call needs to specify TLS 1.2 as part of the request. Depending on the language and platform, specifying TLS is done either implicitly or explicitly.</br>- For .NET users, consider the [Transport Layer Security best practices](/dotnet/framework/network-programming/tls). | +| [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity.
An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Cognitive Services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use managed roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Cognitive Services](/azure/cognitive-services/authentication). | +| [Environment variables](cognitive-services-environment-variables.md) | Environment variables are name-value pairs that are stored within a specific development environment. You can store your credentials in this way as a more secure alternative to using hardcoded values in your code. However, if your environment is compromised, the environment variables are compromised as well, so this is not the most secure approach.</br></br> For instructions on how to use environment variables in your code, see the [Environment variables guide](cognitive-services-environment-variables.md). | +| [Customer-managed keys (CMK)](./encryption/cognitive-services-encryption-keys-portal.md) | This feature is for services that store customer data at rest (longer than 48 hours). While this data is already double-encrypted on Azure servers, users can get extra security by adding another layer of encryption, with keys they manage themselves. You can link your service to Azure Key Vault and manage your data encryption keys there. </br></br>You need special approval to get the E0 SKU for your service, which enables CMK. Within 3-5 business days after you submit the [request form](https://aka.ms/cogsvc-cmk), you'll get an update on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available.
Once you're approved for using the E0 SKU, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. You won't be able to upgrade from F0 to the new E0 SKU. </br></br>Only some services can use CMK; look for your service on the [Customer-managed keys](./encryption/cognitive-services-encryption-keys-portal.md) page.| +| [Virtual networks](./cognitive-services-virtual-networks.md) | Virtual networks allow you to specify which endpoints can make API calls to your resource. The Azure service will reject API calls from devices outside of your network. You can set a formula-based definition of the allowed network, or you can define an exhaustive list of endpoints to allow. This is another layer of security that can be used in combination with others. | +| [Data loss prevention](./cognitive-services-data-loss-prevention.md) | The data loss prevention feature lets an administrator decide what types of URIs their Azure resource can take as inputs (for those API calls that take URIs as input). This can be done to prevent the possible exfiltration of sensitive company data: If a company stores sensitive information (such as a customer's private data) in URL parameters, a bad actor inside that company could submit the sensitive URLs to an Azure service, which surfaces that data outside the company. Data loss prevention lets you configure the service to reject certain URI forms on arrival.| +| [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) |The Customer Lockbox feature provides an interface for customers to review and approve or reject data access requests. It's used in cases where a Microsoft engineer needs to access customer data during a support request. 
For information on how Customer Lockbox requests are initiated, tracked, and stored for later reviews and audits, see the [Customer Lockbox guide](../security/fundamentals/customer-lockbox-overview.md).</br></br>Customer Lockbox is available for the following +| [Bring your own storage (BYOS)](/azure/cognitive-services/speech-service/speech-encryption-of-data-at-rest)| The Speech service doesn't currently support Customer Lockbox. However, you can arrange for your service-specific data to be stored in your own storage resource using bring-your-own-storage (BYOS). BYOS allows you to achieve similar data controls to Customer Lockbox. Keep in mind that Speech service data stays and is processed in the Azure region where the Speech resource was created. This applies to any data at rest and data in transit. For customization features like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where the Speech service resource and BYOS resource (if used) reside. </br></br>To use BYOS with Speech, follow the [Speech encryption of data at rest](/azure/cognitive-services/speech-service/speech-encryption-of-data-at-rest) guide.</br></br> Microsoft does not use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored by Speech. | ++## Next steps ++* Explore [Cognitive Services](./what-are-cognitive-services.md) and choose a service to get started. |
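The TLS row in the table above notes that, depending on the language and platform, specifying TLS 1.2 is done either implicitly or explicitly. As a minimal illustrative sketch (an addition for clarity, not part of the article's guidance), here is how a Python client can explicitly refuse anything older than TLS 1.2 using only the standard library; the endpoint URL and key are placeholders:

```python
import ssl
import urllib.request

# Build an SSL context that refuses anything older than TLS 1.2,
# matching the protocol version the Cognitive Services endpoints enforce.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# Placeholder endpoint and key; substitute your own resource's values.
# request = urllib.request.Request(
#     "https://<your-resource>.cognitiveservices.azure.com/",
#     headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
# )
# with urllib.request.urlopen(request, context=context) as response:
#     print(response.status)
```

Most modern runtimes already negotiate TLS 1.2 or later implicitly, so pinning the minimum version like this mainly matters on older platforms where the default could fall back to a weaker protocol.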
communication-services | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md | -# Capabilities for Teams external users +# Teams meeting capabilities for Teams external users -In this article, you will learn which capabilities are supported for Teams external users using Azure Communication Services SDKs. +In this article, you will learn which capabilities are supported for Teams external users using Azure Communication Services SDKs in Teams meetings. You can find per platform availability in [voice and video calling capabilities](../../voice-video-calling/calling-sdk-features.md). -## Client capabilities -The following table shows supported client-side capabilities available in Azure Communication Services SDKs. You can find per platform availability in [voice and video calling capabilities](../../voice-video-calling/calling-sdk-features.md). -| Category | Capability | Supported | -| | | | -|Chat | Send and receive chat messages | ✔️ | -| | Send and receive Giphy | ❌ | -| | Send messages with high priority | ❌ | -| | Recieve messages with high priority | ✔️ | -| | Send and receive Loop components | ❌ | -| | Send and receive Emojis | ❌ | -| | Send and receive Stickers | ❌ | -| | Send and receive Stickers | ❌ | -| | Send and receive Teams messaging extensions | ❌ | -| | Use typing indicators | ✔️ | -| | Read receipt | ❌ | -| | File sharing | ❌ | -| | Reply to chat message | ❌ | -| | React to chat message | ❌ | -|Calling - core | Audio send and receive | ✔️ | -| | Send and receive video | ✔️ | -| | Share screen and see shared screen | ✔️ | -| | Manage Teams convenient recording | ❌ | -| | Manage Teams transcription | ❌ | -| | Manage breakout rooms | ❌ | -| | Participation in breakout rooms | ❌ | -| | Leave meeting | ✔️ | -| | End meeting | ❌ | -| | Change meeting options | ❌ | -| | Lock meeting | ❌ | -| Calling - participants| See roster | ✔️ | -| | Add and remove meeting 
participants | ❌ | -| | Dial out to phone number | ❌ | -| | Disable mic or camera of others | ❌ | -| | Make a participant and attendee or presenter | ❌ | -| | Admit or reject participants in the lobby | ❌ | -| Calling - engagement | Raise and lower hand | ❌ | -| | See raised and lowered hand | ❌ | -| | See and set reactions | ❌ | -| Calling - video streams | Send and receive video | ✔️ | -| | See together mode video stream | ❌ | -| | See Large gallery view | ❌ | -| | See Video stream from Teams media bot | ❌ | -| | See adjusted content from Camera | ❌ | -| | Set and unset spotlight | ❌ | -| | Apply background effects | ❌ | -| Calling - integrations | Control Teams third-party applications | ❌ | -| | See PowerPoint Live stream | ❌ | -| | See Whiteboard stream | ❌ | -| | Interact with a poll | ❌ | -| | Interact with a Q&A | ❌ | -| | Interact with a OneNote | ❌ | -| | Manage SpeakerCoach | ❌ | -| Accessibility | Receive closed captions | ❌ | -| | Communication access real-time translation (CART) | ❌ | -| | Language interpretation | ❌ | +| Group of features | Capability | JavaScript | +| -- | - | - | +| Core Capabilities | Join Teams meeting | ✔️ | +| | Leave meeting | ✔️ | +| | End meeting for everyone | ✔️ | +| | Change meeting options | ❌ | +| | Lock & unlock meeting | ❌ | +| | Prevent joining locked meeting | ✔️ | +| | Honor assigned Teams meeting role | ✔️ | +| Chat | Send and receive chat messages | ✔️ | +| | Send and receive Giphy | ❌ | +| | Send messages with high priority | ❌ | +| | Receive messages with high priority | ✔️ | +| | Send and receive Loop components | ❌ | +| | Send and receive Emojis | ❌ | +| | Send and receive Stickers | ❌ | +| | Send and receive Teams messaging extensions | ❌ | +| | Use typing indicators | ✔️ | +| | Read receipt | ❌ | +| | File sharing | ❌ | +| | Reply to chat message | ❌ | +| | React to chat message | ❌ | +| Mid call control | Turn your video on/off | ✔️ | +| | Mute/Unmute mic | ✔️ | +| | 
Switch between cameras | ✔️ | +| | Local hold/un-hold | ✔️ | +| | Indicator of dominant speakers in the call | ✔️ | +| | Choose speaker device for calls | ✔️ | +| | Choose microphone for calls | ✔️ | +| | Indicator of participant's state<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ | +| | Indicator of call's state <br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | +| | Indicate participants being muted | ✔️ | +| | Indicate participants' reasons for terminating the call | ✔️ | +| Screen sharing | Share the entire screen from within the application | ✔️ | +| | Share a specific application (from the list of running applications) | ✔️ | +| | Share a web browser tab from the list of open tabs | ✔️ | +| | Share content in "content-only" mode | ✔️ | +| | Receive video stream with content for "content-only" screen sharing experience | ✔️ | +| | Share content in "standout" mode | ❌ | +| | Receive video stream with content for a "standout" screen sharing experience | ❌ | +| | Share content in "side-by-side" mode | ❌ | +| | Receive video stream with content for "side-by-side" screen sharing experience | ❌ | +| | Share content in "reporter" mode | ❌ | +| | Receive video stream with content for "reporter" screen sharing experience | ❌ | +| Roster | List participants | ✔️ | +| | Add an Azure Communication Services user | ❌ | +| | Add a Teams user | ✔️ | +| | Adding Teams user honors Teams external access configuration | ✔️ | +| | Adding Teams user honors Teams guest access configuration | ✔️ | +| | Add a phone number | ✔️ | +| | Remove a participant | ✔️ | +| | Manage breakout rooms | ❌ | +| | Participation in breakout rooms | ❌ | +| | Admit participants in the lobby into the Teams meeting | ❌ | +| | Be admitted from the lobby into the Teams meeting | ✔️ | +| | Promote participant to a presenter or attendee | ❌ | +| | Be promoted to presenter or attendee | ✔️ | +| | Disable or enable mic 
for attendees | ❌ | +| | Honor disabling or enabling a mic as an attendee | ✔️ | +| | Disable or enable camera for attendees | ❌ | +| | Honor disabling or enabling a camera as an attendee | ✔️ | +| | Adding Teams user honors information barriers | ✔️ | +| Device Management | Ask for permission to use audio and/or video | ✔️ | +| | Get camera list | ✔️ | +| | Set camera | ✔️ | +| | Get selected camera | ✔️ | +| | Get microphone list | ✔️ | +| | Set microphone | ✔️ | +| | Get selected microphone | ✔️ | +| | Get speakers list | ✔️ | +| | Set speaker | ✔️ | +| | Get selected speaker | ✔️ | +| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | +| | Set / update scaling mode | ✔️ | +| | Render remote video stream | ✔️ | +| | See together mode video stream | ❌ | +| | See Large gallery view | ❌ | +| | Receive video stream from Teams media bot | ❌ | +| | Receive adjusted stream for "content from Camera" | ❌ | +| | Add and remove video stream from spotlight | ❌ | +| | Allow video stream to be selected for spotlight | ❌ | +| | Apply Teams background effects | ❌ | +| Recording & transcription | Manage Teams convenient recording | ❌ | +| | Receive information of call being recorded | ✔️ | +| | Manage Teams transcription | ❌ | +| | Receive information of call being transcribed | ✔️ | +| | Manage Teams closed captions | ❌ | +| | Support for compliance recording | ✔️ | +| | [Azure Communication Services recording](../../voice-video-calling/call-recording.md) | ❌ | +| Engagement | Raise and lower hand | ❌ | +| | Indicate other participants' raised and lowered hands | ❌ | +| | Trigger reactions | ❌ | +| | Indicate other participants' reactions | ❌ | +| Integrations | Control Teams third-party applications | ❌ | +| | Receive PowerPoint Live stream | ❌ | +| | Receive Whiteboard stream | ❌ | +| | Interact with a poll | ❌ | +| | Interact with a Q&A | ❌ | +| | Interact with a OneNote | ❌ | +| | Manage SpeakerCoach | ❌ | +| | [Include participant 
in Teams meeting attendance report](https://support.microsoft.com/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ✔️ | +| Accessibility | Receive closed captions | ❌ | +| | Communication access real-time translation (CART) | ❌ | +| | Language interpretation | ❌ | +| Advanced call routing | Does meeting dial-out honor forwarding rules | ✔️ | +| | Read and configure call forwarding rules | ❌ | +| | Does meeting dial-out honor simultaneous ringing | ✔️ | +| | Read and configure simultaneous ringing | ❌ | +| | Does meeting dial-out honor shared line configuration | ✔️ | +| | Dial-out from meeting on behalf of the Teams user | ❌ | +| | Read and configure shared line configuration | ❌ | +| Teams meeting policy | Honor setting "Let anonymous people join a meeting" | ✔️ | +| | Honor setting "Mode for IP audio" | ❌ | +| | Honor setting "Mode for IP video" | ❌ | +| | Honor setting "IP video" | ❌ | +| | Honor setting "Local broadcasting" | ❌ | +| | Honor setting "Media bit rate (Kbs)" | ❌ | +| | Honor setting "Network configuration lookup" | ❌ | +| | Honor setting "Transcription" | No API available | +| | Honor setting "Cloud recording" | No API available | +| | Honor setting "Meetings automatically expire" | ✔️ | +| | Honor setting "Default expiration time" | ✔️ | +| | Honor setting "Store recordings outside of your country or region" | ✔️ | +| | Honor setting "Screen sharing mode" | No API available | +| | Honor setting "Participants can give or request control" | No API available | +| | Honor setting "External participants can give or request control" | No API available | +| | Honor setting "PowerPoint Live" | No API available | +| | Honor setting "Whiteboard" | No API available | +| | Honor setting "Shared notes" | No API available | +| | Honor setting "Select video filters" | ❌ | +| | Honor setting "Let anonymous people start a meeting" | ✔️ | +| | Honor setting "Who can present in meetings" | ❌ | +| | Honor setting 
"Automatically admit people" | ✔️ | +| | Honor setting "Dial-in users can bypass the lobby" | ✔️ | +| | Honor setting "Meet now in private meetings" | ✔️ | +| | Honor setting "Live captions" | No API available | +| | Honor setting "Chat in meetings" | ✔️ | +| | Honor setting "Teams Q&A" | No API available | +| | Honor setting "Meeting reactions" | No API available | +| DevOps | [Azure Metrics](../../metrics.md) | ✔️ | +| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ | +| | [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ | +| | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | +| | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | +| | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ | When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting. 
-## Server capabilities --The following table shows supported server-side capabilities available in Azure Communication --|Capability | Supported | -| | | -| [Manage ACS call recording](../../voice-video-calling/call-recording.md) | ❌ | -| [Azure Metrics](../../metrics.md) | ✔️ | -| [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ | -| [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ | -| [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | ---## Teams capabilities --The following table shows supported Teams capabilities: --|Capability | Supported | -| | | -| [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | -| [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ | -| [Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ✔️ | ## Next steps |
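The new capability table marks joining a Teams meeting as supported from JavaScript. The following sketch (an illustration under stated assumptions, not a definitive implementation) shows one way a Teams external user could join a meeting with the `@azure/communication-calling` and `@azure/communication-common` packages; the access token and meeting link are placeholders, and the SDK modules are loaded lazily inside the function so the pure locator helper works even where the packages aren't installed:

```javascript
// Sketch: joining a Teams meeting as a Teams external user with the
// Azure Communication Services Calling SDK. Token and meeting link are
// placeholders supplied by the caller.

// Pure helper: wrap a Teams meeting link in the locator shape that
// callAgent.join() accepts.
function buildMeetingLocator(meetingLink) {
  if (typeof meetingLink !== "string" || meetingLink.length === 0) {
    throw new Error("A Teams meeting link is required");
  }
  return { meetingLink };
}

async function joinTeamsMeeting(userAccessToken, meetingLink) {
  // Loaded lazily so the helper above stays usable without the SDK.
  const { CallClient } = await import("@azure/communication-calling");
  const { AzureCommunicationTokenCredential } = await import("@azure/communication-common");

  const callClient = new CallClient();
  const credential = new AzureCommunicationTokenCredential(userAccessToken);
  const callAgent = await callClient.createCallAgent(credential, {
    displayName: "Guest user", // shown to other meeting participants
  });

  return callAgent.join(buildMeetingLocator(meetingLink));
}
```

Depending on the "Automatically admit people" meeting option noted in the table, joining may first place the external user in the lobby until an authenticated participant admits them.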
communication-services | Teams User Calling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md | Title: Azure Communication Services Teams identity overview + Title: Calling capabilities for Teams users -description: Provides an overview of the support for Teams identity in Azure Communication Services Calling SDK. +description: Provides an overview of supported calling capabilities for Teams users in Azure Communication Services Calling SDK. -# Support for Teams identity in Calling SDK +# Calling capabilities supported for Teams users in Calling SDK The Azure Communication Services Calling SDK for JavaScript enables Teams user devices to drive voice and video communication experiences. This page provides detailed descriptions of Calling features, including platform and browser support information. To get started right away, check out [Calling quickstarts](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md). Key features of the Calling SDK: - **Teams Meetings** - The Calling SDK can [join Teams meetings](../../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with the Teams voice and video data plane. - **Notifications** - The Calling SDK provides APIs that allow clients to be notified of an incoming call. In situations where your app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform users of an incoming call. -## Detailed Azure Communication Services capabilities +## Calling capabilities -The following list presents the set of features that are currently available in the Azure Communication Services Calling SDK for JavaScript. +The following list presents the set of features that are currently available in the Azure Communication Services Calling SDK for JavaScript when participating in 1:1 voice-over-IP (VoIP) or group VoIP calls. 
| Group of features | Capability | JavaScript | | -- | - | - | -| Core Capabilities | Place a one-to-one call between two users | ✔️ | -| | Place a group call with more than two users (up to 350 users) | ✔️ | -| | Promote a one-to-one call with two users into a group call with more than two users | ✔️ | -| | Join a group call after it has started | ✔️ | +| Core Capabilities | Place a one-to-one call to Teams user | ✔️ | +| | Place a one-to-one call to Azure Communication Services user | ❌ | +| | Place a group call with more than two Teams users (up to 350 users) | ✔️ | +| | Promote a one-to-one call with two Teams users into a group call with more than two Teams users | ✔️ | +| | Join a group call after it has started | ❌ | | | Invite another VoIP participant to join an ongoing group call | ✔️ |-| | Join Teams meeting | ✔️ | +| | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | +| | Placing a call honors Teams external access configuration | ✔️ | +| | Placing a call honors Teams guest access configuration | ✔️ | | Mid call control | Turn your video on/off | ✔️ | | | Mute/Unmute mic | ✔️ | | | Switch between cameras | ✔️ | | | Local hold/un-hold | ✔️ |-| | Active speaker | ✔️ | -| | Choose speaker for calls | ✔️ | +| | Indicator of dominant speakers in the call | ✔️ | +| | Choose speaker device for calls | ✔️ | | | Choose microphone for calls | ✔️ |-| | Show state of a participant<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ | -| | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | -| | Show if a participant is muted | ✔️ | -| | Show the reason why a participant left a call | ✔️ | -| | Admit participant in the lobby into the Teams meeting | ❌ | +| | Indicator of participant's state<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ | +| | Indicator of call's state 
<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | +| | Indicate participants being muted | ✔️ | +| | Indicate participants' reasons for terminating the call | ✔️ | | Screen sharing | Share the entire screen from within the application | ✔️ | | | Share a specific application (from the list of running applications) | ✔️ | | | Share a web browser tab from the list of open tabs | ✔️ |-| | Participant can view remote screen share | ✔️ | +| | Share content in "content-only" mode | ✔️ | +| | Receive video stream with content for "content-only" screen sharing experience | ✔️ | +| | Share content in "standout" mode | ❌ | +| | Receive video stream with content for a "standout" screen sharing experience | ❌ | +| | Share content in "side-by-side" mode | ❌ | +| | Receive video stream with content for "side-by-side" screen sharing experience | ❌ | +| | Share content in "reporter" mode | ❌ | +| | Receive video stream with content for "reporter" screen sharing experience | ❌ | | Roster | List participants | ✔️ |+| | Add an Azure Communication Services user | ❌ | +| | Add a Teams user | ✔️ | +| | Adding Teams users honors Teams external access configuration | ✔️ | +| | Adding Teams user honors Teams guest access configuration | ✔️ | +| | Add a phone number | ✔️ | | | Remove a participant | ✔️ |-| PSTN | Place a one-to-one call with a PSTN participant | ✔️ | -| | Place a group call with PSTN participants | ✔️ | -| | Promote a one-to-one call with a PSTN participant into a group call | ✔️ | -| | Dial-out from a group call as a PSTN participant | ✔️ | -| | Support for early media | ❌ | -| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ | -| Device Management | Ask for permission to use audio and/or video | ✔️ | +| | Adding Teams users honors information barriers | ✔️ | +| Device Management | Ask for permission to use audio and/or video | ✔️ | | | Get camera list | ✔️ | 
| | Set camera | ✔️ | | | Get selected camera | ✔️ | The following list presents the set of features that are currently available in | Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ | | | Set / update scaling mode | ✔️ | | | Render remote video stream | ✔️ |--Support for streaming, timeouts, platforms, and browsers is shared with [Communication Services calling SDK overview](./../voice-video-calling/calling-sdk-features.md). --## Detailed Teams capabilities --The following list presents the set of Teams capabilities, which are currently available in the Azure Communication Services Calling SDK for JavaScript. --|Group of features | Teams capability | JS | -|-|--|| -| Core Capabilities | Placing a call honors Teams external access configuration | ✔️ | -| | Placing a call honors Teams guest access configuration | ✔️ | -| | Joining Teams meeting honors configuration for automatic people admit in the Lobby | ✔️ | -| | Actions available in the Teams meeting are defined by assigned role | ✔️ | -| Mid call control | Receive forwarded call | ✔️ | -| | Receive simultaneous ringing | ✔️ | -| | Play music on hold | ❌ | -| | Park a call | ❌ | -| | Transfer a call to a person | ✔️ | -| | Transfer a call to a call | ✔️ | -| | Transfer a call to Voicemail | ❌ | -| | Merge ongoing calls | ❌ | -| | Place a call on behalf of the user | ❌ | -| | Start call recording | ❌ | -| | Start call transcription | ❌ | -| | Start live captions | ❌ | -| | Receive information of call being recorded | ✔️ | -| PSTN | Make an Emergency call | ✔️ | -| | Place a call honors location-based routing | ❌ | -| | Support for survivable branch appliance | ❌ | -| Phone system | Receive a call from Teams auto attendant | ✔️ | -| | Transfer a call to Teams auto attendant | ✔️ | -| | Receive a call from Teams call queue (only conference mode) | ✔️ | -| | Transfer a call from Teams call queue (only conference mode) | ✔️ | -| Compliance | Place a call honors information 
barriers | ✔️ | -| | Support for compliance recording | ✔️ | -| Meeting | [Include participant in Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ❌ | ---## Teams meeting options --Teams meeting organizers can configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for Teams users: --|Option name|Description| Supported | -| | | | -| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | Teams user can bypass the lobby, if Teams meeting organizer set value to include "people in my organization" for single tenant meetings and "people in trusted organizations" for cross-tenant meetings. Otherwise, Teams users have to wait in the lobby until an authenticated user admits them.| ✔️ | -| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable | -| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ | -| [Choose co-organizers](/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Teams user can be selected as co-organizer. It affects the availability of actions in Teams meetings. | ✔️ | -| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ | -|[Manage what attendees see](/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. 
|❌| -|[Allow mic for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local audio |✔️| -|[Allow camera for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local video |✔️| -|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️| -|Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️| -|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Sevices don't support reactions. |❌| -|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable| -|[Provide CART Captions](/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. 
As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌| +| | See together mode video stream | ❌ | +| | See Large gallery view | ❌ | +| | Receive video stream from Teams media bot | ❌ | +| | Receive adjusted stream for "content from Camera" | ❌ | +| | Add and remove video stream from spotlight | ❌ | +| | Allow video stream to be selected for spotlight | ❌ | +| | Apply Teams background effects | ❌ | +| Recording & transcription | Manage Teams convenient recording | ❌ | +| | Receive information of call being recorded | ✔️ | +| | Manage Teams transcription | ❌ | +| | Receive information of call being transcribed | ✔️ | +| | Manage Teams closed captions | ❌ | +| | Support for compliance recording | ✔️ | +| Engagement | Raise and lower hand | ❌ | +| | Indicate other participants' raised and lowered hands | ❌ | +| | Trigger reactions | ❌ | +| | Indicate other participants' reactions | ❌ | +| Integrations | Control Teams third-party applications | ❌ | +| | Receive PowerPoint Live stream | ❌ | +| | Receive Whiteboard stream | ❌ | +| | Interact with a poll | ❌ | +| | Interact with a Q&A | ❌ | +| Accessibility | Receive closed captions | ❌ | +| Advanced call routing | Does start a call and add user operations honor forwarding rules | ✔️ | +| | Read and configure call forwarding rules | ❌ | +| | Does start a call and add user operations honor simultaneous ringing | ✔️ | +| | Read and configure simultaneous ringing | ❌ | +| | Placing participant on hold plays music on hold | ❌ | +| | Being placed by Teams user on Teams client on hold plays music on hold | ✔️ | +| | Park a call | ❌ | +| | Be parked | ✔️ | +| | Transfer a call to a user | ✔️ | +| | Be transferred to a user or call | ✔️ | +| | Transfer a call to a call | ✔️ | +| | Transfer a call to Voicemail | ❌ | +| | Be transferred to voicemail | ✔️ | +| | Merge ongoing calls | ❌ | +| | Does start a call and add 
user operations honor shared line configuration | ✔️ | +| | Start a call on behalf of the Teams user | ❌ | +| | Read and configure shared line configuration | ❌ | +| | Receive a call from Teams auto attendant | ✔️ | +| | Transfer a call to Teams auto attendant | ✔️ | +| | Receive a call from Teams call queue | ✔️ | +| | Transfer a call from Teams call queue | ✔️ | +| Teams calling policy | Honor "Make private calls" | ✔️ | +| | Honor setting "Cloud recording for calling" | No API available | +| | Honor setting "Transcription" | No API available | +| | Honor setting "Call forwarding and simultaneous ringing to people in your organization" | ✔️ | +| | Honor setting "Call forwarding and simultaneous ringing to external phone numbers" | ✔️ | +| | Honor setting "Voicemail is available for routing inbound calls" | ✔️ | +| | Honor setting "Inbound calls can be routed to call groups" | ✔️ | +| | Honor setting "Delegation for inbound and outbound calls" | ✔️ | +| | Honor setting "Prevent toll bypass and send calls through the PSTN" | ❌ | +| | Honor setting "Music on hold" | ❌ | +| | Honor setting "Busy on busy when in a call" | ❌ | +| | Honor setting "Web PSTN calling" | ❌ | +| | Honor setting "Real-time captions in Teams calls" | No API available | +| | Honor setting "Automatically answer incoming meeting invites" | ❌ | +| | Honor setting "Spam filtering" | ✔️ | +| | Honor setting "SIP devices can be used for calls" | ✔️ | +| DevOps | [Azure Metrics](../metrics.md) | ✔️ | +| | [Azure Monitor](../logging-and-diagnostics.md) | ✔️ | +| | [Azure Communication Services Insights](../analytics/insights.md) | ✔️ | +| | [Azure Communication Services Voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) | ❌ | +| | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | +| | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ | ++Support 
for streaming, timeouts, platforms, and browsers is shared with [Communication Services calling SDK overview](../voice-video-calling/calling-sdk-features.md). ## Next steps |
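The updated table lists "Place a one-to-one call to Teams user" as supported for JavaScript. As a hedged sketch of that flow: `createTeamsCallAgent` and the `microsoftTeamsUserId` identifier shape below follow the SDK's documented Teams-identity surface but should be verified against the current `@azure/communication-calling` API reference; the Teams user access token and callee object ID are placeholders:

```javascript
// Sketch: placing a 1:1 VoIP call to a Teams user using a Teams identity
// with the Azure Communication Services Calling SDK. All credentials and
// IDs are placeholders supplied by the caller.

// Pure helper: the identifier shape used to address a Teams user.
function teamsUserIdentifier(microsoftTeamsUserId) {
  if (typeof microsoftTeamsUserId !== "string" || microsoftTeamsUserId.length === 0) {
    throw new Error("A Teams user object ID is required");
  }
  return { microsoftTeamsUserId };
}

async function callTeamsUser(teamsUserAccessToken, calleeObjectId) {
  // Loaded lazily so the helper above stays usable without the SDK.
  const { CallClient } = await import("@azure/communication-calling");
  const { AzureCommunicationTokenCredential } = await import("@azure/communication-common");

  const callClient = new CallClient();
  const credential = new AzureCommunicationTokenCredential(teamsUserAccessToken);

  // A Teams-identity call agent, as opposed to createCallAgent for
  // Azure Communication Services identities.
  const teamsCallAgent = await callClient.createTeamsCallAgent(credential);

  return teamsCallAgent.startCall([teamsUserIdentifier(calleeObjectId)]);
}
```

Per the table, the started call honors the target Teams user's forwarding and simultaneous-ringing configuration, and calls to Azure Communication Services identities are not supported from a Teams-identity agent.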
communication-services | Meeting Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md | + + Title: Teams meeting capabilities for Teams users ++description: Provides an overview of supported Teams meeting capabilities for Teams users in Azure Communication Services Calling SDK. +++++ Last updated : 12/01/2021+++++# Teams meeting support for Teams users in Calling SDK ++The Azure Communication Services Calling SDK for JavaScript enables Teams user devices to drive voice and video communication experiences. This page provides detailed descriptions of Teams meeting features. To get started right away, check out [Calling quickstarts](../../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md). +++The following capabilities are supported when a Teams user participates in a Teams meeting: ++| Group of features | Capability | JavaScript | +| -- | - | - | +| Core Capabilities | Join Teams meeting | ✔️ | +| | Leave meeting | ✔️ | +| | End meeting for everyone | ✔️ | +| | Change meeting options | ❌ | +| | Lock & unlock meeting | ❌ | +| | Prevent joining locked meeting | ✔️ | +| | Honor assigned Teams meeting role | ✔️ | +| Mid call control | Turn your video on/off | ✔️ | +| | Mute/Unmute mic | ✔️ | +| | Switch between cameras | ✔️ | +| | Local hold/un-hold | ✔️ | +| | Indicator of dominant speakers in the call | ✔️ | +| | Choose speaker device for calls | ✔️ | +| | Choose microphone for calls | ✔️ | +| | Indicator of participant's state<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ | +| | Indicator of call's state <br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | +| | Indicate participants being muted | ✔️ | +| | Indicate participants' reasons for terminating the call | ✔️ | +| Screen sharing | Share the entire screen from within the 
application | ✔️ | +| | Share a specific application (from the list of running applications) | ✔️ | +| | Share a web browser tab from the list of open tabs | ✔️ | +| | Share content in "content-only" mode | ✔️ | +| | Receive video stream with content for "content-only" screen sharing experience | ✔️ | +| | Share content in "standout" mode | ❌ | +| | Receive video stream with content for a "standout" screen sharing experience | ❌ | +| | Share content in "side-by-side" mode | ❌ | +| | Receive video stream with content for "side-by-side" screen sharing experience | ❌ | +| | Share content in "reporter" mode | ❌ | +| | Receive video stream with content for "reporter" screen sharing experience | ❌ | +| Roster | List participants | ✔️ | +| | Add an Azure Communication Services user | ❌ | +| | Add a Teams user | ✔️ | +| | Adding Teams user honors Teams external access configuration | ✔️ | +| | Adding Teams user honors Teams guest access configuration | ✔️ | +| | Add a phone number | ✔️ | +| | Remove a participant | ✔️ | +| | Manage breakout rooms | ❌ | +| | Participation in breakout rooms | ❌ | +| | Admit participants in the lobby into the Teams meeting | ❌ | +| | Be admitted from the lobby into the Teams meeting | ✔️ | +| | Promote participant to a presenter or attendee | ❌ | +| | Be promoted to presenter or attendee | ✔️ | +| | Disable or enable mic for attendees | ❌ | +| | Honor disabling or enabling a mic as an attendee | ✔️ | +| | Disable or enable camera for attendees | ❌ | +| | Honor disabling or enabling a camera as an attendee | ✔️ | +| | Adding Teams user honors information barriers | ✔️ | +| Device Management | Ask for permission to use audio and/or video | ✔️ | +| | Get camera list | ✔️ | +| | Set camera | ✔️ | +| | Get selected camera | ✔️ | +| | Get microphone list | ✔️ | +| | Set microphone | ✔️ | +| | Get selected microphone | ✔️ | +| | Get speakers list | ✔️ | +| | Set speaker | ✔️ | +| | Get selected speaker | ✔️ | +| Video Rendering | Render single video 
in many places (local camera or remote stream) | ✔️ | +| | Set / update scaling mode | ✔️ | +| | Render remote video stream | ✔️ | +| | See together mode video stream | ❌ | +| | See Large gallery view | ❌ | +| | Receive video stream from Teams media bot | ❌ | +| | Receive adjusted stream for "content from Camera" | ❌ | +| | Add and remove video stream from spotlight | ❌ | +| | Allow video stream to be selected for spotlight | ❌ | +| | Apply Teams background effects | ❌ | +| Recording & transcription | Manage Teams convenient recording | ❌ | +| | Receive information of call being recorded | ✔️ | +| | Manage Teams transcription | ❌ | +| | Receive information of call being transcribed | ✔️ | +| | Manage Teams closed captions | ❌ | +| | Support for compliance recording | ✔️ | +| | [Azure Communication Services recording](../../voice-video-calling/call-recording.md) | ❌ | +| Engagement | Raise and lower hand | ❌ | +| | Indicate other participants' raised and lowered hands | ❌ | +| | Trigger reactions | ❌ | +| | Indicate other participants' reactions | ❌ | +| Integrations | Control Teams third-party applications | ❌ | +| | Receive PowerPoint Live stream | ❌ | +| | Receive Whiteboard stream | ❌ | +| | Interact with a poll | ❌ | +| | Interact with a Q&A | ❌ | +| | Interact with a OneNote | ❌ | +| | Manage SpeakerCoach | ❌ | +| | [Include participant in Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ❌ +| Accessibility | Receive closed captions | ❌ | +| | Communication access real-time translation (CART) | ❌ | +| | Language interpretation | ❌ | +| Advanced call routing | Does meeting dial-out honor forwarding rules | ✔️ | +| | Read and configure call forwarding rules | ❌ | +| | Does meeting dial-out honor simultaneous ringing | ✔️ | +| | Read and configure simultaneous ringing | ❌ | +| | Does meeting dial-out honor shared line configuration | ✔️ | +| | Dial-out from meeting on behalf of 
the Teams user | ❌ | +| | Read and configure shared line configuration | ❌ | +| Teams meeting policy | Honor setting "Let anonymous people join a meeting" | ✔️ | +| | Honor setting "Mode for IP audio" | ❌ | +| | Honor setting "Mode for IP video" | ❌ | +| | Honor setting "IP video" | ❌ | +| | Honor setting "Local broadcasting" | ❌ | +| | Honor setting "Media bit rate (Kbs)" | ❌ | +| | Honor setting "Network configuration lookup" | ❌ | +| | Honor setting "Transcription" | No API available | +| | Honor setting "Cloud recording" | No API available | +| | Honor setting "Meetings automatically expire" | ✔️ | +| | Honor setting "Default expiration time" | ✔️ | +| | Honor setting "Store recordings outside of your country or region" | ✔️ | +| | Honor setting "Screen sharing mode" | No API available | +| | Honor setting "Participants can give or request control" | No API available | +| | Honor setting "External participants can give or request control" | No API available | +| | Honor setting "PowerPoint Live" | No API available | +| | Honor setting "Whiteboard" | No API available | +| | Honor setting "Shared notes" | No API available | +| | Honor setting "Select video filters" | ❌ | +| | Honor setting "Let anonymous people start a meeting" | ✔️ | +| | Honor setting "Who can present in meetings" | ❌ | +| | Honor setting "Automatically admit people" | ✔️ | +| | Honor setting "Dial-in users can bypass the lobby" | ✔️ | +| | Honor setting "Meet now in private meetings" | ✔️ | +| | Honor setting "Live captions" | No API available | +| | Honor setting "Chat in meetings" | ✔️ | +| | Honor setting "Teams Q&A" | No API available | +| | Honor setting "Meeting reactions" | No API available | +| DevOps | [Azure Metrics](../../metrics.md) | ✔️ | +| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ | +| | [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ | +| | [Azure Communication Services Voice and video calling 
events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | +| | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | +| | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ | +++## Teams meeting options ++Teams meeting organizers can configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for Teams users: ++|Option name|Description| Supported | +| | | | +| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | Teams user can bypass the lobby, if Teams meeting organizer set value to include "people in my organization" for single tenant meetings and "people in trusted organizations" for cross-tenant meetings. Otherwise, Teams users have to wait in the lobby until an authenticated user admits them.| ✔️ | +| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable | +| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ | +| [Choose co-organizers](/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Teams user can be selected as co-organizer. It affects the availability of actions in Teams meetings. | ✔️ | +| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ | +|[Manage what attendees see](/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. 
Azure Communication Services does not receive the spotlight signals. |❌| +|[Allow mic for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local audio |✔️| +|[Allow camera for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local video |✔️| +|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️| +|Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️| +|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services don't support reactions. |❌| +|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable| +|[Provide CART Captions](/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. 
As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌| +++## Next steps ++> [!div class="nextstepaction"] +> [Get started with calling](../../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md) ++For more information, see the following articles: +- Familiarize yourself with general [call flows](../../call-flows.md) +- Learn about [call types](../../voice-video-calling/about-call-types.md) |
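The lobby rule described for the "Automatically admit people" option above can be sketched in code. This is an illustrative model only (not part of the Calling SDK or Microsoft Graph); the setting names and the `Joiner` shape are hypothetical:

```typescript
// Hypothetical model of the lobby-bypass rule for "Automatically admit people":
// Teams users bypass the lobby when the organizer's setting covers people in
// their organization (single tenant) or people in trusted organizations
// (cross-tenant); otherwise they wait until an authenticated user admits them.
type AdmitSetting =
  | "everyone"
  | "peopleInMyOrganization"
  | "peopleInTrustedOrganizations";

interface Joiner {
  sameTenantAsOrganizer: boolean;
  trustedOrganization: boolean;
}

// Returns true when the joining Teams user bypasses the lobby.
function bypassesLobby(setting: AdmitSetting, joiner: Joiner): boolean {
  switch (setting) {
    case "everyone":
      return true;
    case "peopleInMyOrganization":
      return joiner.sameTenantAsOrganizer;
    case "peopleInTrustedOrganizations":
      return joiner.sameTenantAsOrganizer || joiner.trustedOrganization;
  }
}
```

The actual enumeration values exposed by Teams meeting options differ; this only captures the admission logic the table describes.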
communication-services | Phone Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/phone-capabilities.md | + + Title: Phone capabilities supported for Teams users ++description: Provides an overview of phone capabilities supported for Teams users in Azure Communication Services Calling SDK. +++++ Last updated : 12/01/2021+++++# Phone capabilities for Teams user in Calling SDK ++The Azure Communication Services Calling SDK for JavaScript enables Teams user devices to drive voice and video communication experiences. This page provides detailed descriptions of phone calling features. To get started right away, check out [Calling quickstarts](../../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md). ++## Phone capabilities +The following list of capabilities is supported for scenarios where at least one phone number participates in 1:1 or group call: ++| Group of features | Capability | JavaScript | +| -- | - | - | +| Core Capabilities | Place a one-to-one call to the phone number | ✔️ | +| | Place a group call with at least one phone number | ✔️ | +| | Promote a one-to-one call with a phone number into a group call | ✔️ | +| | User can dial into a group call | ❌ | +| | Dial out from a group call to a phone number | ✔️ | +| | Make an Emergency call | ✔️ | +| | Honor Security desk policy for emergency calls | ✔️ | +| | Provide a static emergency location for Teams calling plans in case of emergency calls | ✔️ | +| Connectivity | Teams calling plans | ✔️ | +| | Teams direct routings | ✔️ | +| | Teams operator connect | ✔️ | +| | Azure Communication Services direct offers | ✔️ | +| | Azure Communication Services direct routing | ✔️ | +| Mid call control | Turn your video on/off | ✔️* | +| | Mute/Unmute mic | ✔️ | +| | Switch between cameras | ✔️* | +| | Local hold/un-hold | ✔️ | +| | Indicator of dominant speakers in the call | ✔️ | +| | Choose 
speaker device for calls | ✔️ | +| | Choose microphone for calls | ✔️ | +| | Indicator of participant's state<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ | +| | Indicator of call's state <br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ | +| | Indicate participants being muted | ✔️ | +| | Indicate participants' reasons for terminating the call | ✔️ | +| Screen sharing | Share the entire screen from within the application | ✔️* | +| | Share a specific application (from the list of running applications) | ✔️* | +| | Share a web browser tab from the list of open tabs | ✔️* | +| | Share content in "content-only" mode | ✔️* | +| | Receive video stream with content for "content-only" screen sharing experience | ✔️* | +| | Share content in "standout" mode | ❌ | +| | Receive video stream with content for a "standout" screen sharing experience | ❌ | +| | Share content in "side-by-side" mode | ❌ | +| | Receive video stream with content for "side-by-side" screen sharing experience | ❌ | +| | Share content in "reporter" mode | ❌ | +| | Receive video stream with content for "reporter" screen sharing experience | ❌ | +| Roster | List participants | ✔️ | +| | Add an Azure Communication Services user | ❌ | +| | Add a Teams user | ✔️ | +| | Adding Teams user honors Teams external access configuration | ✔️ | +| | Adding Teams user honors Teams guest access configuration | ✔️ | +| | Add a phone number | ✔️ | +| | Remove a participant | ✔️ | +| | Adding Teams user honors information barriers | ✔️ | +| Device Management | Ask for permission to use audio and/or video | ✔️* | +| | Get camera list | ✔️* | +| | Set camera | ✔️* | +| | Get selected camera | ✔️* | +| | Get microphone list | ✔️ | +| | Set microphone | ✔️ | +| | Get selected microphone | ✔️ | +| | Get speakers list | ✔️ | +| | Set speaker | ✔️ | +| | Get selected speaker | ✔️ | +| Video Rendering | Render single video in many places 
(local camera or remote stream) | ✔️* | +| | Set / update scaling mode | ✔️* | +| | Render remote video stream | ✔️* | +| Recording & transcription | Manage Teams convenient recording | ❌ | +| | Receive information of call being recorded | ✔️ | +| | Manage Teams transcription | ❌ | +| | Receive information of call being transcribed | ✔️ | +| | Support for compliance recording | ✔️ | +| Media | Support for early media | ❌ | +| | Place a phone call honors location-based routing | ❌ | +| | Support for survivable branch appliance | ❌ | +| Accessibility | Receive closed captions | ❌ | +| Advanced call routing | Does start a call and add user operations honor forwarding rules | ✔️ | +| | Read and configure call forwarding rules | ❌ | +| | Does start a call and add user operations honor simultaneous ringing | ✔️ | +| | Read and configure simultaneous ringing | ❌ | +| | Placing participant on hold plays music on hold | ❌ | +| | Being placed by Teams user on Teams client on hold plays music on hold | ✔️ | +| | Park a call | ❌ | +| | Be parked | ✔️ | +| | Transfer a call to a user | ✔️ | +| | Be transferred to a user or call | ✔️ | +| | Transfer a call to a call | ✔️ | +| | Transfer a call to Voicemail | ❌ | +| | Be transferred to voicemail | ✔️ | +| | Merge ongoing calls | ❌ | +| | Does start a call and add user operations honor shared line configuration | ✔️ | +| | Start a call on behalf of the Teams user | ❌ | +| | Read and configure shared line configuration | ❌ | +| | Receive a call from Teams auto attendant | ✔️ | +| | Transfer a call to Teams auto attendant | ✔️ | +| | Receive a call from Teams call queue | ✔️ | +| | Transfer a call from Teams call queue | ✔️ | +| Teams caller ID policies | Block incoming caller ID | ❌ | +| | Override the caller ID policy | ❌ | +| | Calling Party Name | ❌ | +| | Replace the caller ID with | ❌ | +| | Replace the caller ID with this service number | ❌ | +| Teams dial out plan policies | Start a phone call honoring dial plan policy | ❌ | 
+| DevOps | [Azure Metrics](../../metrics.md) | ✔️ | +| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ | +| | [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ | +| | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ | +| | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ | +| | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ | ++Note: Participants who join via a phone number can't see video content. Actions involving video (marked with an asterisk above) therefore don't affect them, but they do apply when VoIP participants are in the call. ++## Next steps ++> [!div class="nextstepaction"] +> [Get started with calling](../../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md) ++For more information, see the following articles: +- Familiarize yourself with general [call flows](../../call-flows.md) +- Learn about [call types](../../voice-video-calling/about-call-types.md) |
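The note above can be made concrete with a small helper. This is an illustrative sketch, not part of the Calling SDK; the `ParticipantKind` type and function names are hypothetical:

```typescript
// Capabilities marked ✔️* in the table involve video, which phone-number
// participants can't see. A video action only takes effect for VoIP
// participants in the call.
type ParticipantKind = "phone" | "voip";

// Returns the participants a video-related action (camera, screen sharing,
// remote rendering) can actually reach.
function videoAudience(participants: ParticipantKind[]): ParticipantKind[] {
  return participants.filter((p) => p === "voip");
}

// A video action has an effect only if at least one VoIP participant can see it.
function videoActionApplies(participants: ParticipantKind[]): boolean {
  return videoAudience(participants).length > 0;
}
```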
communication-services | Teams Client Experience | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/teams-client-experience.md | The following image illustrates the experience of a Teams user using Teams clien The following image illustrates the experience of a Teams user using Teams client interacting with another Teams user from a different organization using Azure Communication Services SDK who joined Teams meeting.  +## Start a call to Teams user within the organization +The following image illustrates the experience of a Teams user using Teams client calling another Teams user from the same organization using Azure Communication Services SDK. First, the user opens a chat with the person and selects the call button. + ++If callee accepts the call, both users are connected via a 1:1 VoIP call. + ++## Start a call to Teams users from different organization +The following image illustrates the experience of a Teams user using Teams client calling another Teams user from a different organization using Azure Communication Services SDK. First, the user opens a chat with the person and selects the call button. + ++If callee accepts the call, both users are connected via a 1:1 VoIP call. + ++## Incoming call from Teams user within the organization +The following image illustrates the experience of a Teams user using Teams client receiving a notification of an incoming call from another Teams user from the same organization. The caller is using Azure Communication Services SDK. + ++## Incoming call from Teams user from a different organization +The following image illustrates the experience of a Teams user using Teams client receiving a notification of an incoming call from another Teams user from a different organization. The caller is using Azure Communication Services SDK. + ++ ## Next steps > [!div class="nextstepaction"] |
communication-services | Sdk Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md | The Calling package supports UWP apps build with .NET Native or C++/WinRT on: Communication Services APIs are documented alongside other [Azure REST APIs](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs). You can find throttling limits for individual APIs on [service limits page](./service-limits.md). -### REST API Throttles --Certain REST APIs and corresponding SDK methods have throttle limits you should be mindful of. Exceeding these throttle limits will trigger a`429 - Too Many Requests` error response. These limits can be increased through [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md). --| API| Throttle| -||| -| [All Search Telephone Number Plan APIs](/rest/api/communication/phonenumbers) | 4 requests/day| -| [Purchase Telephone Number Plan](/rest/api/communication/phonenumbers/purchasephonenumbers) | 1 purchase a month| -| [Send SMS](/rest/api/communication/sms/send) | 200 requests/minute | - ## API stability expectations > [!IMPORTANT] |
communication-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md | -This document explains some of the limitations of Azure Communication Services and what to do if you are running into these limitations. +This document explains the limitations of Azure Communication Services APIs and possible resolutions. ## Throttling patterns and architecture-When you hit service limitations you will generally receive an HTTP status code 429 (Too many requests). In general, the following are best practices for handling throttling: +When you hit service limitations, you'll generally receive an HTTP status code 429 (Too many requests). In general, the following are best practices for handling throttling: - Reduce the number of operations per request. - Reduce the frequency of calls.-- Avoid immediate retries, because all requests accrue against your usage limits.+- Avoid immediate retries because all requests accrue against your usage limits. -You can find more general guidance on how to set up your service architecture to handle throttling and limitations in the [Azure Architecture](/azure/architecture) documentation for [throttling patterns](/azure/architecture/patterns/throttling). +You can find more general guidance on how to set up your service architecture to handle throttling and limitations in the [Azure Architecture](/azure/architecture) documentation for [throttling patterns](/azure/architecture/patterns/throttling). Throttling limits can be increased through [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md). ## Acquiring phone numbers-Before trying to acquire a phone number, make sure your subscription meets the [geographic and subscription](./telephony/plan-solution.md) requirements, otherwise you can't purchase a phone number. 
The below limitations apply to purchasing numbers through the [Phone Numbers SDK](./reference.md) and the [Azure portal](https://portal.azure.com/). +Before acquiring a phone number, make sure your subscription meets the [geographic and subscription](./telephony/plan-solution.md) requirements. Otherwise, you can't purchase a phone number. The below limitations apply to purchasing numbers through the [Phone Numbers SDK](./reference.md) and the [Azure portal](https://portal.azure.com/). | Operation | Scope | Timeframe | Limit (number of requests) |-| | -- | -- | -- | +||--|--|--| | Purchase phone number | Azure tenant | - | 1 |-| Search for phone numbers | Azure tenant | 1 week | 5 | +| Search for phone numbers | Azure tenant | one week | 5 | ### Action to take For more information, see the [phone number types](./telephony/plan-solution.md) concept page and the [telephony concept](./telephony/telephony-concept.md) overview page. -If you would like to purchase more phone numbers or put in a special order, follow the [instructions here](https://github.com/Azure/Communication/blob/master/special-order-numbers.md). If you would like to port toll-free phone numbers from external accounts to their Azure Communication Services account follow the [instructions here](https://github.com/Azure/Communication/blob/master/port-numbers.md). +If you want to purchase more phone numbers or place a special order, follow the [instructions here](https://github.com/Azure/Communication/blob/master/special-order-numbers.md). If you want to port toll-free phone numbers from external accounts to your Azure Communication Services account, follow the [instructions here](https://github.com/Azure/Communication/blob/master/port-numbers.md).
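The throttling best practices at the top of this article (avoid immediate retries; back off when you receive a 429) can be sketched as a generic retry wrapper. This is a hypothetical helper, not an Azure SDK API; the `statusCode` and `retryAfterMs` error fields are illustrative assumptions:

```typescript
// Hypothetical retry helper following the throttling guidance above:
// never retry immediately, back off exponentially with jitter on HTTP 429,
// and honor a Retry-After hint when the service provides one.
async function withBackoff<T>(
  operation: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 500
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await operation();
    } catch (err: any) {
      const throttled = err?.statusCode === 429;
      if (!throttled || attempt + 1 >= maxAttempts) throw err;
      // Exponential backoff with jitter, floored by the server's hint.
      const retryAfterMs = Number(err?.retryAfterMs) || 0;
      const delay = Math.max(
        retryAfterMs,
        baseDelayMs * 2 ** attempt * (0.5 + Math.random() / 2)
      );
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Because retried requests still accrue against your usage limits, keep `maxAttempts` small and prefer reducing request frequency over aggressive retrying.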
## Identity If you would like to purchase more phone numbers or put in a special order, foll | **exchangeTokens**| 30 | 500 | ### Action to take-We always recommend you acquire identities and tokens in advance of starting other transactions like creating chat threads or starting calls, for example, right when your webpage is initially loaded or when the app is starting up. +We recommend acquiring identities and tokens before creating chat threads or starting calls. For example, when the webpage loads or the application starts. For more information, see the [identity concept overview](./authentication.md) page. ## SMS-When sending or receiving a high volume of messages, you might receive a ```429``` error. This indicates you are hitting the service limitations and your messages will be queued to be sent once the number of requests is below the threshold. +When sending or receiving a high volume of messages, you might receive a ```429``` error. This error indicates you're hitting the service limitations, and your messages will be queued to be sent once the number of requests is below the threshold. Rate Limits for SMS: Rate Limits for SMS: |Send Message|Per Number|60|200|200| ### Action to take-If you require sending an amount of messages that exceeds the rate-limits, please email us at phone@microsoft.com. +If you need to send a volume of messages that exceeds the rate limits, email us at phone@microsoft.com. For more information on the SMS SDK and service, see the [SMS SDK overview](./sms/sdk-features.md) page or the [SMS FAQ](./sms/sms-faq.md) page. ## Email-Sending high volume of messages has a set of limitation on the number of email messages that you can send. If you hit these limits, your messages will not be queued to be sent. You can submit these requests again, once the Retry-After time expires. +Sending a high volume of messages has a set of limitations on the number of email messages you can send.
If you hit these limits, your messages won't be queued to be sent. You can submit these requests again once the Retry-After time expires. ### Rate Limits -|Operation|Scope|Timeframe (minutes)| Limit (number of email) | +|Operation|Scope|Timeframe (minutes)| Limit (number of emails) | ||--|-|-| |Send Email|Per Subscription|1|10| |Send Email|Per Subscription|60|25| Sending high volume of messages has a set of limitation on the number of email m | **Name** | Limit | |--|--| |Number of recipients in Email|50 |-|Attachment size - per messages|10 MB | +|Attachment size - per message|10 MB | ### Action to take-This sandbox setup is to help developers to start building the application and gradually you can request to increase the sending volume as soon as the application is ready to go live. If you require sending a number of messages that exceeds the rate-limits, please submit a support request to increase to your desired sending limit. +This sandbox setup is to help developers start building the application. You can gradually request to increase the sending volume once the application is ready to go live. Submit a support request to raise your desired sending limit if you need to send a volume of messages exceeding the rate limits.
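Since rejected email requests are not queued, an application can track its own sends and stay under a published window limit (for example, 10 emails per 1-minute window per subscription, per the table above). A minimal client-side sketch, assuming you gate each send through it — this is not an Azure SDK feature:

```typescript
// Illustrative client-side sliding-window limiter: record send timestamps
// and refuse a send that would exceed the limit within the window, so the
// service never returns a throttling error in the first place.
class SlidingWindowLimiter {
  private sent: number[] = [];
  constructor(private limit: number, private windowMs: number) {}

  // Returns true if a send is allowed now, recording it; false otherwise.
  tryAcquire(now: number = Date.now()): boolean {
    this.sent = this.sent.filter((t) => now - t < this.windowMs);
    if (this.sent.length >= this.limit) return false;
    this.sent.push(now);
    return true;
  }
}
```

When `tryAcquire` returns false, defer the message locally instead of calling the service, since rejected requests must be resubmitted after the Retry-After time anyway.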
## Chat The Communication Services Calling SDK supports the following streaming configur | Limit | Web | Windows/Android/iOS | | - | | -- |-| **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video or 1 screen sharing | 1 video + 1 screen sharing | -| **Maximum # of incoming remote streams that can be rendered simultaneously** | 4 videos + 1 screen sharing | 6 videos + 1 screen sharing | +| **Maximum # of outgoing local streams that you can send simultaneously** | one video or one screen sharing | one video + one screen sharing | +| **Maximum # of incoming remote streams that you can render simultaneously** | four videos + one screen sharing | six videos + one screen sharing | While the Calling SDK won't enforce these limits, your users may experience performance degradation if they're exceeded. The following timeouts apply to the Communication Services Calling SDKs: For more information about the voice and video calling SDK and service, see the [calling SDK overview](./voice-video-calling/calling-sdk-features.md) page or [known issues](./known-issues.md). ## Teams Interoperability and Microsoft Graph-If you are using a Teams interoperability scenario, you will likely end up using some Microsoft Graph APIs to create [meetings](/graph/cloud-communications-online-meetings). +If you use a Teams interoperability scenario, you'll likely use some Microsoft Graph APIs to create [meetings](/graph/cloud-communications-online-meetings). Each service offered through Microsoft Graph has different limitations; service-specific limits are [described here](/graph/throttling) in more detail. You can find more information on Microsoft Graph [throttling](/graph/throttling) | **Issue Relay Configuration** | 5 | 30000| ### Action to take-We always recommend you acquire tokens in advance of starting other transactions like creating a relay connection. +We recommend acquiring tokens before starting other transactions, like creating a relay connection.
For more information, see the [network traversal concept overview](./network-traversal.md) page. -## Still need help? -See the [help and support](../support.md) options available to you. - ## Next steps+See the [help and support](../support.md) options. |
communication-services | Teams Interop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md | Applications can implement both authentication models and leave the choice of au |||| |Target user base|Customers|Enterprise| |Identity provider|Any|Azure Active Directory|+| Display name |Any with the suffix "(External)"| Azure Active Directory user's value of the property "Display name" | |Authentication & authorization|Custom*| Azure Active Directory and custom*| |Calling available via | Communication Services Calling SDKs | Communication Services Calling SDKs | |Chat is available via | Communication Services Chat SDKs | Graph API | |
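The "Display name" row added to the comparison table above can be modeled directly: users on the external-identity model are rendered with an "(External)" suffix, while Azure AD users show their directory "Display name" property. A tiny illustrative sketch (the function and model names are hypothetical, not SDK APIs):

```typescript
// Illustrative rendering of the "Display name" rule in the table above:
// external-identity users get the "(External)" suffix; Azure AD users are
// shown with their directory display name unchanged.
function renderedDisplayName(
  name: string,
  model: "external" | "azureAd"
): string {
  return model === "external" ? `${name} (External)` : name;
}
```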
confidential-computing | Overview Azure Products | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview-azure-products.md | Verifying that applications are running confidentially forms the very foundation Technologies like [Intel Software Guard Extensions](https://www.intel.com.au/content/www/au/en/architecture-and-technology/software-guard-extensions-enhanced-data-protection.html) (Intel SGX) or [AMD Secure Encrypted Virtualization](https://www.amd.com/en/processors/amd-secure-encrypted-virtualization) (SEV-SNP) are recent CPU improvements supporting confidential computing implementations. These technologies are designed as virtualization extensions and provide feature sets including memory encryption and integrity, CPU-state confidentiality and integrity, and attestation, for building the confidential computing threat model. Azure confidential computing leverages these technologies in the following computation resources: -- [Confidential VMs with Intel SGX application enclaves](confidential-computing-enclaves.md). Azure offers the [DCsv2](../virtual-machines/dcv2-series.md), [DCsv3, and DCdsv3](../virtual-machines/dcv3-series.md) series built on Intel SGX technology for hardware-based enclave creation. You can build secure enclave-based applications to run in a series of VMs to protect your application data and code in use.+- [VMs with Intel SGX application enclaves](confidential-computing-enclaves.md). Azure offers the [DCsv2](../virtual-machines/dcv2-series.md), [DCsv3, and DCdsv3](../virtual-machines/dcv3-series.md) series built on Intel SGX technology for hardware-based enclave creation. You can build secure enclave-based applications to run in a series of VMs to protect your application data and code in use.+- [App-enclave aware containers](enclave-aware-containers.md) running on Azure Kubernetes Service (AKS). 
Confidential computing nodes on AKS use Intel SGX to create isolated enclave environments in the nodes between each container application. |
container-instances | Container Instances Github Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md | Get the resource ID of your container registry. Substitute the name of your regi ```azurecli registryId=$(az acr show \ --name <registry-name> \+ --resource-group <resource-group-name> \ --query id --output tsv) ``` |
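Once the registry's resource ID is captured in `registryId`, the GitHub Actions tutorial goes on to create credentials scoped to that registry. A hedged sketch of that follow-on step is below; the service principal name is a placeholder, and the exact parameters may differ by Azure CLI version, so treat this as illustrative rather than authoritative:

```azurecli
# Illustrative: create a service principal with push rights limited to the
# registry captured in $registryId. Replace <sp-name> with your own name.
az ad sp create-for-rbac \
  --name <sp-name> \
  --scopes $registryId \
  --role AcrPush
```

The command prints the `appId`, `password`, and `tenant` values you would store as GitHub secrets for the workflow.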
cosmos-db | Linux Emulator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/linux-emulator.md | Use the following steps to run the emulator on Linux: | Ports: `-p` | | Currently, only ports 8081 and 10251-10255 are needed by the emulator endpoint. | | `AZURE_COSMOS_EMULATOR_PARTITION_COUNT` | 10 | Controls the total number of physical partitions, which in turn controls the number of containers that can be created and can exist at a given point in time. We recommend starting small to improve the emulator startup time, for example 3. | | Memory: `-m` | | On memory, 3 GB or more is required. |-| Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores. At least two cores are recommended. | +| Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores. At least four cores are recommended. | |`AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE` | false | This setting used by itself will help persist the data between container restarts. |-|`AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` | | This setting enables the MongoDB API endpoint for the emulator and configures the MongoDB server version. (Valid server version values include ``3.2``, ``3.6``, and ``4.0``) | +|`AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` | | This setting enables the MongoDB API endpoint for the emulator and configures the MongoDB server version. (Valid server version values include ``3.2``, ``3.6``, ``4.0``, and ``4.2``) | ## Troubleshoot issues |
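The emulator settings in the table above map directly onto `docker run` flags. A minimal sketch, assuming the standard emulator image name from the Linux emulator documentation; the partition count and persistence values are example choices you should tune:

```bash
# Illustrative: run the Linux emulator with the documented flags.
# 4 CPUs and 3 GB memory match the minimums recommended in the table.
docker run \
  --publish 8081:8081 \
  --publish 10251-10255:10251-10255 \
  --memory 3g --cpus=4.0 \
  --env AZURE_COSMOS_EMULATOR_PARTITION_COUNT=3 \
  --env AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true \
  --interactive --tty \
  mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator
```

Starting with a small partition count (here, 3) shortens emulator startup, at the cost of fewer containers that can exist at once.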
cosmos-db | Resource Locks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-locks.md | ms.devlang: azurecli # Prevent Azure Cosmos DB resources from being deleted or changed+ [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)] -As an administrator, you may need to lock an Azure Cosmos account, database or container to prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to CanNotDelete or ReadOnly. +As an administrator, you may need to lock an Azure Cosmos account, database or container. Locks prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to ``CanNotDelete`` or ``ReadOnly``. -- **CanNotDelete** means authorized users can still read and modify a resource, but they can't delete the resource.-- **ReadOnly** means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.+| Level | Description | +| | | +| ``CanNotDelete`` | Authorized users can still read and modify a resource, but they can't delete the resource. | +| ``ReadOnly`` | Authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the **Reader** role. | ## How locks are applied When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the lock from the parent. The most restrictive lock in the inheritance takes precedence. -Unlike Azure role-based access control, you use management locks to apply a restriction across all users and roles. To learn about Azure RBAC for Azure Cosmos DB see, [Azure role-based access control in Azure Cosmos DB](role-based-access-control.md). 
+Unlike Azure role-based access control, you use management locks to apply a restriction across all users and roles. To learn about role-based access control for Azure Cosmos DB, see [Azure role-based access control in Azure Cosmos DB](role-based-access-control.md). -Resource Manager locks apply only to operations that happen in the management plane, which consists of operations sent to https://management.azure.com. The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on an Azure Cosmos container prevents you from deleting or modifying the container. It doesn't prevent you from creating, updating, or deleting data in the container. Data transactions are permitted because those operations aren't sent to https://management.azure.com. +Resource Manager locks apply only to operations that happen in the management plane, which consists of operations sent to <https://management.azure.com>. The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on an Azure Cosmos container prevents you from deleting or modifying the container. It doesn't prevent you from creating, updating, or deleting data in the container. Data transactions are permitted because those operations aren't sent to <https://management.azure.com>. ## Manage locks -> [!WARNING] -> Resource locks do not work for changes made by users accessing Azure Cosmos DB using account keys unless the Azure Cosmos account is first locked by enabling the disableKeyBasedMetadataWriteAccess property. Care should be taken before enabling this property to ensure it does not break existing applications that make changes to resources using any SDK, Azure portal or 3rd party tools that connect via account keys and modify resources such as changing throughput, updating index policies, etc. 
To learn more and to go through a checklist to ensure your applications continue to function see, [Preventing changes from the Azure Cosmos DB SDKs](role-based-access-control.md#prevent-sdk-changes) +Resource locks don't work for changes made by users accessing Azure Cosmos DB using account keys unless the Azure Cosmos account is first locked by enabling the ``disableKeyBasedMetadataWriteAccess`` property. Ensure this property doesn't break existing applications that make changes to resources using any SDK, Azure portal, or third-party tools. Enabling this property will break applications that connect via account keys and modify resources such as changing throughput, updating index policies, etc. To learn more and to go through a checklist to ensure your applications continue to function, see [preventing changes from the Azure Cosmos DB SDKs](role-based-access-control.md#prevent-sdk-changes) ++### [PowerShell](#tab/powershell) ++```powershell-interactive +$RESOURCE_GROUP_NAME = "myResourceGroup" +$ACCOUNT_NAME = "my-cosmos-account" +$LOCK_NAME = "$ACCOUNT_NAME-Lock" +``` ++First, update the account to prevent changes by anything that connects via account keys. -### PowerShell +```powershell-interactive +$parameters = @{ + Name = $ACCOUNT_NAME + ResourceGroupName = $RESOURCE_GROUP_NAME + DisableKeyBasedMetadataWriteAccess = $true +} +Update-AzCosmosDBAccount @parameters +``` ++Create a Delete Lock on an Azure Cosmos account resource and all child resources. 
```powershell-interactive-$resourceGroupName = "myResourceGroup" -$accountName = "my-cosmos-account" -$lockName = "$accountName-Lock" --# First, update the account to prevent changes by anything that connects via account keys -Update-AzCosmosDBAccount -ResourceGroupName $resourceGroupName -Name $accountName -DisableKeyBasedMetadataWriteAccess true --# Create a Delete Lock on an Azure Cosmos account resource and all child resources -New-AzResourceLock ` - -ApiVersion "2020-04-01" ` - -ResourceType "Microsoft.DocumentDB/databaseAccounts" ` - -ResourceGroupName $resourceGroupName ` - -ResourceName $accountName ` - -LockName $lockName ` - -LockLevel "CanNotDelete" # CanNotDelete or ReadOnly +$parameters = @{ + ResourceGroupName = $RESOURCE_GROUP_NAME + ResourceName = $ACCOUNT_NAME + LockName = $LOCK_NAME + ApiVersion = "2020-04-01" + ResourceType = "Microsoft.DocumentDB/databaseAccounts" + LockLevel = "CanNotDelete" +} +New-AzResourceLock @parameters ``` -### Azure CLI +### [Azure CLI](#tab/azure-cli) ```azurecli-interactive resourceGroupName='myResourceGroup' accountName='my-cosmos-account'-$lockName="$accountName-Lock" +lockName="$accountName-Lock" +``` ++First, update the account to prevent changes by anything that connects via account keys. 
++```azurecli-interactive +az cosmosdb update \ + --name $accountName \ + --resource-group $resourceGroupName \ + --disable-key-based-metadata-write-access true +``` -# First, update the account to prevent changes by anything that connects via account keys -az cosmosdb update --name $accountName --resource-group $resourceGroupName --disable-key-based-metadata-write-access true +Create a Delete Lock on an Azure Cosmos account resource. -# Create a Delete Lock on an Azure Cosmos account resource -az lock create --name $lockName \ +```azurecli-interactive +az lock create \ + --name $lockName \ --resource-group $resourceGroupName \+ --lock-type 'CanNotDelete' \ --resource-type Microsoft.DocumentDB/databaseAccounts \- --lock-type 'CanNotDelete' # CanNotDelete or ReadOnly \ --resource $accountName ``` -### Template + -When applying a lock to an Azure Cosmos DB resource, use the following formats: +### Template -- name - `{resourceName}/Microsoft.Authorization/{lockName}`-- type - `{resourceProviderNamespace}/{resourceType}/providers/locks`+When applying a lock to an Azure Cosmos DB resource, use the [``Microsoft.Authorization/locks``](/azure/templates/microsoft.authorization/2017-04-01/locks) Azure Resource Manager (ARM) resource. -> [!IMPORTANT] -> When modifying an existing Azure Cosmos account, make sure to include the other properties for your account and child resources when redeploying with this property. Do not deploy this template as is or it will reset all of your account properties. 
+#### [JSON](#tab/json) ```json-"resources": [ - { - "type": "Microsoft.DocumentDB/databaseAccounts", - "name": "[variables('accountName')]", - "apiVersion": "2020-04-01", - "kind": "GlobalDocumentDB", - "location": "[parameters('location')]", - "properties": { - "consistencyPolicy": "[variables('consistencyPolicy')[parameters('defaultConsistencyLevel')]]", - "locations": "[variables('locations')]", - "databaseAccountOfferType": "Standard", - "enableAutomaticFailover": "[parameters('automaticFailover')]", - "disableKeyBasedMetadataWriteAccess": true - } - }, - { - "type": "Microsoft.DocumentDB/databaseAccounts/providers/locks", - "apiVersion": "2020-04-01", - "name": "[concat(variables('accountName'), '/Microsoft.Authorization/siteLock')]", - "dependsOn": [ - "[resourceId('Microsoft.DocumentDB/databaseAccounts', variables('accountName'))]" - ], - "properties": { +{ + "type": "Microsoft.Authorization/locks", + "apiVersion": "2017-04-01", + "name": "cosmoslock", + "dependsOn": [ + "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]" + ], + "properties": { "level": "CanNotDelete",- "notes": "Cosmos account should not be deleted." - } - } -] + "notes": "Do not delete Azure Cosmos DB account." + }, + "scope": "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]" +} +``` ++#### [Bicep](#tab/bicep) ++```bicep +resource lock 'Microsoft.Authorization/locks@2017-04-01' = { + name: 'cosmoslock' + scope: account + properties: { + level: 'CanNotDelete' + notes: 'Do not delete Azure Cosmos DB SQL API account.' + } +} ``` ++ ## Samples Manage resource locks for Azure Cosmos DB: |
cosmos-db | Best Practice Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-dotnet.md | -This article walks through the best practices for using the Azure Cosmos DB .NET SDK. Following these practices, will help improve your latency, availability, and boost overall performance. +This article walks through the best practices for using the Azure Cosmos DB .NET SDK. Following these practices will help improve your latency and availability, and boost overall performance. Watch the video below to learn more about using the .NET SDK from a Cosmos DB engineer! Watch the video below to learn more about using the .NET SDK from a Cosmos DB en |<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property controls how long unused connections stay open before they're closed. This will reduce the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommend values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. | |<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. 
Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. | |<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Cosmos DB [visit](troubleshoot-dot-net-sdk-request-timeout.md) |-|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](troubleshoot-dot-net-sdk.md#retry-logics) | +|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. 
For details on which errors to retry on [visit](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) | |<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. | |<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. | | <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you aren't aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. | Watch the video below to learn more about using the .NET SDK from a Cosmos DB en | <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. 
| | <input type="checkbox"/> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. | | <input type="checkbox"/> | Enabling Query Metrics | For more logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics [visit](profile-sql-api-query.md) |-| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture extra diagnostics information and troubleshoot latency issues. Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet&preserve-view=true) in the V2 SDK or [`Diagnostics`](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet&preserve-view=true) in v3 SDK for more detailed cosmos diagnostic information for the current request to the service. As an example use case, capture Diagnostics on any exception and on completed operations if the `Diagnostics.ElapsedTime` is greater than a designated threshold value (that is, if you have an SLA of 10 seconds, then capture diagnostics when `ElapsedTime` > 10 seconds). It's advised to only use these diagnostics during performance testing. | +| <input type="checkbox"/> | SDK Logging | Log [SDK diagnostics](#capture-diagnostics) for outstanding scenarios, such as exceptions or when requests go beyond an expected latency. | <input type="checkbox"/> | DefaultTraceListener | The DefaultTraceListener poses performance issues in production environments, causing high CPU and I/O bottlenecks. 
Make sure you're using the latest SDK versions or [remove the DefaultTraceListener from your application](performance-tips-dotnet-sdk-v3-sql.md#logging-and-tracing) | +## Capture diagnostics ++ ## Best practices when using Gateway mode+ Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value. ## Best practices for write-heavy workloads+ For workloads that have heavy create payloads, set the `EnableContentResponseOnWrite` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance. ## Next steps+ For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md). To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md). |
cosmos-db | Create Cosmosdb Resources Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-cosmosdb-resources-portal.md | -This quickstart demonstrates how to use the Azure portal to create an Azure Cosmos DB [SQL API](../introduction.md) account, create a document database, and container, and add data to the container. +This quickstart demonstrates how to use the Azure portal to create an Azure Cosmos DB [SQL API](../introduction.md) account, create a document database and container, and add data to the container. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). ## Prerequisites |
cosmos-db | Odbc Driver | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/odbc-driver.md | Follow these steps to create a view for your data: You can create as many views as you like. Once you're done defining the views, select **Sample** to sample the data. +> [!IMPORTANT] +> The query text in the view definition should not contain line breaks. Otherwise, you'll get a generic error when previewing the view. ++ ## Query with SQL Server Management Studio Once you set up an Azure Cosmos DB ODBC Driver User DSN, you can query Azure Cosmos DB from SQL Server Management Studio (SSMS) by setting up a linked server connection. You can use your DSN to connect to Azure Cosmos DB with any ODBC-compliant tools ## Next steps - To learn more about Azure Cosmos DB, see [Welcome to Azure Cosmos DB](../introduction.md).-- For more information about creating visualizations in Power BI Desktop, see [Visualization types in Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-visualization-types-for-reports-and-q-and-a/).+- For more information about creating visualizations in Power BI Desktop, see [Visualization types in Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-visualization-types-for-reports-and-q-and-a/). |
cosmos-db | Quick Create Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-bicep.md | -Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. This quickstart focuses on the process of deploying a Bicep file to create an Azure Cosmos database and a container within that database. You can later store data in this container. +Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying a Bicep file to create an Azure Cosmos database and a container within that database. You can later store data in this container. [!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)] |
cosmos-db | Sql Api Dotnet Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-application.md | -This tutorial shows you how to use Azure Cosmos DB to store and access data from an ASP.NET MVC application that is hosted on Azure. In this tutorial, you use the .NET SDK V3. The following image shows the web page that you'll build by using the sample in this article: +This tutorial shows you how to use Azure Cosmos DB to store and access data from an ASP.NET MVC application that is hosted on Azure. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). In this tutorial, you use the .NET SDK V3. The following image shows the web page that you'll build by using the sample in this article: :::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-image01.png" alt-text="Screenshot of the todo list MVC web application created by this tutorial - ASP NET Core MVC tutorial step by step"::: This tutorial covers: Before following the instructions in this article, make sure that you have the following resources: -* An active Azure account. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +* An active Azure account. If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card. [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)] |
cosmos-db | Sql Api Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-get-started.md | -Welcome to the Azure Cosmos DB SQL API get started tutorial. After following this tutorial, you'll have a console application that creates and queries Azure Cosmos DB resources. +Welcome to the Azure Cosmos DB SQL API get started tutorial. After following this tutorial, you'll have a console application that creates and queries Azure Cosmos DB resources. This tutorial uses version 3.0 or later of the [Azure Cosmos DB .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) and [.NET 6](https://dotnet.microsoft.com/download). Now let's get started! ## Prerequisites -An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/free/). +An active Azure account. If you don't have one, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card or an Azure subscription. [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)] That's it, build it, and you're on your way! * If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md) -[cosmos-db-create-account]: create-sql-api-java.md#create-a-database-account +[cosmos-db-create-account]: create-sql-api-java.md#create-a-database-account |
cosmos-db | Sql Api Nodejs Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-nodejs-application.md | -This tutorial demonstrates how to create a SQL API account in Azure Cosmos DB by using the Azure portal. You then build and run a web app that is built on the Node.js SDK to create a database and container, and add items to the container. This tutorial uses JavaScript SDK version 3.0. +This tutorial demonstrates how to create a SQL API account in Azure Cosmos DB by using the Azure portal. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). You then build and run a web app that is built on the Node.js SDK to create a database and container, and add items to the container. This tutorial uses JavaScript SDK version 3.0. This tutorial covers the following tasks: This tutorial covers the following tasks: Before following the instructions in this article, ensure that you have the following resources: -* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. +* If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card. [!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)] When these resources are no longer needed, you can delete the resource group, Az [Node.js]: https://nodejs.org/ [Git]: https://git-scm.com/-[GitHub]: https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-todo-app +[GitHub]: https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-todo-app |
cosmos-db | Troubleshoot Dot Net Sdk Slow Request | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md | description: Learn how to diagnose and fix slow requests when you use Azure Cosm Previously updated : 08/19/2022 Last updated : 08/30/2022 When you design your application, [follow the .NET SDK best practices](performan Consider the following when developing your application: * The application should be in the same region as your Azure Cosmos DB account.-* Your [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion), [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions), or [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations) for V2 SDK configuration is should reflect your regional preference and point to the region your application is deployed on. +* Your [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion) or [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions) should reflect your regional preference and point to the region your application is deployed on. * There might be a bottleneck on the Network interface because of high traffic. If the application is running on Azure Virtual Machines, there are possible workarounds: * Consider using a [Virtual Machine with Accelerated Networking enabled](../../virtual-network/create-vm-accelerated-networking-powershell.md). * Enable [Accelerated Networking on an existing Virtual Machine](../../virtual-network/create-vm-accelerated-networking-powershell.md#enable-accelerated-networking-on-existing-vms). 
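The accelerated-networking workaround in the bullets above can also be applied from the Azure CLI. A hedged sketch, with placeholder NIC and resource group names; the VM must be deallocated (or of a size that supports the change) before toggling the setting:

```azurecli
# Illustrative: enable accelerated networking on an existing NIC.
# Replace myNic and myResourceGroup with your own resource names.
az network nic update \
  --name myNic \
  --resource-group myResourceGroup \
  --accelerated-networking true
```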
If you need to verify that a database or container exists, don't do so by callin ## Slow requests on bulk mode -[Bulk mode](tutorial-sql-api-dotnet-bulk-import.md) is a throughput optimized mode meant for high data volume operations, not a latency optimized mode; it's meant to saturate the available throughput. If you are experiencing slow requests when using bulk mode make sure that: +[Bulk mode](tutorial-sql-api-dotnet-bulk-import.md) is a throughput optimized mode meant for high data volume operations, not a latency optimized mode; it's meant to saturate the available throughput. If you're experiencing slow requests when using bulk mode make sure that: * Your application is compiled in Release configuration.-* You are not measuring latency while debugging the application (no debuggers attached). +* You aren't measuring latency while debugging the application (no debuggers attached). * The volume of operations is high, don't use bulk for less than 1000 operations. Your provisioned throughput dictates how many operations per second you can process, your goal with bulk would be to utilize as much of it as possible.-* Monitor the container for [throttling scenarios](troubleshoot-request-rate-too-large.md). If the container is getting heavily throttled it means the volume of data is larger than your provisioned throughput, you need to either scale up the container or reduce the volume of data (maybe create smaller batches of data at a time). -* You are correctly using the `async/await` pattern to [process all concurrent Tasks](tutorial-sql-api-dotnet-bulk-import.md#step-6-populate-a-list-of-concurrent-tasks) and not [blocking any async operation](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait). +* Monitor the container for [throttling scenarios](troubleshoot-request-rate-too-large.md). 
If the container is getting heavily throttled, it means the volume of data is larger than your provisioned throughput; you need to either scale up the container, or reduce the volume of data (for example, create smaller batches of data at a time). +* You're correctly using the `async/await` pattern to [process all concurrent Tasks](tutorial-sql-api-dotnet-bulk-import.md#step-6-populate-a-list-of-concurrent-tasks) and not [blocking any async operation](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait). -## <a name="capture-diagnostics"></a>Capture the diagnostics +## Capture diagnostics -All the responses in the SDK, including `CosmosException`, have a `Diagnostics` property. This property records all the information related to the single request, including if there were retries or any transient failures. --The diagnostics are returned as a string. The string changes with each version, as it's improved for troubleshooting different scenarios. With each version of the SDK, the string will have breaking changes to the formatting. Don't parse the string to avoid breaking changes. 
The following code sample shows how to read diagnostic logs by using the .NET SDK: --```c# -try -{ - ItemResponse<Book> response = await this.Container.CreateItemAsync<Book>(item: testItem); - if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan) - { - // Log the response.Diagnostics.ToString() and add any additional info necessary to correlate to other logs - } -} -catch (CosmosException cosmosException) -{ - // Log the full exception including the stack trace with: cosmosException.ToString() - - // The Diagnostics can be logged separately if required with: cosmosException.Diagnostics.ToString() -} --// When using Stream APIs -ResponseMessage response = await this.Container.CreateItemStreamAsync(partitionKey, stream); -if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan || !response.IsSuccessStatusCode) -{ - // Log the diagnostics and add any additional info necessary to correlate to other logs with: response.Diagnostics.ToString() -} -``` ## Diagnostics in version 3.19 and later The JSON structure has breaking changes with each version of the SDK. This makes it unsafe to be parsed. The JSON represents a tree structure of the request going through the SDK. The following sections cover a few key things to look at. -### <a name="cpu-history"></a>CPU history +### CPU history High CPU utilization is the most common cause for slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where the requests might do multiple connections for a single query. The timeouts include diagnostics, which contain the following, for example: * If the `cpu` values are over 70 percent, the timeout is likely to be caused by CPU exhaustion. 
In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size. * If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.-* If the `dateUtc` time between measurements is not approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent task that is enqueued in the thread pool every 10 seconds. If the time between measurements is longer, it indicates that the async tasks aren't able to be processed in a timely fashion. The most common scenario is when your application code is [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait). +* If the `dateUtc` time between measurements isn't approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent task that is enqueued in the thread pool every 10 seconds. If the time between measurements is longer, it indicates that the async tasks aren't able to be processed in a timely fashion. The most common scenario is when your application code is [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait). # [Older SDK](#tab/cpu-old) CPU count: 8) ``` * If the CPU measurements are over 70 percent, the timeout is likely to be caused by CPU exhaustion. 
In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.-* If the CPU measurements are not happening every 10 seconds (for example, there are gaps, or measurement times indicate longer times in between measurements), the cause is thread starvation. In this case the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size. +* If the CPU measurements aren't happening every 10 seconds (for example, there are gaps, or measurement times indicate longer times in between measurements), the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size. CPU count: 8) The client application that uses the SDK should be scaled up or out. -### <a name="httpResponseStats"></a>HttpResponseStats +### HttpResponseStats `HttpResponseStats` are requests that go to the [gateway](sql-sdk-connection-modes.md). Even in direct mode, the SDK gets all the metadata information from the gateway. If the request is slow, first verify that none of the previous suggestions yield ] ``` -### <a name="storeResult"></a>StoreResult +### StoreResult `StoreResult` represents a single request to Azure Cosmos DB, by using direct mode with the TCP protocol. For multiple store results for a single request, be aware of the following: * Strong consistency and bounded staleness consistency always have at least two store results. * Check the status code of each `StoreResult`. The SDK retries automatically on multiple different [transient failures](troubleshoot-dot-net-sdk-request-timeout.md). The SDK is constantly improved to cover more scenarios. 
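The thread-starvation symptoms described above are most often caused by blocking on async work. A minimal illustration of the anti-pattern and its fix, assuming a `container`, `id`, `partitionKey`, and the article's `Book` type are in scope:

```csharp
// Anti-pattern: blocking the calling thread on an async SDK call.
// Under load this exhausts thread-pool threads and delays the 10-second
// CPU measurement task, producing the gaps described above.
ItemResponse<Book> bad = container.ReadItemAsync<Book>(id, partitionKey).Result;

// Fix: await the call so the thread returns to the pool while waiting.
ItemResponse<Book> good = await container.ReadItemAsync<Book>(id, partitionKey);
```

The same rule applies to `Task.Wait()`, `Task.WaitAll()`, and `GetAwaiter().GetResult()` on hot paths.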
-### <a name="rntbdRequestStats"></a>RntbdRequestStats +### RntbdRequestStats Show the time for the different stages of sending and receiving a request in the transport layer. Show the time for the different stages of sending and receiving a request in the * *Transit time is large*, which leads to a networking problem. Compare this number to the `BELatencyInMs`. If `BELatencyInMs` is small, then the time was spent on the network, and not on the Azure Cosmos DB service. * *Received time is large* might be caused by a thread starvation problem. This is the time between having the response and returning the result. -### <a name="ServiceEndpointStatistics"></a>ServiceEndpointStatistics +### ServiceEndpointStatistics + Information about a particular backend server. The SDK can open multiple connections to a single backend server depending upon the number of pending requests and the MaxConcurrentRequestsPerConnection. -* `inflightRequests` The number of pending requests to a backend server (maybe from different partitions). A high number may to lead to more traffic and higher latencies. -* `openConnections` is the total Number of connections open to a single backend server. This can be useful to show SNAT port exhausion if this number is very high. +* `inflightRequests`: The number of pending requests to a backend server (possibly from different partitions). A high number may lead to more traffic and higher latencies. +* `openConnections`: The total number of connections open to a single backend server. This can be useful to show SNAT port exhaustion if this number is very high. ++### ConnectionStatistics -### <a name="ConnectionStatistics"></a>ConnectionStatistics Information about the particular connection (new or old) the request gets assigned to. * `waitforConnectionInit`: The current request was waiting for new connection initialization to complete. 
This will lead to higher latencies.-* `callsPendingReceive`: Number of calls that was pending receive before this call was sent. A high number can show us that there were a lot of calls before this call and it may lead to higher latencies. If this number is high it points to a head of line blocking issue possibly caused by another request like query or feed operation that is taking a long time to process. Try lowering the CosmosClientOptions.MaxRequestsPerTcpConnection to increase the number of channels. -* `LastSentTime`: Time of last request that was sent to this server. This along with LastReceivedTime can be used to see connectivity or endpoint issues. For example if there are a lot of receive timeouts, Sent time will be much larger than the Receive time. +* `callsPendingReceive`: Number of calls that were pending receive before this call was sent. A high number can indicate that there were many calls ahead of this one, which may lead to higher latencies. If this number is high, it points to a head-of-line blocking issue, possibly caused by another request (such as a query or feed operation) that is taking a long time to process. Try lowering `CosmosClientOptions.MaxRequestsPerTcpConnection` to increase the number of channels. +* `LastSentTime`: Time of the last request that was sent to this server. This, along with `LastReceivedTime`, can be used to detect connectivity or endpoint issues. For example, if there are many receive timeouts, the Sent time will be much larger than the Receive time. 
* `lastReceive`: Time of last request that was received from this server * `lastSendAttempt`: Time of the last send attempt -### <a name="Request and response sizes"></a>Request and response sizes +### Request and response sizes + * `requestSizeInBytes`: The total size of the request sent to Cosmos DB * `responseMetadataSizeInBytes`: The size of headers returned from Cosmos DB * `responseBodySizeInBytes`: The size of content returned from Cosmos DB Contact [Azure support](https://aka.ms/azure-support). ## Next steps * [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) problems when you use the Azure Cosmos DB .NET SDK.-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md). +* Learn about performance guidelines for the [.NET SDK](performance-tips-dotnet-sdk-v3-sql.md). +* Learn about the best practices for the [.NET SDK](best-practice-dotnet.md) |
cosmos-db | Troubleshoot Dot Net Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk.md | Title: Diagnose and troubleshoot issues when using Azure Cosmos DB .NET SDK description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues when using .NET SDK. Previously updated : 08/19/2022 Last updated : 08/30/2022 The .NET SDK provides client-side logical representation to access the Azure Cos Consider the following checklist before you move your application to production. Using the checklist will prevent several common issues you might see. You can also quickly diagnose when an issue occurs: -* Use the latest [SDK](sql-api-sdk-dotnet-standard.md). Preview SDKs should not be used for production. This will prevent hitting known issues that are already fixed. -* Review the [performance tips](performance-tips.md), and follow the suggested practices. This will help prevent scaling, latency, and other performance issues. -* Enable the SDK logging to help you troubleshoot an issue. Enabling the logging may affect performance so it's best to enable it only when troubleshooting issues. You can enable the following logs: -* [Log metrics](../monitor-cosmos-db.md) by using the Azure portal. Portal metrics show the Azure Cosmos DB telemetry, which is helpful to determine if the issue corresponds to Azure Cosmos DB or if it's from the client side. -* Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring) in the V2 SDK or [diagnostics](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics) in V3 SDK from the point operation responses. 
-* Log the [SQL Query Metrics](sql-api-query-metrics.md) from all the query responses -* Follow the setup for [SDK logging]( https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/docs/documentdb-sdk_capture_etl.md) +* Use the latest [SDK](sql-api-sdk-dotnet-standard.md). Preview SDKs shouldn't be used for production. This will prevent hitting known issues that are already fixed. +* Review the [performance tips](performance-tips-dotnet-sdk-v3-sql.md), and follow the suggested practices. This will help prevent scaling, latency, and other performance issues. +* Enable the SDK logging to help you troubleshoot an issue. Enabling the logging may affect performance, so it's best to enable it only when troubleshooting issues. You can enable the following logs: + * [Log metrics](../monitor-cosmos-db.md) by using the Azure portal. Portal metrics show the Azure Cosmos DB telemetry, which is helpful to determine if the issue corresponds to Azure Cosmos DB or if it's from the client side. + * Log the [diagnostics string](#capture-diagnostics) from the operations and/or exceptions. -Take a look at the [Common issues and workarounds](#common-issues-workarounds) section in this article. +Take a look at the [Common issues and workarounds](#common-issues-and-workarounds) section in this article. -Check the [GitHub issues section](https://github.com/Azure/azure-cosmos-dotnet-v2/issues) that's actively monitored. Check to see if any similar issue with a workaround is already filed. If you didn't find a solution, then file a GitHub issue. You can open a support tick for urgent issues. +Check the [GitHub issues section](https://github.com/Azure/azure-cosmos-dotnet-v3/issues), which is actively monitored, to see if any similar issue with a workaround is already filed. If you didn't find a solution, then file a GitHub issue. You can open a support ticket for urgent issues. 
+## Capture diagnostics -## <a name="common-issues-workarounds"></a>Common issues and workarounds ++## Common issues and workarounds ### General suggestions-* Run your app in the same Azure region as your Azure Cosmos DB account, whenever possible. ++* Follow any `aka.ms` link included in the exception details. +* Run your app in the same Azure region as your Azure Cosmos DB account, whenever possible. * You may run into connectivity/availability issues due to lack of resources on your client machine. We recommend monitoring your CPU utilization on nodes running the Azure Cosmos DB client, and scaling up/out if they're running at high load. ### Check the portal metrics-Checking the [portal metrics](../monitor-cosmos-db.md) will help determine if it's a client-side issue or if there is an issue with the service. For example, if the metrics contain a high rate of rate-limited requests (HTTP status code 429) which means the request is getting throttled then check the [Request rate too large](troubleshoot-request-rate-too-large.md) section. -## Retry design <a id="retry-logics"></a><a id="retry-design"></a><a id="error-codes"></a> +Checking the [portal metrics](../monitor-cosmos-db.md) will help determine if it's a client-side issue or if there's an issue with the service. For example, if the metrics contain a high rate of rate-limited requests (HTTP status code 429), which means requests are getting throttled, check the [Request rate too large](troubleshoot-request-rate-too-large.md) section. ++### Retry design + See our guide to [designing resilient applications with Azure Cosmos SDKs](conceptual-resilient-sdk-applications.md) for guidance on how to design resilient applications and to learn about the retry semantics of the SDK. 
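As a hedged sketch of the throttling and retry guidance above, an application-level caller can honor the `RetryAfter` hint on a throttled (429) exception once the SDK's own internal retries are exhausted. The `container`, `id`, `partitionKey`, `logger`, and `Book` names are illustrative:

```csharp
using System;
using System.Net;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

try
{
    ItemResponse<Book> response = await container.ReadItemAsync<Book>(id, partitionKey);
}
catch (CosmosException ex) when (ex.StatusCode == (HttpStatusCode)429)
{
    // The SDK already retried internally; log the full diagnostics string,
    // then optionally wait the service-suggested interval before retrying
    // at the application level.
    logger.LogWarning(ex.Diagnostics.ToString());
    await Task.Delay(ex.RetryAfter ?? TimeSpan.FromSeconds(1));
}
```

A sustained 429 rate in the portal metrics is a signal to scale provisioned throughput rather than to retry harder.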
-### <a name="snat"></a>Azure SNAT (PAT) port exhaustion +### SNAT If your app is deployed on [Azure Virtual Machines without a public IP address](../../load-balancer/load-balancer-outbound-connections.md), by default [Azure SNAT ports](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports) establish connections to any endpoint outside of your VM. The number of connections allowed from the VM to the Azure Cosmos DB endpoint is limited by the [Azure SNAT configuration](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports). This situation can lead to connection throttling, connection closure, or the above-mentioned [Request timeouts](troubleshoot-dot-net-sdk-request-timeout.md). Azure SNAT ports are used only when a VM with a private IP address is connecting to a public IP address. There are two workarounds to avoid Azure SNAT limitation (p |