Updates from: 09/01/2022 01:08:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How To Mfa Server Migration Utility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-server-migration-utility.md
+
+ Title: How to use the MFA Server Migration Utility to migrate to Azure AD MFA - Azure Active Directory
+description: Step-by-step guidance to migrate MFA server settings to Azure AD using the MFA Server Migration Utility.
+Last updated: 08/30/2022
+# MFA Server migration
+
+This topic covers how to migrate MFA settings for Azure Active Directory (Azure AD) users from on-premises Azure MFA Server to Azure AD Multi-Factor Authentication.
+
+## Solution overview
+
+The MFA Server Migration Utility helps synchronize multifactor authentication data stored in the on-premises Azure MFA Server directly to Azure AD MFA.
+After the authentication data is migrated to Azure AD, users can perform cloud-based MFA seamlessly without having to register again or confirm authentication methods.
+Admins can use the MFA Server Migration Utility to target single users or groups of users for testing and controlled rollout without having to make any tenant-wide changes.
+
+## Limitations and requirements
+
+- The MFA Server Migration Utility is currently in public preview. Some features might not be supported or have limited capabilities. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+- The MFA Server Migration Utility requires a new preview build of the MFA Server solution to be installed on your Primary MFA Server. The build makes updates to the MFA Server data file, and includes the new MFA Server Migration Utility. You don't have to update the WebSDK or User portal. Installing the update _doesn't_ start the migration automatically.
+- The MFA Server Migration Utility copies the data from the database file onto the user objects in Azure AD. During migration, users can be targeted for Azure AD MFA for testing purposes using [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md). Staged migration lets you test without making any changes to your domain federation settings. Once migrations are complete, you must finalize your migration by making changes to your domain federation settings.
+- AD FS running Windows Server 2016 or higher is required to provide MFA authentication on any AD FS relying parties, not including Azure AD and Office 365.
+- Review your AD FS claims rules and make sure none requires MFA to be performed on-premises as part of the authentication process.
+- Staged rollout can target a maximum of 500,000 users (10 groups containing a maximum of 50,000 users each).
+
+## Migration guide
+
+|Phase|Steps|
+|:---|:---|
+|Preparations |[Identify Azure AD MFA Server dependencies](#identify-azure-ad-mfa-server-dependencies) |
+||[Backup Azure AD MFA Server datafile](#backup-azure-ad-mfa-server-datafile) |
+||[Install MFA Server update](#install-mfa-server-update) |
+||[Configure MFA Server Migration Utility](#configure-the-mfa-server-migration-utility) |
+|Migrations |[Migrate user data](#migrate-user-data)|
+||[Validate and test](#validate-and-test)|
+||[Staged Rollout](#enable-staged-rollout-using-azure-portal) |
+||[Educate users](#educate-users)|
+||[Complete user migration](#complete-user-migration)|
+|Finalize |[Migrate MFA Server dependencies](#migrate-mfa-server-dependencies)|
+||[Update domain federation settings](#update-domain-federation-settings)|
+||[Disable MFA Server User portal](#optional-disable-mfa-server-user-portal)|
+||[Decommission MFA server](#decommission-mfa-server)|
+
+An MFA Server migration generally includes the steps in the following process:
++
+A few important points:
+
+**Phase 1** should be repeated as you add test users.
+ - The migration tool uses Azure AD groups for determining the users for which authentication data should be synced between MFA Server and Azure AD MFA. After user data has been synced, that user is then ready to use Azure AD MFA.
+ - Staged Rollout allows you to reroute users to Azure AD MFA, also using Azure AD groups.
+ While you certainly could use the same groups for both tools, we recommend against it as users could potentially be redirected to Azure AD MFA before the tool has synced their data. We recommend setting up Azure AD groups for syncing authentication data by the MFA Server Migration Utility, and another set of groups for Staged Rollout to direct targeted users to Azure AD MFA rather than on-premises.
+
+**Phase 2** should be repeated as you migrate your user base. By the end of Phase 2, your entire user base should be using Azure AD MFA for all workloads federated against Azure AD.
+
+During the previous phases, you can remove users from the Staged Rollout groups to take them out of scope of Azure AD MFA and route them back to your on-premises Azure MFA server for all MFA requests originating from Azure AD.
+
+**Phase 3** requires moving all clients that authenticate to the on-premises MFA Server (VPNs, password managers, and so on) to Azure AD federation via SAML/OAUTH. If modern authentication standards aren't supported, you're required to stand up NPS server(s) with the Azure AD MFA extension installed. Once dependencies are migrated, users should no longer use the User portal on the MFA Server, but rather should manage their authentication methods in Azure AD ([aka.ms/mfasetup](https://aka.ms/mfasetup)). Once users begin managing their authentication data in Azure AD, those methods won't be synced back to MFA Server. If you roll back to the on-premises MFA Server after users have made changes to their Authentication Methods in Azure AD, those changes will be lost. After user migrations are complete, change the [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) domain federation setting. The change tells Azure AD to no longer perform MFA on-premises and to perform _all_ MFA requests with Azure AD MFA, regardless of group membership.
+
+The following sections explain the migration steps in more detail.
+
+### Identify Azure AD MFA Server dependencies
+
+We've worked hard to ensure that moving onto our cloud-based Azure AD MFA solution will maintain and even improve your security posture. There are three broad categories that should be used to group dependencies:
+
+- [MFA methods](#mfa-methods)
+- [User portal](#user-portal)
+- [Authentication services](#authentication-services)
+
+To help your migration, we've matched widely used MFA Server features with the functional equivalent in Azure AD MFA for each category.
+
+#### MFA methods
+
+Open MFA Server, click **Company Settings**:
+++
+|MFA Server|Azure AD MFA|
+|:---|:---|
+|**General Tab**||
+|**User Defaults section**||
+|Phone call (Standard)|No action needed|
+|Text message (OTP)<sup>*</sup>|No action needed|
+|Mobile app (Standard)|No action needed|
+|Phone Call (PIN)<sup>*</sup>|Enable Voice OTP |
+|Text message (OTP + PIN)<sup>**</sup>|No action needed|
+|Mobile app (PIN)<sup>*</sup>|Enable [number matching](how-to-mfa-number-match.md) |
+|Phone call/text message/mobile app/OATH token language|Language settings will be automatically applied to a user based on the locale settings in their browser|
+|**Default PIN rules section**|Not applicable; see updated methods in the preceding screenshot|
+|**Username Resolution tab**|Not applicable; username resolution isn't required for Azure AD MFA|
+|**Text Message tab**|Not applicable; Azure AD MFA uses a default message for text messages|
+|OATH Token tab|Not applicable; Azure AD MFA uses a default message for OATH tokens|
+|Reports|[Azure AD Authentication Methods Activity reports](howto-authentication-methods-activity.md)|
+
+<sup>*</sup>When a PIN is used to provide proof-of-presence functionality, the functional equivalent is provided above. PINs that aren't cryptographically tied to a device don't sufficiently protect against scenarios where a device has been compromised. To protect against these scenarios, including [SIM swap attacks](https://wikipedia.org/wiki/SIM_swap_scam), move users to more secure methods according to Microsoft authentication methods [best practices](concept-authentication-methods.md).
+
+<sup>**</sup>The default SMS MFA experience in Azure AD MFA sends users a code, which they're required to enter in the login window as part of authentication. The requirement to roundtrip the SMS code provides proof-of-presence functionality.
+
+#### User portal
+
+Open MFA Server, click **User Portal**:
++
+|MFA Server|Azure AD MFA|
+|:--:|:-:|
+|**Settings Tab**||
+|User portal URL|[aka.ms/mfasetup](https://aka.ms/mfasetup)|
+|Allow user enrollment|See [Combined security information registration](concept-registration-mfa-sspr-combined.md)|
+|- Prompt for backup phone|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)|
+|- Prompt for third-party OATH token|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)|
+|Allow users to initiate a One-Time Bypass|See [Azure AD TAP functionality](howto-authentication-temporary-access-pass.md)|
+|Allow users to select method|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)|
+|- Phone call|See [Phone call documentation](howto-mfa-mfasettings.md#phone-call-settings)|
+|- Text message|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)|
+|- Mobile app|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)|
+|- OATH token|See [OATH token documentation](howto-mfa-mfasettings.md#oath-tokens)|
+|Allow users to select language|Language settings will be automatically applied to a user based on the locale settings in their browser|
+|Allow users to activate mobile app|See [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings)|
+|- Device limit|Azure AD limits users to 5 cumulative devices (mobile app instances + hardware OATH token + software OATH token) per user|
+|Use security questions for fallback|Azure AD allows users to choose a fallback method at authentication time should the chosen authentication method fail|
+|- Questions to answer|Security Questions in Azure AD can only be used for SSPR. See more details for [Azure AD Custom Security Questions](concept-authentication-security-questions.md#custom-security-questions)|
+|Allow users to associate third-party OATH token|See [OATH token documentation](howto-mfa-mfasettings.md#oath-tokens)|
+|Use OATH token for fallback|See [OATH token documentation](howto-mfa-mfasettings.md#oath-tokens)|
+|Session Timeout||
+|**Security Questions tab** |Security questions in MFA Server were used to gain access to the User portal. Azure AD MFA only supports security questions for self-service password reset. See [security questions documentation](concept-authentication-security-questions.md).|
+|**Passed Sessions tab**|All authentication method registration flows are managed by Azure AD and don't require configuration|
+|**Trusted IPs**|[Azure AD trusted IPs](howto-mfa-mfasettings.md#trusted-ips)|
+
+Any MFA methods available in MFA Server must be enabled in Azure AD MFA by using [MFA Service settings](howto-mfa-mfasettings.md#mfa-service-settings).
+Users can't try their newly migrated MFA methods unless they're enabled.
+
+#### Authentication services
+Azure MFA Server can provide MFA functionality for third-party solutions that use RADIUS or LDAP by acting as an authentication proxy. To discover RADIUS or LDAP dependencies, click **RADIUS Authentication** and **LDAP Authentication** options in MFA Server. For each of these dependencies, determine if these third parties support modern authentication. If so, consider federation directly with Azure AD.
+
+For RADIUS deployments that can't be upgraded, you'll need to deploy an NPS Server and install the [Azure AD MFA NPS extension](howto-mfa-nps-extension.md).
+
+For LDAP deployments that can't be upgraded or moved to RADIUS, [determine if Azure Active Directory Domain Services can be used](/azure/active-directory/fundamentals/auth-ldap). In most cases, LDAP was deployed to support in-line password changes for end users. Once migrated, end users can manage their passwords by using [self-service password reset in Azure AD](tutorial-enable-sspr.md).
+
+If you enabled the [MFA Server Authentication provider in AD FS 2.0](/azure/active-directory/authentication/howto-mfaserver-adfs-windows-server#secure-windows-server-ad-fs-with-azure-multi-factor-authentication-server) on any relying party trusts except for the Office 365 relying party trust, you'll need to upgrade to [AD FS 3.0](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server) or federate those relying parties directly to Azure AD if they support modern authentication methods. Determine the best plan of action for each of the dependencies.
+
+### Backup Azure AD MFA Server datafile
+Make a backup of the MFA Server data file located at %programfiles%\Multi-Factor Authentication Server\Data\PhoneFactor.pfdata (default location) on your primary MFA Server. Make sure you have a copy of the installer for your currently installed version in case you need to roll back. If you no longer have a copy, contact Customer Support Services.
+
+Depending on user activity, the data file can become outdated quickly. Any changes made to MFA Server, or any end-user changes made through the portal after the backup won't be captured. If you roll back, any changes made after this point won't be restored.
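+
+For example, a simple file copy from an elevated PowerShell prompt is enough to capture the data file before you upgrade (the backup destination below is only an example):
+
+```powershell
+# Back up the MFA Server data file (default location) before installing the update.
+$dataFile = "$env:ProgramFiles\Multi-Factor Authentication Server\Data\PhoneFactor.pfdata"
+New-Item -ItemType Directory -Path "C:\MfaServerBackup" -Force | Out-Null
+Copy-Item -Path $dataFile -Destination "C:\MfaServerBackup\PhoneFactor.pfdata"
+```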
+
+### Install MFA Server update
+Run the new installer on the Primary MFA Server. Before you upgrade a server, remove it from load balancing or traffic sharing with other MFA Servers. You don't need to uninstall your current MFA Server before running the installer. The installer performs an in-place upgrade using the current installation path (for example, C:\Program Files\Multi-Factor Authentication Server). If you're prompted to install a Microsoft Visual C++ 2015 Redistributable update package, accept the prompt. Both the x86 and x64 versions of the package are installed. It isn't required to install updates for User portal, Web SDK, or AD FS Adapter.
+
+After the installation is complete, it can take several minutes for the datafile to be upgraded. During this time, the User portal may have issues connecting to the MFA Service. **Don't restart the MFA Service or the MFA Server during this time.** This behavior is normal. Once the upgrade is complete, the primary server's main service will again be functional.
+
+You can check \Program Files\Multi-Factor Authentication Server\Logs\MultiFactorAuthSvc.log to make sure the upgrade is complete. You should see **Completed performing tasks to upgrade from 23 to 24**.
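+
+For example, you can watch for the completion entry from PowerShell (log path as noted above):
+
+```powershell
+# Check the MFA Server service log for the upgrade completion message.
+Select-String -Path "$env:ProgramFiles\Multi-Factor Authentication Server\Logs\MultiFactorAuthSvc.log" `
+    -Pattern "Completed performing tasks to upgrade"
+```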
+
+>[!NOTE]
>After you run the installer on your primary server, secondary servers may begin to log **Unhandled SB** entries. This is due to schema changes made on the primary server that will not be recognized by secondary servers. These errors are expected. In environments with 10,000 users or more, the number of log entries can increase significantly. To mitigate this issue, you can increase the file size of your MFA Server logs, or upgrade your secondary servers.
+
+### Configure the MFA Server Migration Utility
+After installing the MFA Server update, open an elevated PowerShell command prompt: hover over the PowerShell icon, right-click, and click **Run as Administrator**. Run the .\Configure-MultiFactorAuthMigrationUtility.ps1 script found in your MFA Server installation directory (C:\Program Files\Multi-factor Authentication Server by default).
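+
+For example, assuming the default installation path:
+
+```powershell
+# Run the configuration script from an elevated PowerShell prompt.
+Set-Location "C:\Program Files\Multi-Factor Authentication Server"
+.\Configure-MultiFactorAuthMigrationUtility.ps1
+```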
+
+This script will require you to provide credentials for an Application Administrator in your Azure AD tenant. The script will then create a new MFA Server Migration Utility application within Azure AD, which will be used to write user authentication methods to each Azure AD user object.
+
+For government cloud customers who wish to carry out migrations, replace ".com" entries in the script with ".us". The script will then write the StsUrl and GraphUrl registry entries under HKLM:\SOFTWARE\WOW6432Node\Positive Networks\PhoneFactor and instruct the Migration Utility to use the appropriate Graph endpoints.
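+
+After the script runs, you can confirm which endpoints the Migration Utility will use by reading those registry values (a quick check only; the path and value names are the ones listed above):
+
+```powershell
+# Inspect the endpoints the Migration Utility is configured to use.
+Get-ItemProperty -Path 'HKLM:\SOFTWARE\WOW6432Node\Positive Networks\PhoneFactor' |
+    Select-Object StsUrl, GraphUrl
+```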
+
+You'll also need access to the following URLs:
+
+- `https://graph.microsoft.com/*` (or `https://graph.microsoft.us/*` for government cloud customers)
+- `https://login.microsoftonline.com/*` (or `https://login.microsoftonline.us/*` for government cloud customers)
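+
+A quick way to confirm outbound access from the server is a TCP test against each endpoint (an informal check; a successful connection on port 443 is enough). Government cloud customers would test the `.us` hosts instead:
+
+```powershell
+# Confirm the MFA Server can reach the required endpoints over HTTPS.
+Test-NetConnection graph.microsoft.com -Port 443
+Test-NetConnection login.microsoftonline.com -Port 443
+```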
+
+The script will instruct you to grant admin consent to the newly created application. Navigate to the URL provided, or within the Azure AD portal, click **App registrations**, find and select the **MFA Server Migration Utility** app, click **API permissions**, and then grant the appropriate permissions.
++
+Once complete, navigate to the Multi-factor Authentication Server folder, and open the **MultiFactorAuthMigrationUtilityUI** application. You should see the following screen:
++
+You've successfully installed the Migration Utility.
+
+### Migrate user data
+Migrating user data doesn't remove or alter any data in the Multi-Factor Authentication Server database. Likewise, this process won't change where a user performs MFA. This process is a one-way copy of data from the on-premises server to the corresponding user object in Azure AD.
+
+The MFA Server Migration utility targets a single Azure AD group for all migration activities. You can add users directly to this group, or add other groups. You can also add them in stages during the migration.
+
+To begin the migration process, enter the name or GUID of the Azure AD group you want to migrate. Once complete, press Tab or click outside the window and the utility will begin searching for the appropriate group. The window will populate all users in the group. A large group can take several minutes to finish.
+
+To view user attribute data for a user, highlight the user, and select **View**:
++
+This window displays the attributes for the selected user in both Azure AD and the on-premises MFA Server. You can use this window to view how data was written to a user after they've been migrated.
+
+The settings option allows you to change the settings for the migration process:
++
+- Migrate – This setting allows you to specify which method(s) should be migrated for the selection of users
+- User Match – Allows you to specify a different attribute for matching users instead of the default UPN matching
+- Automatic synchronization – Starts a background service that continually monitors any authentication method changes for users in the on-premises MFA Server and writes them to Azure AD at the specified time interval
+
+The migration process can be automatic or manual.
+
+The manual process steps are:
+
+1. To begin the migration process for a user or selection of multiple users, press and hold the Ctrl key while selecting each of the user(s) you wish to migrate.
+1. After you select the desired users, click **Migrate Users** > **Selected users** > **OK**.
+1. To migrate all users in the group, click **Migrate Users** > **All users in AAD group** > **OK**.
+
+For the automatic process, click **Automatic synchronization** in the settings dialog, and then select whether you want all users to be synced, or only members of a given Azure AD group.
+
+The following table lists the sync logic for the various methods.
+
+| Method | Logic |
+|--|-|
+|**Phone** |If there's no extension, update MFA phone.<br>If there's an extension, update Office phone.<br> Exception: If the default method is Text Message, drop extension and update MFA phone.|
+|**Backup Phone**|If there's no extension, update Alternate phone.<br>If there's an extension, update Office phone.<br>Exception: If both Phone and Backup Phone have an extension, skip Backup Phone.|
+|**Mobile App**|Maximum of five devices will be migrated or only four if the user also has a hardware OATH token.<br>If there are multiple devices with the same name, only migrate the most recent one.<br>Devices will be ordered from newest to oldest.<br>If devices already exist in Azure AD, match on OATH Token Secret Key and update.<br>- If there's no match on OATH Token Secret Key, match on Device Token<br>-- If found, create a Software OATH Token for the MFA Server device to allow OATH Token method to work. Notifications will still work using the existing Azure AD MFA device.<br>-- If not found, create a new device.<br>If adding a new device will exceed the five-device limit, the device will be skipped. |
+|**OATH Token**|If devices already exist in Azure AD, match on OATH Token Secret Key and update.<br>- If not found, add a new Hardware OATH Token device.<br>If adding a new device will exceed the five-device limit, the OATH token will be skipped.|
+
+MFA Methods will be updated based on what was migrated and the default method will be set. MFA Server will track the last migration timestamp and only migrate the user again if the user's MFA settings change or an admin modifies what to migrate in the **Settings** dialog.
+
+During testing, we recommend doing a manual migration first, and testing to ensure a given number of users behave as expected. Once testing is successful, turn on automatic synchronization for the Azure AD group you wish to migrate. As you add users to this group, their information will be automatically synchronized to Azure AD. The MFA Server Migration Utility targets one Azure AD group; however, that group can encompass both users and nested groups of users.
+
+Once complete, a confirmation will inform you of the tasks completed:
++
+As mentioned in the confirmation message, it can take several minutes for the migrated data to appear on user objects within Azure AD. Users can view their migrated methods by navigating to [aka.ms/mfasetup](https://aka.ms/mfasetup).
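+
+Admins can also spot-check a migrated user from PowerShell. The following is a minimal sketch; it assumes the Microsoft Graph PowerShell SDK is installed and uses `user@contoso.com` as a placeholder UPN:
+
+```powershell
+# List the authentication methods now registered on the user object in Azure AD.
+Connect-MgGraph -Scopes "UserAuthenticationMethod.Read.All"
+Get-MgUserAuthenticationMethod -UserId "user@contoso.com"
+```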
+
+### Validate and test
+
+Once you've successfully migrated user data, you can validate the end-user experience using Staged Rollout before making the global tenant change. The following process will allow you to target specific Azure AD group(s) for Staged Rollout for MFA. Staged Rollout tells Azure AD to perform MFA by using Azure AD MFA for users in the targeted groups, rather than sending them on-premises to perform MFA. You can validate and test by using the Azure portal (recommended) or, if you prefer, Microsoft Graph.
+
+#### Enable Staged Rollout using Azure portal
+
+1. Navigate to the following url: [Enable staged rollout features - Microsoft Azure](https://portal.azure.com/?mfaUIEnabled=true%2F#view/Microsoft_AAD_IAM/StagedRolloutEnablementBladeV2).
+
+1. Change **Azure multifactor authentication (preview)** to **On**, and then click **Manage groups**.
+
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/staged-rollout.png" alt-text="Screenshot of Staged Rollout.":::
+
+1. Click **Add groups** and add the group(s) containing users you wish to enable for Azure MFA. Selected groups appear in the displayed list.
+
+ >[!NOTE]
+ >Any groups you target using the Microsoft Graph method below will also appear in this list.
+
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/managed-groups.png" alt-text="Screenshot of Manage Groups menu.":::
+
+#### Enable Staged Rollout using Microsoft Graph
+
+1. Create the featureRolloutPolicy
+ 1. Navigate to [aka.ms/ge](https://aka.ms/ge) and sign in to Graph Explorer using a Hybrid Identity Administrator account in the tenant you wish to set up for Staged Rollout.
+ 1. Ensure POST is selected targeting the following endpoint:
+ `https://graph.microsoft.com/v1.0/policies/featureRolloutPolicies`
+ 1. The body of your request should contain the following (change **MFA rollout policy** to a name and description for your organization):
+
+ ```msgraph-interactive
+ {
+ "displayName": "MFA rollout policy",
+ "description": "MFA rollout policy",
+ "feature": "multiFactorAuthentication",
+ "isEnabled": true,
+ "isAppliedToOrganization": false
+ }
+ ```
+
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/body.png" alt-text="Screenshot of request.":::
+
+ 1. Perform a GET with the same endpoint and make note of the **ID** value (crossed out in the following image):
+
+ :::image type="content" border="true" source="./media/how-to-mfa-server-migration-utility/get.png" alt-text="Screenshot of GET command.":::
+
+1. Target the Azure AD group(s) that contain the users you wish to test
+ 1. Create a POST request with the following endpoint (replace {ID of policy} with the **ID** value you copied from step 1d):
+
+ `https://graph.microsoft.com/v1.0/policies/featureRolloutPolicies/{ID of policy}/appliesTo/$ref`
+
+ 1. The body of the request should contain the following (replace {ID of group} with the object ID of the group you wish to target for staged rollout):
+
+ ```msgraph-interactive
+ {
+ "@odata.id": "https://graph.microsoft.com/v1.0/directoryObjects/{ID of group}"
+ }
+ ```
+
+ 1. Repeat steps a and b for any other groups you wish to target with staged rollout.
+ 1. You can view the current policy in place by doing a GET against the following URL:
+
+ `https://graph.microsoft.com/v1.0/policies/featureRolloutPolicies/{policyID}?$expand=appliesTo`
+
+ The preceding process uses the [featureRolloutPolicy resource](/graph/api/resources/featurerolloutpolicy?view=graph-rest-1.0&preserve-view=true). The public documentation hasn't yet been updated with the new multifactorAuthentication feature, but it has detailed information on how to interact with the API.
+
+1. Confirm the end-user MFA experience. Here are a few things to check:
+ 1. Do users see their methods in [aka.ms/mfasetup](https://aka.ms/mfasetup)?
+ 1. Do users receive phone calls/text messages?
+ 1. Are they able to successfully authenticate using the above methods?
+ 1. Do users successfully receive Authenticator notifications? Are they able to approve these notifications? Is authentication successful?
+ 1. Are users able to authenticate successfully using Hardware OATH tokens?
+
+### Educate users
+Ensure users know what to expect when they're moved to Azure MFA, including new authentication flows. You may also wish to instruct users to use the Azure AD Combined Registration portal ([aka.ms/mfasetup](https://aka.ms/mfasetup)) to manage their authentication methods rather than the User portal once migrations are complete. Any changes made to authentication methods in Azure AD won't propagate back to your on-premises environment. In a situation where you had to roll back to MFA Server, any changes users have made in Azure AD won't be available in the MFA Server User portal.
+
+If you use third-party solutions that depend on Azure MFA Server for authentication (see [Authentication services](#authentication-services)), you'll want users to continue to make changes to their MFA methods in the User portal. These changes will be synced to Azure AD automatically. Once you've migrated these third-party solutions, you can move users to the Azure AD combined registration page.
+
+### Complete user migration
+Repeat migration steps found in [Migrate user data](#migrate-user-data) and [Validate and test](#validate-and-test) sections until all user data is migrated.
+
+### Migrate MFA Server dependencies
+Using the data points you collected in [Authentication services](#authentication-services), begin carrying out the various migrations necessary. Once this is completed, consider having users manage their authentication methods in the combined registration portal, rather than in the User portal on MFA server.
+
+### Update domain federation settings
+Once you've completed user migrations and moved all of your [Authentication services](#authentication-services) off of MFA Server, it's time to update your domain federation settings. After the update, Azure AD no longer sends MFA requests to your on-premises federation server.
+
+To configure Azure AD to ignore MFA requests to your on-premises federation server, install the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-&preserve-view=true) and set [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) to `rejectMfaByFederatedIdp`, as shown in the following example.
+
+#### Request
+<!-- {
+ "blockType": "request",
+ "name": "update_internaldomainfederation"
+}
+-->
+``` http
+PATCH https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/6601d14b-d113-8f64-fda2-9b5ddda18ecc
+Content-Type: application/json
+{
+ "federatedIdpMfaBehavior": "rejectMfaByFederatedIdp"
+}
+```
++
+#### Response
+>**Note:** The response object shown here might be shortened for readability.
+<!-- {
+ "blockType": "response",
+ "truncated": true,
+ "@odata.type": "microsoft.graph.internalDomainFederation"
+}
+-->
+``` http
+HTTP/1.1 200 OK
+Content-Type: application/json
+{
+ "@odata.type": "#microsoft.graph.internalDomainFederation",
+ "id": "6601d14b-d113-8f64-fda2-9b5ddda18ecc",
+ "issuerUri": "http://contoso.com/adfs/services/trust",
+ "metadataExchangeUri": "https://sts.contoso.com/adfs/services/trust/mex",
+ "signingCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI",
+ "passiveSignInUri": "https://sts.contoso.com/adfs/ls",
+ "preferredAuthenticationProtocol": "wsFed",
+ "activeSignInUri": "https://sts.contoso.com/adfs/services/trust/2005/usernamemixed",
+ "signOutUri": "https://sts.contoso.com/adfs/ls",
+ "promptLoginBehavior": "nativeSupport",
+ "isSignedAuthenticationRequestRequired": true,
+ "nextSigningCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI",
+ "signingCertificateUpdateStatus": {
+ "certificateUpdateResult": "Success",
+ "lastRunDateTime": "2021-08-25T07:44:46.2616778Z"
+ },
+ "federatedIdpMfaBehavior": "rejectMfaByFederatedIdp"
+}
+```
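+
+If you prefer to stay in PowerShell after installing the Microsoft Graph PowerShell SDK, the same change can be made with a short script. This is a sketch only; it assumes the `Update-MgDomainFederationConfiguration` cmdlet from the `Microsoft.Graph.Identity.DirectoryManagement` module, and the domain name and `Domain.ReadWrite.All` scope are placeholders/assumptions:
+
+```powershell
+# Sketch: look up the federation configuration for the domain, then set
+# federatedIdpMfaBehavior so Azure AD performs all MFA itself.
+Connect-MgGraph -Scopes "Domain.ReadWrite.All"
+
+$domainId  = "contoso.com"
+$fedConfig = Get-MgDomainFederationConfiguration -DomainId $domainId
+
+Update-MgDomainFederationConfiguration -DomainId $domainId `
+    -InternalDomainFederationId $fedConfig.Id `
+    -BodyParameter @{ federatedIdpMfaBehavior = "rejectMfaByFederatedIdp" }
+```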
+
+Set the **Staged Rollout for Azure MFA** to **Off**. Users will once again be redirected to your on-premises federation server for MFA.
+
+>[!NOTE]
+>The update of the domain federation setting can take up to 24 hours to take effect.
+
+### Optional: Disable MFA Server User portal
+Once you've completed migrating all user data, end users can begin using the Azure AD combined registration pages to manage MFA methods. There are a couple of ways to prevent users from using the User portal in MFA Server:
+
+- Redirect your MFA Server User portal URL to [aka.ms/mfasetup](https://aka.ms/mfasetup)
+- Clear the **Allow users to log in** checkbox under the **Settings** tab in the User portal section of MFA Server to prevent users from logging into the portal altogether.
+
+### Decommission MFA Server
+
+When you no longer need the Azure MFA server, follow your normal server deprecation practices. No special action is required in Azure AD to indicate MFA Server retirement.
+
+## Rollback plan
+
+If the upgrade had issues, follow these steps to roll back:
+
+1. Uninstall MFA Server 8.1.
+1. Replace PhoneFactor.pfdata with the backup made before upgrading.
+
+ >[!NOTE]
+ >Any changes made since the backup will be lost, but they should be minimal if the backup was taken right before an unsuccessful upgrade.
+
+1. Run the installer for your previous version (for example, 8.0.x.x).
+1. Configure Azure AD to accept MFA requests to your on-premises federation server. Use Graph PowerShell to set [federatedIdpMfaBehavior](/graph/api/resources/internaldomainfederation?view=graph-rest-1.0#federatedidpmfabehavior-values&preserve-view=true) to `enforceMfaByFederatedIdp`, as shown in the following example.
+
+ **Request**
+ <!-- {
+ "blockType": "request",
+ "name": "update_internaldomainfederation"
+ }
+ -->
+ ``` http
+ PATCH https://graph.microsoft.com/beta/domains/contoso.com/federationConfiguration/6601d14b-d113-8f64-fda2-9b5ddda18ecc
+ Content-Type: application/json
+ {
+ "federatedIdpMfaBehavior": "enforceMfaByFederatedIdp"
+ }
+ ```
+
+ The following response object is shortened for readability.
+
+ **Response**
+
+ <!-- {
+ "blockType": "response",
+ "truncated": true,
+ "@odata.type": "microsoft.graph.internalDomainFederation"
+ }
+ -->
+ ``` http
+ HTTP/1.1 200 OK
+ Content-Type: application/json
+ {
+ "@odata.type": "#microsoft.graph.internalDomainFederation",
+ "id": "6601d14b-d113-8f64-fda2-9b5ddda18ecc",
+ "issuerUri": "http://contoso.com/adfs/services/trust",
+ "metadataExchangeUri": "https://sts.contoso.com/adfs/services/trust/mex",
+ "signingCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI",
+ "passiveSignInUri": "https://sts.contoso.com/adfs/ls",
+ "preferredAuthenticationProtocol": "wsFed",
+ "activeSignInUri": "https://sts.contoso.com/adfs/services/trust/2005/usernamemixed",
+ "signOutUri": "https://sts.contoso.com/adfs/ls",
+ "promptLoginBehavior": "nativeSupport",
+ "isSignedAuthenticationRequestRequired": true,
+ "nextSigningCertificate": "MIIE3jCCAsagAwIBAgIQQcyDaZz3MI",
+ "signingCertificateUpdateStatus": {
+ "certificateUpdateResult": "Success",
+ "lastRunDateTime": "2021-08-25T07:44:46.2616778Z"
+ },
+ "federatedIdpMfaBehavior": "enforceMfaByFederatedIdp"
+ }
+ ```
+
+Users will no longer be redirected to your on-premises federation server for MFA, whether they're targeted by the Staged Rollout tool or not. Note that this change can take up to 24 hours to take effect.
++
+## Next steps
+
+- [Overview of how to migrate from MFA Server to Azure AD Multi-Factor Authentication](how-to-migrate-mfa-server-to-azure-mfa.md)
+- [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md)
active-directory How To Migrate Mfa Server To Azure Mfa User Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md
Previously updated : 04/07/2022 Last updated : 08/30/2022
Multi-factor authentication (MFA) helps secure your infrastructure and assets fr
Microsoft's Multi-Factor Authentication Server (MFA Server) is no longer offered for new deployments. Customers who are using MFA Server should move to Azure AD Multi-Factor Authentication (Azure AD MFA).
-There are several options for migrating your multi-factor authentication (MFA) from MFA Server to Azure Active Directory (Azure AD).
-These include:
+There are several options for migrating from MFA Server to Azure Active Directory (Azure AD):
* Good: Moving only your [MFA service to Azure AD](how-to-migrate-mfa-server-to-azure-mfa.md). * Better: Moving your MFA service and user authentication to Azure AD, covered in this article.
This process enables the iterative migration of users from MFA Server to Azure M
Each step is explained in the subsequent sections of this article. >[!NOTE]
->If you are planning on moving any applications to Azure Active Directory as a part of this migration, you should do so prior to your MFA migration. If you move all of your apps, you can skip sections of the MFA migration process. See the section on moving applications at the end of this article.
+>If you're planning on moving any applications to Azure Active Directory as a part of this migration, you should do so prior to your MFA migration. If you move all of your apps, you can skip sections of the MFA migration process. See the section on moving applications at the end of this article.
## Process to migrate to Azure AD and user authentication
Each step is explained in the subsequent sections of this article.
## Prepare groups and Conditional Access Groups are used in three capacities for MFA migration.
-* **To iteratively move users to Azure AD MFA with staged rollout.**
-Use a group created in Azure AD, also known as a cloud-only group. You can use Azure AD security groups or Microsoft 365 Groups for both moving users to MFA and for Conditional Access policies. For more information see creating an Azure AD security group, and this overview of Microsoft 365 Groups for administrators.
+
+* **To iteratively move users to Azure AD MFA with Staged Rollout.**
+
+ Use a group created in Azure AD, also known as a cloud-only group. You can use Azure AD security groups or Microsoft 365 Groups for both moving users to MFA and for Conditional Access policies.
+ >[!IMPORTANT]
- >Nested and dynamic groups are not supported in the staged rollout process. Do not use these types of groups for your staged rollout effort.
+ >Nested and dynamic groups aren't supported for Staged Rollout. Don't use these types of groups.
+ * **Conditional Access policies**.
-You can use either Azure AD or on-premises groups for conditional access.
+ You can use either Azure AD or on-premises groups for conditional access.
+ * **To invoke Azure AD MFA for AD FS applications with claims rules.**
-This applies only if you have applications on AD FS.
-This must be an on-premises Active Directory security group. Once Azure AD MFA is an additional authentication method, you can designate groups of users to use that method on each relying party trust. For example, you can call Azure AD MFA for users you have already migrated, and MFA Server for those not yet migrated. This is helpful both in testing, and during migration.
+ This step applies only if you use applications with AD FS.
+
+ You must use an on-premises Active Directory security group. Once Azure AD MFA is an additional authentication method, you can designate groups of users to use that method on each relying party trust. For example, you can call Azure AD MFA for users you already migrated, and MFA Server for users who aren't migrated yet. This strategy is helpful both in testing and during migration.
>[!NOTE]
->We do not recommend that you reuse groups that are used for security. When using a security group to secure a group of high-value apps via a Conditional Access policy, that should be the only use of that group.
+>We don't recommend that you reuse groups that are used for security. Only use the security group to secure a group of high-value apps with a Conditional Access policy.
### Configure Conditional Access policies
-If you are already using Conditional Access to determine when users are prompted for MFA, you won't need any changes to your policies.
-As users are migrated to cloud authentication, they will start using Azure AD MFA as defined by your existing Conditional Access policies.
+If you're already using Conditional Access to determine when users are prompted for MFA, you won't need any changes to your policies.
+As users are migrated to cloud authentication, they'll start using Azure AD MFA as defined by your existing Conditional Access policies.
They won't be redirected to AD FS and MFA Server anymore.
-If your federated domain(s) have the **federatedIdpMfaBehavior** set to `enforceMfaByFederatedIdp` or **SupportsMfa** flag set to `$True` (the **federatedIdpMfaBehavior** overrides **SupportsMfa** when both are set), you are likely enforcing MFA on AD FS using claims rules.
-In this case, you will need to analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals.
+If your federated domains have the **federatedIdpMfaBehavior** set to `enforceMfaByFederatedIdp` or **SupportsMfa** flag set to `$True` (the **federatedIdpMfaBehavior** overrides **SupportsMfa** when both are set), you're likely enforcing MFA on AD FS by using claims rules.
+In this case, you'll need to analyze your claims rules on the Azure AD relying party trust and create Conditional Access policies that support the same security goals.
-If you need to configure Conditional Access policies, you need to do so before enabling staged rollout.
+If necessary, configure Conditional Access policies before you enable Staged Rollout.
For more information, see the following resources:+ * [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md) * [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md) ## Prepare AD FS
-If you do not have any applications in AD FS that require MFA, you can skip this section and go to the section Prepare staged rollout.
+If you don't have any applications in AD FS that require MFA, you can skip this section and go to the section [Prepare Staged Rollout](#prepare-staged-rollout).
### Upgrade AD FS server farm to 2019, FBL 4
-In AD FS 2019, Microsoft released new functionality that provides the ability to specify additional authentication methods for a relying party, such as an application.
-This is done by using group membership to determine the authentication provider.
+In AD FS 2019, Microsoft released new functionality to help specify additional authentication methods for a relying party, such as an application.
+You can specify an additional authentication method by using group membership to determine the authentication provider.
By specifying an additional authentication method, you can transition to Azure AD MFA while keeping other authentication intact during the transition. For more information, see [Upgrading to AD FS in Windows Server 2016 using a WID database](/windows-server/identity/ad-fs/deployment/upgrading-to-ad-fs-in-windows-server).
The article covers both upgrading your farm to AD FS 2019 and upgrading your FBL
### Configure claims rules to invoke Azure AD MFA
-Now that you have Azure AD MFA as an additional authentication method, you can assign groups of users to use Azure AD MFA. You do this by configuring claims rules, also known as *relying party trusts*. By using groups, you can control which authentication provider is called either globally or by application. For example, you can call Azure AD MFA for users who have registered for combined security information or had their phone numbers migrated, while calling MFA Server for those who have not.
+Now that Azure AD MFA is an additional authentication method, you can assign groups of users to use Azure AD MFA by configuring claims rules, also known as *relying party trusts*. By using groups, you can control which authentication provider is called either globally or by application. For example, you can call Azure AD MFA for users who registered for combined security information or had their phone numbers migrated, while calling MFA Server for users whose phone numbers haven't migrated.
>[!NOTE] >Claims rules require an on-premises security group.
Now that you have Azure AD MFA as an additional authentication method, you can a
#### Back up existing rules Before configuring new claims rules, back up your existing rules.
-You will need to restore these as a part of your clean up steps.
+You'll need to restore claims rules as a part of your cleanup steps.
Depending on your configuration, you may also need to copy the existing rule and append the new rules being created for the migration. To view existing global rules, run: + ```powershell Get-AdfsAdditionalAuthenticationRule ```
This command will move the logic from your current Access Control Policy into Ad
#### Set up the group, and find the SID You will need to have a specific group in which you place users for whom you want to invoke Azure AD MFA. You will need to find the security identifier (SID) for that group.
-To find the group SID use the following command, with your group name
-`Get-ADGroup “GroupName”`
+To find the group SID, run the following command and replace `GroupName` with your group name:
+
+```powershell
+Get-ADGroup GroupName
+```
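+
+To capture just the SID value for use in the claims rules that follow, a one-liner like this works (`GroupName` is a placeholder):
+
+```powershell
+# Store the group SID so it can be pasted into the additional authentication rules.
+$groupSid = (Get-ADGroup "GroupName").SID.Value
+$groupSid
+```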
![PowerShell command to get the group SID.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/find-the-sid.png) #### Setting the claims rules to call Azure MFA
-The following PowerShell cmdlets invoke Azure AD MFA for those in the group when they aren't on the corporate network.
-You must replace "YourGroupSid" with the SID found by running the preceding cmdlet.
+The following PowerShell cmdlets invoke Azure AD MFA for users in the group when they aren't on the corporate network.
+You must replace `"YourGroupSid"` with the SID found by running the preceding cmdlet.
Make sure you review the [How to Choose Additional Auth Providers in 2019](/windows-server/identity/ad-fs/overview/whats-new-active-directory-federation-services-windows-server#how-to-choose-additional-auth-providers-in-2019).
Value=="YourGroupSid"]) => issue(Type =
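
The exact rule text depends on your existing access control policies. As an illustration only (not this article's exact rule set), a per-application rule that calls Azure AD MFA for group members connecting from outside the corporate network could look like the following, where `AppA` and `YourGroupSid` are placeholders:

```powershell
# Illustrative sketch: require the Azure AD MFA provider for members of the group
# (matched by SID) when the request doesn't come from inside the corporate network.
Set-AdfsRelyingPartyTrust -TargetName "AppA" -AdditionalAuthenticationRules @'
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "YourGroupSid"] &&
c1:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork", Value == "false"]
 => issue(Type = "http://schemas.microsoft.com/claims/authnmethodsproviders", Value = "AzureMfaAuthentication");
'@
```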
### Configure Azure AD MFA as an authentication provider in AD FS In order to configure Azure AD MFA for AD FS, you must configure each AD FS server.
-If you have multiple AD FS servers in your farm, you can configure them remotely using Azure AD PowerShell.
+If multiple AD FS servers are in your farm, you can configure them remotely using Azure AD PowerShell.
For step-by-step directions on this process, see [Configure the AD FS servers](/windows-server/identity/ad-fs/operations/configure-ad-fs-and-azure-mfa#configure-the-ad-fs-servers).
-Once you have configured the servers, you can add Azure AD MFA as an additional authentication method.
+After you configure the servers, you can add Azure AD MFA as an additional authentication method.
![Screenshot of how to add Azure AD MFA as an additional authentication method.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/edit-authentication-methods.png)
-## Prepare staged rollout
+## Prepare Staged Rollout
-Now you are ready to enable the staged rollout feature. Staged rollout helps you to iteratively move your users to either PHS or PTA.
+Now you're ready to enable [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md). Staged Rollout helps you to iteratively move your users to either PHS or PTA while also migrating their on-premises MFA settings.
* Be sure to review the [supported scenarios](../hybrid/how-to-connect-staged-rollout.md#supported-scenarios).
-* First you will need to do either the [prework for PHS](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-password-hash-sync) or the [prework for PTA](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-pass-through-authentication). We recommend PHS.
-* Next you will do the [prework for seamless SSO](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-seamless-sso).
-* [Enable the staged rollout of cloud authentication](../hybrid/how-to-connect-staged-rollout.md#enable-a-staged-rollout-of-a-specific-feature-on-your-tenant) for your selected authentication method.
-* Add the group(s) you created for staged rollout. Remember that you will add users to groups iteratively, and that they cannot be dynamic groups or nested groups.
+* First, you'll need to do either the [prework for PHS](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-password-hash-sync) or the [prework for PTA](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-pass-through-authentication). We recommend PHS.
+* Next, you'll do the [prework for seamless SSO](../hybrid/how-to-connect-staged-rollout.md#pre-work-for-seamless-sso).
+* [Enable the Staged Rollout of cloud authentication](../hybrid/how-to-connect-staged-rollout.md#enable-a-staged-rollout-of-a-specific-feature-on-your-tenant) for your selected authentication method.
+* Add the group(s) you created for Staged Rollout. Remember that you'll add users to groups iteratively, and that they can't be dynamic groups or nested groups.
## Register users for Azure MFA
There are two ways to register users for Azure MFA:
* Register for combined security (MFA and self-service-password reset) * Migrate phone numbers from MFA Server
-The Microsoft Authenticator app can be used as a passwordless sign in method as well as a second factor for MFA with either method.
+Microsoft Authenticator can be used as a passwordless sign-in method and a second factor for MFA with either method.
### Register for combined security registration (recommended) We recommend having your users register for combined security information, which is a single place to register their authentication methods and devices for both MFA and SSPR.
-While it is possible to migrate data from the MFA Server to Azure AD MFA, the following challenges occur:
+While it's possible to migrate data from the MFA Server to Azure AD MFA, you face these challenges:
* Only phone numbers can be migrated. * Authenticator apps will need to be reregistered. * Stale data can be migrated. Microsoft provides communication templates that you can provide to your users to guide them through the combined registration process.
-These include templates for email, posters, table tents, and a variety of other assets. Users register their information at `https://aka.ms/mysecurityinfo`, which takes them to the combined security registration screen.
+These include templates for email, posters, table tents, and various other assets. Users register their information at `https://aka.ms/mysecurityinfo`, which takes them to the combined security registration screen.
We recommend that you [secure the security registration process with Conditional Access](../conditional-access/howto-conditional-access-policy-registration.md) that requires the registration to occur from a trusted device or location. For information on tracking registration statuses, see [Authentication method activity for Azure Active Directory](howto-authentication-methods-activity.md). > [!NOTE] > Users who MUST register their combined security information from a non-trusted location or device can be issued a Temporary Access Pass or alternatively, temporarily excluded from the policy.
-### Migrate phone numbers from MFA Server
+### Migrate MFA settings from MFA Server
-While you can migrate users' registered MFA phone numbers and hardware tokens, you cannot migrate device registrations such as their Microsoft Authenticator app settings.
-Migrating phone numbers can lead to stale numbers being migrated, and make users more likely to stay on phone-based MFA instead of setting up more secure methods like [passwordless sign-in with the Microsoft Authenticator app](howto-authentication-passwordless-phone.md).
-We therefore recommend that regardless of the migration path you choose, that you have all users register for [combined security information](howto-registration-mfa-sspr-combined.md).
-Combined security information enables users to also register for self-service password reset.
+You can use the [MFA Server Migration utility](how-to-mfa-server-migration-utility.md) to synchronize registered MFA settings for users from MFA Server to Azure AD.
+You can synchronize phone numbers, hardware tokens, and device registrations such as Microsoft Authenticator app settings.
-If having users register their combined security information is not an option, it is possible to export the users along with their phone numbers from MFA Server and import the phone numbers into Azure AD.
+### Migrate phone numbers from MFA Server
+
+If you only want to migrate registered MFA phone numbers, you can export the users along with their phone numbers from MFA Server and import the phone numbers into Azure AD.
#### Export user phone numbers from MFA Server 1. Open the Multi-Factor Authentication Server admin console on the MFA Server. 1. Select **File** > **Export Users**.
-3) Save the CSV file. The default name is Multi-Factor Authentication Users.csv.
+1. Save the .csv file. The default name is Multi-Factor Authentication Users.csv.
#### Interpret and format the .csv file
-The .csv file contains a number of fields not necessary for migration and will need to be edited and formatted prior to importing the phone numbers into Azure AD.
+The .csv file contains many fields not necessary for migration and will need to be edited and formatted prior to importing the phone numbers into Azure AD.
-When opening the .csv file, columns of interest include Username, Primary Phone, Primary Country Code, Backup Country Code, Backup Phone, Backup Extension. You must interpret this data and format it, as necessary.
+In the .csv file, columns of interest include Username, Primary Phone, Primary Country Code, Backup Country Code, Backup Phone, Backup Extension. You must interpret this data and format it, as necessary.
#### Tips to avoid errors during import
-* The CSV file will need to be modified prior to using the Authentication Methods API to import the phone numbers into Azure AD.
+* The .csv file will need to be modified prior to using the Authentication Methods API to import the phone numbers into Azure AD.
* We recommend simplifying the .csv to three columns: UPN, PhoneType, and PhoneNumber (see the sample after this list). ![Screenshot of a csv example.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/csv-example.png)
-* Make sure the exported MFA Server Username matches the Azure AD UserPrincipalName. If it does not, update the username in the CSV file to match what is in Azure AD, otherwise the user will not be found.
+* Make sure the exported MFA Server Username matches the Azure AD UserPrincipalName. If it doesn't, update the username in the .csv file to match what is in Azure AD, otherwise the user won't be found.
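+
+For reference, you could generate a simplified sample file like this (UPNs and numbers are illustrative; phone types follow the Authentication Methods API values such as `mobile` and `alternateMobile`, and numbers use the `+<country code> <number>` format):
+
+```powershell
+# Write a sample myPhones.csv in the simplified three-column layout (illustrative values).
+@"
+UPN,PhoneType,PhoneNumber
+user1@contoso.com,mobile,+1 5555551234
+user2@contoso.com,alternateMobile,+44 5555551234
+"@ | Set-Content -Path .\myPhones.csv
+```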
Users may have already registered phone numbers in Azure AD.
-When importing the phone numbers using the Authentication Methods API, you must decide whether to overwrite the existing phone number or to add the imported number as an alternate phone number.
+When importing the phone numbers using the Authentication Methods API, you must decide whether to overwrite the existing phone number, or to add the imported number as an alternate phone number.
-The following PowerShell cmdlets takes the CSV file you supply and add the exported phone numbers as a phone number for each UPN using the Authentication Methods API. You must replace "myPhones" with the name of your CSV file.
+The following PowerShell commands take the .csv file you supply and add the exported phone numbers as a phone number for each UPN by using the Authentication Methods API. You must replace "myPhones" with the name of your .csv file.
```powershell
$csv = import-csv myPhones.csv
$csv|% { New-MgUserAuthenticationPhoneMethod -UserId $_.UPN -phoneType $_.PhoneType -phoneNumber $_.PhoneNumber}
```
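
Before running the import, connect to Microsoft Graph with a scope that can write authentication methods (the scope shown is the standard Authentication Methods permission; adjust to your environment):

```powershell
# Sign in with write access to user authentication methods before importing.
Connect-MgGraph -Scopes "UserAuthenticationMethod.ReadWrite.All"
```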
-For more information about managing users' authentication methods, see [Manage authentication methods for Azure AD Multi-Factor Authentication](howto-mfa-userdevicesettings.md).
+For more information about managing authentication methods, see [Manage authentication methods for Azure AD Multi-Factor Authentication](howto-mfa-userdevicesettings.md).
### Add users to the appropriate groups * If you created new conditional access policies, add the appropriate users to those groups. * If you created on-premises security groups for claims rules, add the appropriate users to those groups.
-* Only after you have added users to the appropriate conditional access rules, add users to the group that you created for staged rollout. Once done, they will begin to use the Azure authentication method that you selected (PHS or PTA) and Azure AD MFA when they are required to perform multi-factor authentication.
+* Only after you add users to the appropriate conditional access rules, add users to the group that you created for Staged Rollout. Once done, they'll begin to use the Azure authentication method that you selected (PHS or PTA) and Azure AD MFA when they are required to perform MFA.
> [!IMPORTANT]
-> Nested and dynamic groups are not supported in the staged rollout process. Do not use these types of groups.
+> Nested and dynamic groups aren't supported for Staged Rollout. Do not use these types of groups.
-We do not recommend that you reuse groups that are used for security. Therefore, if you are using a security group to secure a group of high-value apps via a Conditional Access policy, that should be the only use of that group.
+We don't recommend that you reuse groups that are used for security. Therefore, if you're using a security group to secure a group of high-value apps via a Conditional Access policy, that should be the only use of that group.
## Monitoring
-A number of [Azure Monitor workbooks](../reports-monitoring/howto-use-azure-monitor-workbooks.md) and usage & insights reports are available to monitor your deployment.
-These can be found in Azure AD in the navigation pane under **Monitoring**.
+Many [Azure Monitor workbooks](../reports-monitoring/howto-use-azure-monitor-workbooks.md) and **Usage & Insights** reports are available to monitor your deployment.
+These reports can be found in Azure AD in the navigation pane under **Monitoring**.
-### Monitoring staged rollout
+### Monitoring Staged Rollout
In the [workbooks](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Workbooks) section, select **Public Templates**. Under the **Hybrid Auth** section, select the **Groups, Users and Sign-ins in Staged Rollout** workbook.
-This workbook can be used to monitor the following:
+This workbook can be used to monitor the following activities:
* Users and groups added to Staged Rollout.
* Users and groups removed from Staged Rollout.
-* Sign-in failures for users in staged rollout, and the reasons for failures.
+* Sign-in failures for users in Staged Rollout, and the reasons for failures.
### Monitoring Azure MFA registration

Azure MFA registration can be monitored using the [Authentication methods usage & insights report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/AuthenticationMethodsMenuBlade/AuthMethodsActivity/menuId/AuthMethodsActivity). This report can be found in Azure AD. Select **Monitoring**, then select **Usage & insights**.
Detailed Azure MFA registration information can be found on the **Registration** tab.
### Monitoring app sign-in health
-Monitor applications you have moved to Azure AD with the App sign-in health workbook or the application activity usage report.
+Monitor applications you moved to Azure AD with the App sign-in health workbook or the application activity usage report.
* **App sign-in health workbook**. See [Monitoring application sign-in health for resilience](../fundamentals/monitor-sign-in-health-for-resilience.md) for detailed guidance on using this workbook.
* **Azure AD application activity usage report**. This [report](https://portal.azure.com/#blade/Microsoft_AAD_IAM/UsageAndInsightsMenuBlade/Azure%20AD%20application%20activity) can be used to view successful and failed sign-ins for individual applications, and to drill down and view sign-in activity for a specific application.

## Clean up tasks
-Once you have moved all your users to Azure AD cloud authentication and Azure MFA, you should be ready to decommission your MFA Server.
+After you move all users to Azure AD cloud authentication and Azure MFA, you are ready to decommission your MFA Server.
We recommend reviewing MFA Server logs to ensure no users or applications are using it before you remove the server.

### Convert your domains to managed authentication
-You should now [convert your federated domains in Azure AD to managed](../hybrid/migrate-from-federation-to-cloud-authentication.md#convert-domains-from-federated-to-managed) and remove the staged rollout configuration.
-This ensures new users use cloud authentication without being added to the migration groups.
+You should now [convert your federated domains in Azure AD to managed](../hybrid/migrate-from-federation-to-cloud-authentication.md#convert-domains-from-federated-to-managed) and remove the Staged Rollout configuration.
+This conversion ensures new users use cloud authentication without being added to the migration groups.
### Revert claims rules on AD FS and remove MFA Server authentication provider
-Follow the steps under [Configure claims rules to invoke Azure AD MFA](#configure-claims-rules-to-invoke-azure-ad-mfa) to revert back to the backed up claims rules and remove any AzureMFAServerAuthentication claims rules.
+Follow the steps under [Configure claims rules to invoke Azure AD MFA](#configure-claims-rules-to-invoke-azure-ad-mfa) to revert the claims rules and remove any AzureMFAServerAuthentication claims rules.
-For example, remove the following from the rule(s):
+For example, remove the following section from the rule(s):
```console
c:[Type == "https://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid",
Value == "YourGroupSid"]
=> issue(Type = "AzureMfaServerAuthentication");
```

### Disable MFA Server as an authentication provider in AD FS

This change ensures only Azure MFA is used as an authentication provider.
Possible considerations when decommissioning the MFA Server include:
## Move application authentication to Azure Active Directory
-If you migrate all your application authentication along with your MFA and user authentication, you will be able to remove significant portions of your on-premises infrastructure, reducing costs and risks.
+If you migrate all your application authentication along with your MFA and user authentication, you'll be able to remove significant portions of your on-premises infrastructure, reducing costs and risks.
If you move all application authentication, you can skip the [Prepare AD FS](#prepare-ad-fs) stage and simplify your MFA migration. The process for moving all application authentication is shown in the following diagram.

![Process to migrate applications to Azure AD MFA.](media/how-to-migrate-mfa-server-to-azure-mfa-user-authentication/mfa-app-migration-flow.png)
-If it is not possible to move all your applications prior to the migration, move applications that you can before starting.
-For more information on migrating applications to Azure, see [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md).
+If you can't move all your applications before the migration, move as many as possible before you start.
+For more information about migrating applications to Azure, see [Resources for migrating applications to Azure Active Directory](../manage-apps/migration-resources.md).
## Next steps -- [Migrate from Microsoft MFA Server to Azure multi-factor authentication (Overview)](how-to-migrate-mfa-server-to-azure-mfa.md)
+- [Migrate from Microsoft MFA Server to Azure MFA (Overview)](how-to-migrate-mfa-server-to-azure-mfa.md)
- [Migrate applications from Windows Active Directory to Azure Active Directory](../manage-apps/migrate-application-authentication-to-azure-active-directory.md)
- [Plan your cloud authentication strategy](../fundamentals/active-directory-deployment-plans.md)
active-directory How To Migrate Mfa Server To Azure Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-migrate-mfa-server-to-azure-mfa.md
Previously updated : 06/23/2022 Last updated : 08/18/2022 -+
There are multiple possible end states to your migration, depending on your goal
| <br> | Goal: Decommission MFA Server ONLY | Goal: Decommission MFA Server and move to Azure AD Authentication | Goal: Decommission MFA Server and AD FS |
|---|---|---|---|
|MFA provider | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. | Change MFA provider from MFA Server to Azure AD Multi-Factor Authentication. |
-|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** Seamless Single Sign-On (SSO).| Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** SSO. |
+|User authentication |Continue to use federation for Azure AD authentication. | Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** Seamless single sign-on (SSO).| Move to Azure AD with Password Hash Synchronization (preferred) or Passthrough Authentication **and** SSO. |
|Application authentication | Continue to use AD FS authentication for your applications. | Continue to use AD FS authentication for your applications. | Move apps to Azure AD before migrating to Azure AD Multi-Factor Authentication. |

If you can, move both your multifactor authentication and your user authentication to Azure. For step-by-step guidance, see [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md).
Microsoft's MFA server can be integrated with many systems, and you must evalu
### Migrating MFA user information
-Common ways to think about moving users in batches include moving them by regions, departments, or roles such as administrators.
-Whichever strategy you choose, ensure that you move users iteratively, starting with test and pilot groups, and that you have a rollback plan in place.
+Common ways to think about moving users in batches include moving them by regions, departments, or roles such as administrators. You should move user accounts iteratively, starting with test and pilot groups, and make sure you have a rollback plan in place.
-While you can migrate usersΓÇÖ registered multifactor authentication phone numbers and hardware tokens, you can't migrate device registrations such as their Microsoft Authenticator app settings.
-Users will need to register and add a new account on the Authenticator app and remove the old account.
+You can use the [MFA Server Migration Utility](how-to-mfa-server-migration-utility.md) to synchronize MFA data stored in the on-premises Azure MFA Server to Azure AD MFA and use [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md) to reroute users to Azure MFA. Staged Rollout helps you test without making any changes to your domain federation settings.
To help users differentiate the newly added account from the old account linked to the MFA Server, make sure the Account name for the Mobile App on the MFA Server is named in a way that distinguishes the two accounts.
-For example, the Account name that appears under Mobile App on the MFA Server has been renamed to On-Premises MFA Server.
-The account name on the Authenticator App will change with the next push notification to the user.
+For example, the Account name that appears under Mobile App on the MFA Server has been renamed to **On-Premises MFA Server**.
+The account name on Microsoft Authenticator will change with the next push notification to the user.
Migrating phone numbers can also lead to stale numbers being migrated and make users more likely to stay on phone-based MFA instead of setting up more secure methods like Microsoft Authenticator in passwordless mode. We therefore recommend that, regardless of the migration path you choose, you have all users register for [combined security information](howto-registration-mfa-sspr-combined.md).

#### Migrating hardware security keys
-Azure AD provides support for OATH hardware tokens.
-In order to migrate the tokens from MFA Server to Azure AD Multi-Factor Authentication, the [tokens must be uploaded into Azure AD using a CSV file](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview), commonly referred to as a "seed file".
+Azure AD provides support for OATH hardware tokens. You can use the [MFA Server Migration Utility](how-to-mfa-server-migration-utility.md) to synchronize MFA settings between MFA Server and Azure AD MFA and use [Staged Rollout](../hybrid/how-to-connect-staged-rollout.md) to test user migrations without changing domain federation settings.
+
+If you only want to migrate OATH hardware tokens, you need to [upload tokens to Azure AD by using a CSV file](concept-authentication-oath-tokens.md#oath-hardware-tokens-preview), commonly referred to as a "seed file".
The seed file contains the secret keys, token serial numbers, and other information needed to upload the tokens into Azure AD. If you no longer have the seed file with the secret keys, it isn't possible to export the secret keys from MFA Server. If you no longer have access to the secret keys, contact your hardware vendor for support. The MFA Server Web Service SDK can be used to export the serial number for any OATH tokens assigned to a given user.
-Using this information along with the seed file, IT admins can import the tokens into Azure AD and assign the OATH token to the specified user based on the serial number.
+You can use this information along with the seed file to import the tokens into Azure AD and assign the OATH token to the specified user based on the serial number.
The user will also need to be contacted at the time of import to supply OTP information from the device to complete the registration.
-Refer to the GetUserInfo > userSettings > OathTokenSerialNumber topic in the Multi-Factor Authentication Server help file on your MFA Server.
-
+Refer to the help file topic **GetUserInfo** > **userSettings** > **OathTokenSerialNumber** in Multi-Factor Authentication Server on your MFA Server.
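
As an illustration, here's a minimal PowerShell sketch for assembling the seed file described above (assumptions: the header row follows the format documented in the OATH hardware token article linked earlier, and the sample row is a placeholder):

```powershell
# Build the CSV seed file that you upload in the Azure portal; replace the sample row with your tokens.
@"
upn,serial number,secret key,time interval,manufacturer,model
helga@contoso.com,1234567,2234567abcdef2234567abcdef,60,Contoso,HardwareKey
"@ | Set-Content -Path .\oath-tokens-upload.csv
```
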
### More migrations
The decision to migrate from MFA Server to Azure AD Multi-Factor Authentication
- Your willingness to use Azure AD authentication for users
- Your willingness to move your applications to Azure AD
-Because MFA Server is deeply integrated with both applications and user authentication, you may want to consider moving both of those functions to Azure as a part of your MFA migration, and eventually decommissioning AD FS.
+Because MFA Server is integral to both application and user authentication, consider moving both of those functions to Azure as a part of your MFA migration, and eventually decommission AD FS.
Our recommendations:
- Use Azure AD for authentication as it enables more robust security and governance
- Move applications to Azure AD if possible
-To select the user authentication method best for your organization, see [Choose the right authentication method for your Azure AD hybrid identity solution](../hybrid/choose-ad-authn.md).
+To select the best user authentication method for your organization, see [Choose the right authentication method for your Azure AD hybrid identity solution](../hybrid/choose-ad-authn.md).
We recommend that you use Password Hash Synchronization (PHS).

### Passwordless authentication
-As part of enrolling users to use Microsoft Authenticator as a second factor, we recommend you enable passwordless phone sign-in as part of their registration. For more information, including other passwordless methods such as FIDO and Windows Hello for Business, visit [Plan a passwordless authentication deployment with Azure AD](howto-authentication-passwordless-deployment.md#plan-for-and-deploy-microsoft-authenticator).
+As part of enrolling users to use Microsoft Authenticator as a second factor, we recommend you enable passwordless phone sign-in as part of their registration. For more information, including other passwordless methods such as FIDO2 security keys and Windows Hello for Business, visit [Plan a passwordless authentication deployment with Azure AD](howto-authentication-passwordless-deployment.md#plan-for-and-deploy-microsoft-authenticator).
### Microsoft Identity Manager self-service password reset
Check with the service provider for supported product versions and their capabil
- The NPS extension doesn't use Azure AD Conditional Access policies. If you stay with RADIUS and use the NPS extension, all authentication requests going to NPS will require the user to perform MFA.
- Users must register for Azure AD Multi-Factor Authentication prior to using the NPS extension. Otherwise, the extension fails to authenticate the user, which can generate help desk calls.
- When the NPS extension invokes MFA, the MFA request is sent to the user's default MFA method.
- - Because the sign-in happens on non-Microsoft applications, it's unlikely that the user will see visual notification that multifactor authentication is required and that a request has been sent to their device.
+ - Because the sign-in happens on non-Microsoft applications, the user often can't see visual notification that multifactor authentication is required and that a request has been sent to their device.
  - During the multifactor authentication requirement, the user must have access to their default authentication method to complete the requirement. They can't choose an alternative method. Their default authentication method will be used even if it's disabled in the tenant authentication methods and multifactor authentication policies.
  - Users can change their default multifactor authentication method in the Security Info page (aka.ms/mysecurityinfo).
- Available MFA methods for RADIUS clients are controlled by the client systems sending the RADIUS access requests.
- - MFA methods that require user input after they enter a password can only be used with systems that support access-challenge responses with RADIUS. Input methods might include OTP, hardware OATH tokens or the Microsoft Authenticator application.
+ - MFA methods that require user input after they enter a password can only be used with systems that support access-challenge responses with RADIUS. Input methods might include OTP, hardware OATH tokens or Microsoft Authenticator.
  - Some systems might limit available multifactor authentication methods to Microsoft Authenticator push notifications and phone calls.

>[!NOTE]
>The password encryption algorithm used between the RADIUS client and the NPS system, and the input methods the client can use affect which authentication methods are available. For more information, see [Determine which authentication methods your users can use](howto-mfa-nps-extension.md).
Others might include:
- [Moving to Azure AD Multi-Factor Authentication with federation](how-to-migrate-mfa-server-to-azure-mfa-with-federation.md) - [Moving to Azure AD Multi-Factor Authentication and Azure AD user authentication](how-to-migrate-mfa-server-to-azure-mfa-user-authentication.md)-
+- [How to use the MFA Server Migration Utility](how-to-mfa-server-migration-utility.md)
active-directory Active Directory Certificate Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-certificate-credentials.md
To compute the assertion, you can use one of the many JWT libraries in the langu
| | |
| `alg` | Should be **RS256** |
| `typ` | Should be **JWT** |
-| `x5t` | Base64-encoded SHA-1 thumbprint of the X.509 certificate thumbprint. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ=` (Base64). |
+| `x5t` | Base64-encoded SHA-1 thumbprint of the X.509 certificate. For example, given an X.509 certificate hash of `84E05C1D98BCE3A5421D225B140B36E86A3D5534` (Hex), the `x5t` claim would be `hOBcHZi846VCHSJbFAs26Go9VTQ=` (Base64). |
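
To illustrate the conversion described in the `x5t` row, here's a minimal PowerShell sketch (assumptions: the thumbprint is the example value above; production code would normally read the hash bytes directly from the certificate object):

```powershell
# Convert the SHA-1 thumbprint (hex) of the certificate to the Base64 value used for the x5t claim.
$thumbprintHex = '84E05C1D98BCE3A5421D225B140B36E86A3D5534'
$hashBytes = for ($i = 0; $i -lt $thumbprintHex.Length; $i += 2) { [Convert]::ToByte($thumbprintHex.Substring($i, 2), 16) }
[Convert]::ToBase64String([byte[]]$hashBytes)   # hOBcHZi846VCHSJbFAs26Go9VTQ=
```
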
### Claims (payload)
active-directory Msal Error Handling Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-error-handling-dotnet.md
do
} else if (retryAfter.Date.HasValue) {
- delay = retryAfter.Date.Value.Offset;
+ delay = (retryAfter.Date.Value - DateTimeOffset.Now).TotalMilliseconds;
} } }
active-directory Check Status Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/check-status-workflow.md
+
+ Title: Check status of a Lifecycle workflow - Azure Active Directory
+description: This article guides a user on checking the status of a Lifecycle workflow
++++++ Last updated : 03/10/2022+++++
+# Check the status of a workflow (Preview)
+
+When a workflow is created, it's important to check its status and run history to make sure it ran properly for the users it processed, both on schedule and on demand. To get information about the status of workflows, Lifecycle Workflows allows you to check run and user processing history. This history also gives you summaries of how often a workflow has run, and who it ran successfully for. You're also able to check the status of both the workflow and its tasks. Checking the status of workflows and their tasks allows you to troubleshoot potential problems that could come up during their execution.
++
+## Run workflow history using the Azure portal
+
+You're able to retrieve run information of a workflow using Lifecycle Workflows. To check the runs of a workflow using the Azure portal, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Azure Active Directory** and then select **Identity Governance**.
+
+1. On the left menu, select **Lifecycle Workflows (Preview)**.
+
+1. On the Lifecycle Workflows overview page, select **Workflows (Preview)**.
+
+1. Select the workflow you want to see run history for.
+
+1. On the workflow overview screen, select **Audit logs**.
+
+1. On the history page, select the **Runs** button.
+
+1. Here you'll see a summary of workflow runs.
+ :::image type="content" source="media/check-status-workflow/run-list.png" alt-text="Screenshot of a workflow Runs list.":::
+1. The runs summary cards include the total number of processed runs, the number of successful runs, the number of failed runs, and the total number of failed tasks.
+
+## User workflow history using the Azure portal
+
+To get more than just the runs summary for a workflow, you're also able to get information about the users processed by a workflow. To check the status of users a workflow has processed using the Azure portal, follow these steps:
+
+
+1. In the left menu, select **Lifecycle Workflows (Preview)**.
+
+1. Select **Workflows (Preview)**.
+
+1. Select the workflow you want to see user processing information for.
+
+1. On the workflow overview screen, select **Workflow history (Preview)**.
+ :::image type="content" source="media/check-status-workflow/workflow-history.png" alt-text="Screenshot of a workflow overview history.":::
+1. On the workflow history page, you're presented with a summary of every user processed by the workflow along with counts of successful and failed users and tasks.
+ :::image type="content" source="media/check-status-workflow/workflow-history-list.png" alt-text="Screenshot of a list of workflow summaries.":::
+1. By selecting total tasks by a user, you're able to see which tasks have successfully completed or are currently in progress.
+ :::image type="content" source="media/check-status-workflow/task-history-status.png" alt-text="Screenshot of workflow task history status.":::
+1. By selecting failed tasks, you're able to see which tasks have failed for a specific user.
+ :::image type="content" source="media/check-status-workflow/task-history-failed.png" alt-text="Screenshot of workflow failed tasks history.":::
+1. By selecting unprocessed tasks, you're able to see which tasks are unprocessed.
+ :::image type="content" source="media/check-status-workflow/task-history-unprocessed.png" alt-text="Screenshot of unprocessed tasks of a workflow.":::
++
+## User workflow history using Microsoft Graph
+
+### List user processing results using Microsoft Graph
+
+To view a status list of users processed by a workflow (their **userProcessingResults**), make the following API call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults
+```
+
+By default **userProcessingResults** returns only information from the last 7 days. To get information as far back as 30 days, you would run the following API call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults?$filter=<Date range for processing results>
+```
+
+An example of a call to get **userProcessingResults** for a month would be as follows:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults?$filter=startedDateTime ge 2022-05-23T00:00:00Z and startedDateTime le 2022-06-22T00:00:00Z
+```
+
+### User processing results using Microsoft Graph
+
+When a workflow processes multiple user events, running **userProcessingResults** can return more information than is easy to interpret. To get a summary of information such as total users and tasks, and failed users and tasks, Lifecycle Workflows provides a call to get count totals.
+
+To view a summary in count form, you would run the following API call:
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/summary(<Date Range>)
+```
+
+For example, to get the summary between May 1 and May 30, you would run the following call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
+```
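+
+If you prefer PowerShell, a minimal sketch of the same summary call with the Microsoft Graph PowerShell SDK follows (assumptions: the SDK is installed, the **LifecycleWorkflows.Read.All** scope is consented, and the workflow ID is a placeholder):
+
+```powershell
+Connect-MgGraph -Scopes "LifecycleWorkflows.Read.All"
+
+# Get the user processing results summary for the workflow between May 1 and May 30.
+$workflowId = "<workflowId>"
+$uri = "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/$workflowId/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)"
+Invoke-MgGraphRequest -Method GET -Uri $uri
+```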
+
+### List task processing results of a given user processing result
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/userProcessingResults/<userProcessingResultId>/taskProcessingResults/
+```
+
+## Run workflow history via Microsoft Graph
+
+### List runs using Microsoft Graph
+
+With Microsoft Graph, you're able to get full details of workflow and user processing run information.
+
+To view a list of runs, you'd make the following API call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs
+```
+
+### Get a summary of runs using Microsoft Graph
+
+To get a summary of runs for a workflow, which includes detailed information for counts of failed runs and tasks, along with successful runs and tasks for a time range, you'd make the following API call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/summary(startDateTime=<time>,endDateTime=<time>)
+```
+For example, to get a summary of runs of a workflow for May 2022, you would make the following call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-31T00:00:00Z)
+```
+
+### List user and task processing results of a given run using Microsoft Graph
+
+With Lifecycle Workflows, you're able to check the status of each user who had a workflow processed for them as part of a run, and the status of each task in that run.
+
+
+You're also able to use **userProcessingResults** with the run call to get users processed for a run by making the following API call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/<runId>/userProcessingResults
+```
+
+This API call will also return a **userProcessingResults ID** value, which can be used to retrieve task processing information in the following call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/<runId>/userProcessingResults/<userProcessingResultId>/taskProcessingResults
+```
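+
+The two calls above can be chained together; here's a minimal PowerShell sketch (assumptions: Microsoft Graph PowerShell SDK with the **LifecycleWorkflows.Read.All** scope, and placeholder workflow and run IDs):
+
+```powershell
+$base = "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflowId>/runs/<runId>"
+
+# Get the user processing results for the run, then the task results for the first user in the list.
+$userResults = (Invoke-MgGraphRequest -Method GET -Uri "$base/userProcessingResults").value
+Invoke-MgGraphRequest -Method GET -Uri "$base/userProcessingResults/$($userResults[0].id)/taskProcessingResults"
+```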
+
+> [!NOTE]
+> A workflow must have activity in the past 7 days to get **userProcessingResults ID**. If there has not been any activity in that time-frame, the **userProcessingResults** call will not return a value.
++
+## Next steps
+
+- [Manage workflow versions](manage-workflow-tasks.md)
+- [Delete Lifecycle Workflows](delete-lifecycle-workflow.md)
active-directory Configure Logic App Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/configure-logic-app-lifecycle-workflows.md
+
+ Title: Configure a Logic App for Lifecycle Workflow use
+description: Configure an Azure Logic App for use with Lifecycle Workflows
++++ Last updated : 08/28/2022+++++
+# Configure a Logic App for Lifecycle Workflow use (Preview)
+
+Before you can use an existing Azure Logic App with the custom task extension feature of Lifecycle Workflows, it must first be made compatible. This reference guide provides a list of steps that must be taken to make the Azure Logic App compatible with the custom task extension. For a simpler guide on creating a new Logic App with the custom task extension via the Lifecycle Workflows portal, see [Trigger Logic Apps based on custom task extensions (preview)](trigger-custom-task.md).
+
+## Configure existing Logic Apps for LCW use with Microsoft Graph
+
+Making an Azure Logic App compatible with the **Custom Task Extension** requires the following steps:
+
+- Configure the logic app trigger
+- Configure the callback action (only applicable to the callback scenario)
+- Enable system assigned managed identity.
+- Configure AuthZ policies.
+
+> [!NOTE]
+> For our public preview we will provide a UI and a deployment script that will automate the following steps.
+
+To configure these settings, follow these steps:
+
+1. Open the Azure Logic App you want to use with Lifecycle Workflow. Logic Apps may greet you with an introduction screen, which you can close with the X in the upper right corner.
+
+1. On the left of the screen select **Logic App code view**.
+
+1. In the editor paste the following code:
+ ```LCW Logic App code view template
+ {
+ "definition": {
+ "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
+ "actions": {
+ "HTTP": {
+ "inputs": {
+ "authentication": {
+ "audience": "https://graph.microsoft.com",
+ "type": "ManagedServiceIdentity"
+ },
+ "body": {
+ "data": {
+ "operationStatus": "Completed"
+ },
+ "source": "sample",
+ "type": "lifecycleEvent"
+ },
+ "method": "POST",
+ "uri": "https://graph.microsoft.com/beta@{triggerBody()?['data']?['callbackUriPath']}"
+ },
+ "runAfter": {},
+ "type": "Http"
+ }
+ },
+ "contentVersion": "1.0.0.0",
+ "outputs": {},
+ "parameters": {},
+ "triggers": {
+ "manual": {
+ "inputs": {
+ "schema": {
+ "properties": {
+ "data": {
+ "properties": {
+ "callbackUriPath": {
+ "description": "CallbackUriPath used for Resume Action",
+ "title": "Data.CallbackUriPath",
+ "type": "string"
+ },
+ "subject": {
+ "properties": {
+ "displayName": {
+ "description": "DisplayName of the Subject",
+ "title": "Subject.DisplayName",
+ "type": "string"
+ },
+ "email": {
+ "description": "Email of the Subject",
+ "title": "Subject.Email",
+ "type": "string"
+ },
+ "id": {
+ "description": "Id of the Subject",
+ "title": "Subject.Id",
+ "type": "string"
+ },
+ "manager": {
+ "properties": {
+ "displayName": {
+ "description": "DisplayName parameter for Manager",
+ "title": "Manager.DisplayName",
+ "type": "string"
+ },
+ "email": {
+ "description": "Mail parameter for Manager",
+ "title": "Manager.Mail",
+ "type": "string"
+ },
+ "id": {
+ "description": "Id parameter for Manager",
+ "title": "Manager.Id",
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "userPrincipalName": {
+ "description": "UserPrincipalName of the Subject",
+ "title": "Subject.UserPrincipalName",
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "task": {
+ "properties": {
+ "displayName": {
+ "description": "DisplayName for Task Object",
+ "title": "Task.DisplayName",
+ "type": "string"
+ },
+ "id": {
+ "description": "Id for Task Object",
+ "title": "Task.Id",
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "taskProcessingResult": {
+ "properties": {
+ "createdDateTime": {
+ "description": "CreatedDateTime for TaskProcessingResult Object",
+ "title": "TaskProcessingResult.CreatedDateTime",
+ "type": "string"
+ },
+ "id": {
+ "description": "Id for TaskProcessingResult Object",
+ "title": "TaskProcessingResult.Id",
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
+ "workflow": {
+ "properties": {
+ "displayName": {
+ "description": "DisplayName for Workflow Object",
+ "title": "Workflow.DisplayName",
+ "type": "string"
+ },
+ "id": {
+ "description": "Id for Workflow Object",
+ "title": "Workflow.Id",
+ "type": "string"
+ },
+ "workflowVerson": {
+ "description": "WorkflowVersion for Workflow Object",
+ "title": "Workflow.WorkflowVersion",
+ "type": "integer"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "type": "object"
+ },
+ "source": {
+ "description": "Context in which an event happened",
+ "title": "Request.Source",
+ "type": "string"
+ },
+ "type": {
+ "description": "Value describing the type of event related to the originating occurrence.",
+ "title": "Request.Type",
+ "type": "string"
+ }
+ },
+ "type": "object"
+ }
+ },
+ "kind": "Http",
+ "type": "Request"
+ }
+ }
+ },
+ "parameters": {}
+ }
+
+ ```
+1. Select Save.
+
+1. Switch to the **Logic App designer** and inspect the configured trigger and callback action. To build your custom business logic, add other actions between the trigger and callback action. If you're only interested in the fire-and-forget scenario, you may remove the callback action.
+
+1. On the left of the screen select **Identity**.
+
+1. Under the **System assigned** tab, set **Status** to **On** to register the managed identity with Azure Active Directory.
+
+1. Select Save.
+
+1. For the Logic Apps authorization policy, we'll need the managed identity's **Application ID**. Since the Azure portal only shows the Object ID, we need to look up the Application ID. You can search for the managed identity by Object ID under **Enterprise Applications** in the Azure AD portal to find the required Application ID.
+
+1. Go back to the logic app you created, and select **Authorization**.
+
+1. Create a new authorization policy based on the table below:
+
+ |Claim |Value |
+ |||
+ |Issuer | https://sts.windows.net/(Tenant ID)/ |
+ |Audience | Application ID of your Logic Apps Managed Identity |
+    |appid | ce79fdc4-cd1d-4ea5-8139-e74d7dbe0bb7 |
++
+1. Save the Authorization policy.
+> [!NOTE]
+> Due to a current bug in the Logic Apps UI you may have to save the authorization policy after each claim before adding another.
+
+> [!CAUTION]
+> Please pay attention to the details as minor differences can lead to problems later.
+- For Issuer, ensure you did include the slash after your Tenant ID
+- For Audience, ensure you're using the Application ID and not the Object ID of your Managed Identity
+- For appid, ensure the custom claim is "appid" in all lowercase. The appid value represents Lifecycle Workflows and is always the same.
+
+
+
+## Linking Lifecycle Workflows with Logic Apps using Microsoft Graph
+
+After the Logic App is configured, we can now integrate it with Lifecycle Workflows. As outlined in the high-level steps, we first need to create the customTaskExtension, and afterwards we can reference the customTaskExtension in our "Run a custom task extension" task.
+
+The API call for creating a customTaskExtension is as follows:
+```http
+POST https://graph.microsoft.com/beta/identityGovernance/lifecycleManagement/customTaskExtensions
+Content-type: application/json
+
+{
+ "displayName": "<Custom task extension name>",
+ "description": "<description for custom task extension>",
+ "callbackConfiguration": {
+ "@odata.type": "#microsoft.graph.identityGovernance.customTaskExtensionCallbackConfiguration",
+ "durationBeforeTimeout": "PT1H"
+ },
+ "endpointConfiguration": {
+ "@odata.type": "#microsoft.graph.logicAppTriggerEndpointConfiguration",
+ "subscriptionId": "<Your Azure subscription>",
+ "resourceGroupName": "<Resource group where the Logic App is located>",
+ "logicAppWorkflowName": "<Logic App workflow name>"
+ },
+ "authenticationConfiguration": {
+ "@odata.type": "#microsoft.graph.azureAdTokenAuthentication",
+ "resourceId": " f9c5dc6b-d72b-4226-8ccd-801f7a290428"
+ },
+ "clientConfiguration": {
+ "timeoutInMilliseconds": 1000,
+ "maximumRetries": 1
+ }
+}
+```
+> [!NOTE]
+> To create a custom task extension instance that does not wait for a response from the logic app, remove the **callbackConfiguration** parameter.
+
+After the task is created, you can run the following GET call to retrieve its details:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/customTaskExtensions
+```
+
+An example response is as follows:
+ ```Example Custom Task Extension return
+{
+    "@odata.context": "https://graph.microsoft.com/beta/$metadata#identityGovernance/lifecycleWorkflows/customTaskExtensions",
+    "@odata.count": 1,
+    "value": [
+        {
+            "id": "def9685c-e0f6-45aa-8fe8-a9f7ee6d30d6",
+            "displayName": "My Custom Task Extension",
+            "description": "My Custom Task Extension to test Lifecycle workflows Logic App integration",
+            "createdDateTime": "2022-06-28T10:47:08.9359567Z",
+            "lastModifiedDateTime": "2022-06-28T10:47:08.936017Z",
+            "endpointConfiguration": {
+                "@odata.type": "#microsoft.graph.logicAppTriggerEndpointConfiguration",
+                "subscriptionId": "c500b67c-e9b7-4ad2-a90d-77d41385ae55",
+                "resourceGroupName": "RG-LCM",
+                "logicAppWorkflowName": "LcwDocsTest"
+            },
+            "authenticationConfiguration": {
+                "@odata.type": "#microsoft.graph.azureAdTokenAuthentication",
+                "resourceId": "f74118f0-849a-457d-a7e4-ee97eab6017a"
+            },
+            "clientConfiguration": {
+                "maximumRetries": 1,
+                "timeoutInMilliseconds": 1000
+            },
+            "callbackConfiguration": {
+                "@odata.type": "#microsoft.graph.identityGovernance.customTaskExtensionCallbackConfiguration",
+                "timeoutDuration": "PT1H"
+            }
+        }
+    ]
+}
+
+```
+
+You'll then take the custom extension **ID**, and use it as the value in the customTaskExtensionId parameter for the custom task example here:
+
+> [!NOTE]
+> The new "Run a Custom Task Extension" task is already available in the Public Preview UI.
+
+```Example of Custom Task extension task
+"tasks":[
+ {
+ "taskDefinitionId": "4262b724-8dba-4fad-afc3-43fcbb497a0e",
+ "continueOnError": false,
+ "displayName": "<Custom Task Extension displayName>",
+ "description": "<Custom Task Extension description>",
+ "isEnabled": true,
+ "arguments": [
+ {
+ "name": "customTaskExtensionID",
+ "value": "<ID of your Custom Task Extension>"
+ }
+ ]
+    }
+]
++
+```
+
+## Next steps
+
+- [Lifecycle workflow extensibility (Preview)](lifecycle-workflow-extensibility.md)
+- [Manage Workflow Versions](manage-workflow-tasks.md)
active-directory Create Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/create-lifecycle-workflow.md
+
+ Title: Create a Lifecycle Workflow- Azure AD (preview)
+description: This article guides a user to creating a workflow using Lifecycle Workflows
++++++ Last updated : 02/15/2022++++
+# Create a Lifecycle workflow (Preview)
+Lifecycle Workflows allows for tasks associated with the lifecycle process to be run automatically for users as they move through their life cycle in your organization. Workflows are made up of:
+ - tasks - Actions taken when a workflow is triggered.
+ - execution conditions - Define the who and when of a workflow. That is, who (scope) should this workflow run against, and when (trigger) should it run.
+
+Workflows can be created and customized for common scenarios using templates, or you can build a workflow from scratch without using a template. Currently, if you use the Azure portal, a created workflow must be based on a template. If you wish to create a workflow without using a template, you must create it using Microsoft Graph.
+
+## Prerequisites
++
+## Create a Lifecycle workflow using a template in the Azure portal
+
+If you're using the Azure portal to create a workflow, you can customize existing templates to meet your organization's needs. This means you can customize the pre-hire common scenario template. To create a workflow based on one of these templates using the Azure portal, follow these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Azure Active Directory** and then select **Identity Governance**.
+
+1. In the left menu, select **Lifecycle Workflows (Preview)**.
+
+1. Select **Workflows (Preview)**.
+
+1. On the workflows screen, select the workflow template that you want to use.
+ :::image type="content" source="media/create-lifecycle-workflow/template-list.png" alt-text="Screenshot of a list of lifecycle workflows templates.":::
+1. Enter a display name and description for the workflow. The display name must be unique and not match the name of any other workflow you've created.
+ :::image type="content" source="media/create-lifecycle-workflow/template-basics.png" alt-text="Screenshot of workflow template basic information.":::
+
+1. Select the **Trigger type** to be used for this workflow.
+
+1. On **Days from event**, enter the number of days from the event when you want the workflow to go into effect. The valid values are 0 to 60.
+
+1. **Event timing** allows you to choose if the days from event are either before or after.
+
+1. **Event user attribute** is the event being used to trigger the workflow. For example, with the pre hire workflow template, an event user attribute is the employee hire date.
++
+1. Select the **Property**, **Operator**, and give it a **value**. The following picture gives an example of a rule being set up for a sales department.
+
+ :::image type="content" source="media/create-lifecycle-workflow/template-scope.png" alt-text="Screenshot of Lifecycle Workflows template scope configuration options.":::
+
+1. To view your rule syntax, select the **View rule syntax** button.
+ :::image type="content" source="media/create-lifecycle-workflow/template-syntax.png" alt-text="Screenshot of workflow rule syntax.":::
+
+1. You can copy and paste multiple user property rules on this screen. For more detailed information on which properties can be included, see: [User Properties](/graph/aad-advanced-queries?tabs=http#user-properties)
+
+1. To add a task to the template, select **Add task**.
+
+ :::image type="content" source="media/create-lifecycle-workflow/template-tasks.png" alt-text="Screenshot of adding tasks to templates.":::
+
+1. To enable an existing task on the list, select **enable**. You're also able to disable a task by selecting **disable**.
+
+1. To remove a task from the template, select **Remove** on the selected task.
+
+1. Review the workflow's settings.
+
+ :::image type="content" source="media/create-lifecycle-workflow/template-review.png" alt-text="Screenshot of reviewing and creating a template.":::
+
+1. Select **Create** to create the workflow.
++
+> [!IMPORTANT]
+> By default, a newly created workflow is disabled to allow for the testing of it first on smaller audiences. For more information about testing workflows before rolling them out to many users, see: [run an on-demand workflow](on-demand-workflow.md).
+
+## Create a workflow using Microsoft Graph
+
+Workflows can be created using Microsoft Graph API. Creating a workflow using the Graph API allows you to automatically set it to enabled. Setting it to enabled is done using the `isEnabled` parameter.
+
+The table below shows the parameters that must be defined during workflow creation:
+
+|Parameter |Description |
+|||
+|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver". Category of tasks within a workflow must also contain the category of the workflow to run. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
+|displayName | A unique string that identifies the workflow. |
+|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
+|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true", then the workflow will run. |
+|IsSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. |
+|executionConditions | An argument that contains: a time-based attribute and an integer parameter defining when a workflow will run (between -60 and 60), and a scope attribute defining who the workflow runs for. |
+|tasks | An argument in a workflow that has a unique displayName and a description. It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md). |
++++
+To create a joiner workflow, in Microsoft Graph, use the following request and body:
+```http
+POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows
+Content-type: application/json
+```
+
+```Request body
+{
+ "category": "joiner",
+ "displayName": "<Unique workflow name string>",
+ "description": "<Unique workflow description>",
+ "isEnabled":true,
+ "tasks":[
+ {
+ "category": "joiner",
+ "isEnabled": true,
+ "taskTemplateId": "<Unique Task template>",
+ "displayName": "<Unique task name>",
+ "description": "<Task template description>",
+ "arguments": "<task arguments>"
+ }
+ ],
+ "executionConditions": {
+ "@odata.type" : "microsoft.graph.identityGovernance.scopeAndTriggerBasedCondition",
+ "trigger": {
+ "@odata.type" : "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
+ "timeBasedAttribute":"<time-based trigger argument>",
+ "arguments": -7
+ },
+ "scope": {
+ "@odata.type" : "microsoft.graph.identityGovernance.ruleBasedScope",
+ "rule": "employeeType eq '<Employee type attribute>' AND department -eq '<department attribute>'"
+ }
+ }
+}
+```
+
+> [!NOTE]
+> Time-based trigger arguments can range from -60 to 60. A negative value denotes **Before** the time-based attribute, while a positive value denotes **After**. For example, the -7 in the workflow example above means the workflow runs one week before the time-based attribute occurs.
+
+To change this workflow from joiner to leaver, change the category parameters to "leaver". To get a list of the task definitions that can be added to your workflow, run the following call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/taskDefinitions
+```
+
+The response to the code will look like:
+
+```Response body
+{
+ "@odata.context": "https://graph.microsoft-ppe.com/testppebetalcwpp4/$metadata#identityGovernance/lifecycleWorkflows/taskDefinitions",
+ "@odata.count": 13,
+ "value": [
+ {
+ "category": "joiner,leaver",
+ "description": "Add user to a group",
+ "displayName": "Add User To Group",
+ "id": "22085229-5809-45e8-97fd-270d28d66910",
+ "version": 1,
+ "parameters": [
+ {
+ "name": "groupID",
+ "values": [],
+ "valueType": "string"
+ }
+ ]
+ },
+ {
+ "category": "joiner,leaver",
+ "description": "Disable user account in the directory",
+ "displayName": "Disable User Account",
+ "id": "1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950",
+ "version": 1,
+ "parameters": []
+ },
+ {
+ "category": "joiner,leaver",
+ "description": "Enable user account in the directory",
+ "displayName": "Enable User Account",
+ "id": "6fc52c9d-398b-4305-9763-15f42c1676fc",
+ "version": 1,
+ "parameters": []
+ },
+ {
+ "category": "joiner,leaver",
+ "description": "Run a custom task extension",
+ "displayName": "run a Custom Task Extension",
+ "id": "4262b724-8dba-4fad-afc3-43fcbb497a0e",
+ "version": 1,
+ "parameters":
+ {
+ "name": "customtaskextensionID",
+ "values": [],
+ "valueType": "string"
+ }
+ ]
+ },
+ {
+ "category": "joiner,leaver",
+ "description": "Remove user from membership of selected Azure AD groups",
+ "displayName": "Remove user from selected groups",
+ "id": "1953a66c-751c-45e5-8bfe-01462c70da3c",
+ "version": 1,
+ "parameters": [
+ {
+ "name": "groupID",
+ "values": [],
+ "valueType": "string"
+ }
+ ]
+ },
+ {
+ "category": "joiner",
+ "description": "Generate Temporary Access Password and send via email to user's manager",
+ "displayName": "Generate TAP And Send Email",
+ "id": "1b555e50-7f65-41d5-b514-5894a026d10d",
+ "version": 1,
+ "parameters": [
+ {
+ "name": "tapLifetimeMinutes",
+ "values": [],
+ "valueType": "string"
+ },
+ {
+ "name": "tapIsUsableOnce",
+ "values": [
+ "true",
+ "false"
+ ],
+ "valueType": "enum"
+ }
+ ]
+ },
+ {
+ "category": "joiner",
+ "description": "Send welcome email to new hire",
+ "displayName": "Send Welcome Email",
+ "id": "70b29d51-b59a-4773-9280-8841dfd3f2ea",
+ "version": 1,
+ "parameters": []
+ },
+ {
+ "category": "joiner,leaver",
+ "description": "Add user to a team",
+ "displayName": "Add User To Team",
+ "id": "e440ed8d-25a1-4618-84ce-091ed5be5594",
+ "version": 1,
+ "parameters": [
+ {
+ "name": "teamID",
+ "values": [],
+ "valueType": "string"
+ }
+ ]
+ },
+ {
+ "category": "leaver",
+ "description": "Delete user account in Azure AD",
+ "displayName": "Delete User Account",
+ "id": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
+ "version": 1,
+ "parameters": []
+ },
+ {
+ "category": "joiner,leaver",
+ "description": "Remove user from membership of selected Teams",
+ "displayName": "Remove user from selected Teams",
+ "id": "06aa7acb-01af-4824-8899-b14e5ed788d6",
+ "version": 1,
+ "parameters": [
+ {
+ "name": "teamID",
+ "values": [],
+ "valueType": "string"
+ }
+ ]
+ },
+ {
+ "category": "leaver",
+ "description": "Remove user from all Azure AD groups memberships",
+ "displayName": "Remove user from all groups",
+ "id": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc",
+ "version": 1,
+ "parameters": []
+ },
+ {
+ "category": "leaver",
+ "description": "Remove user from all Teams memberships",
+ "displayName": "Remove user from all Teams",
+ "id": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
+ "version": 1,
+ "parameters": []
+ },
+ {
+ "category": "leaver",
+ "description": "Remove all licenses assigned to the user",
+ "displayName": "Remove all licenses for user",
+ "id": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e",
+ "version": 1,
+ "parameters": []
+ }
+ ]
+}
+
+```
+For further details on task definitions and their parameters, see [Lifecycle Workflow Tasks](lifecycle-workflow-tasks.md).
++
+## Next steps
+
+- [Manage a workflow's properties](manage-workflow-properties.md)
+- [Manage Workflow Versions](manage-workflow-tasks.md)
active-directory Customize Workflow Schedule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/customize-workflow-schedule.md
+
+ Title: 'Customize workflow schedule - Azure Active Directory'
+description: Describes how to customize the schedule of a Lifecycle Workflow.
++++++ Last updated : 01/20/2022++++++
+# Customize the schedule of workflows (Preview)
+
+Workflows created using Lifecycle Workflows can be fully customized to match the schedule that fits your organization's needs. By default, workflows are scheduled to run every 3 hours, but the interval can be set as frequently as every 1 hour, or as infrequently as every 24 hours.
++
+## Customize the schedule of workflows using Microsoft Graph
++
+First, to view the current schedule interval of your workflows, run the following get call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings
+```
++
+To customize a workflow in Microsoft Graph, use the following request and body:
+```http
+PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings
+Content-type: application/json
+
+{
+"workflowScheduleIntervalInHours":<Interval between 0-24>
+}
+
+```
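+
+The same setting can be changed from PowerShell; a minimal sketch using the Microsoft Graph PowerShell SDK (assumptions: the SDK is installed, the **LifecycleWorkflows.ReadWrite.All** scope is consented, and the interval of 6 hours is only an example):
+
+```powershell
+Connect-MgGraph -Scopes "LifecycleWorkflows.ReadWrite.All"
+
+# Set the workflow schedule interval (in hours).
+$body = @{ workflowScheduleIntervalInHours = 6 } | ConvertTo-Json
+Invoke-MgGraphRequest -Method PATCH -Uri "https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/settings" -Body $body -ContentType "application/json"
+```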
+
+## Next steps
+
+- [Manage workflow properties](manage-workflow-properties.md)
+- [Delete Lifecycle Workflows](delete-lifecycle-workflow.md)
active-directory Delete Lifecycle Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/delete-lifecycle-workflow.md
+
+ Title: 'Delete a Lifecycle workflow - Azure Active Directory'
+description: Describes how to delete a Lifecycle Workflow using.
++++++ Last updated : 01/20/2022++++++
+# Delete a Lifecycle workflow (Preview)
+
+You can remove workflows that are no longer needed. Deleting these workflows allows you to make sure your lifecycle strategy is up to date. When a workflow is deleted, it enters a soft-delete state. During this period, it can still be viewed within the deleted workflows list, and it can be restored if needed. 30 days after a workflow enters the soft-delete state, it's permanently removed. If you don't wish to wait 30 days for a workflow to be permanently deleted, you can manually delete it yourself.
++
+## Delete a workflow using the Azure portal
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Azure Active Directory** and then select **Identity Governance**.
+
+1. In the left menu, select **Lifecycle Workflows (Preview)**.
+
+1. Select **Workflows (Preview)**.
+
+1. On the workflows screen, select the workflow you want to delete.
+
+ :::image type="content" source="media/delete-lifecycle-workflow/delete-button.png" alt-text="Screenshot of list of Workflows to delete.":::
+
+1. With the workflow highlighted, select **Delete**.
+
+1. Confirm you want to delete the selected workflow.
+
+ :::image type="content" source="media/delete-lifecycle-workflow/delete-workflow.png" alt-text="Screenshot of confirming to delete a workflow.":::
+
+## View deleted workflows
+
+After deleting workflows, you can view them on the **Deleted Workflows (Preview)** page.
++
+1. On the left of the screen, select **Deleted Workflows (Preview)**.
+
+1. On this page, you'll see a list of deleted workflows, a description of the workflow, what date it was deleted, and its permanent delete date. By default the permanent delete date for a workflow is always 30 days after it was originally deleted.
+
+ :::image type="content" source="media/delete-lifecycle-workflow/deleted-list.png" alt-text="Screenshot of a list of deleted workflows.":::
+
+1. To restore a deleted workflow, select the workflow you want to restore and select **Restore workflow**.
+
+1. To permanently delete a workflow immediately, you select the workflow you want to delete from the list, and select **Delete permanently**.
++
+
+
+## Delete a workflow using Microsoft Graph
+ You're also able to delete, view deleted, and restore deleted Lifecycle workflows using Microsoft Graph.
+
+Workflows can be deleted by running the following call:
+```http
+DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
+```
+## View deleted workflows using Microsoft Graph
+You can view a list of deleted workflows by running the following call:
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows
+```
+
+## Permanently delete a workflow using Microsoft Graph
+Deleted workflows can be permanently deleted by running the following call:
+```http
+DELETE https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/<id>
+```
+
+## Restore deleted workflows using Microsoft Graph
+
+Deleted workflows are available to be restored for 30 days before they're permanently deleted. To restore a deleted workflow, run the following API call:
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/deletedItems/workflows/<id>/restore
+```
+> [!NOTE]
+> Permanently deleted workflows are not able to be restored.
+
+## Next steps
+- [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md)
+- [Manage Lifecycle Workflow Versions](manage-workflow-tasks.md)
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
+
+ Title: 'How to synchronize attributes for Lifecycle workflows'
+description: Describes overview of Lifecycle workflow attributes.
++++++ Last updated : 01/20/2022++++
+# How to synchronize attributes for Lifecycle workflows
+Workflows contain specific tasks, which can run automatically against users based on the specified execution conditions. Automatic workflow scheduling is supported based on the employeeHireDate and employeeLeaveDateTime user attributes in Azure AD.
+
+To take full advantage of Lifecycle Workflows, user provisioning should be automated, and the scheduling-relevant attributes should be synchronized.
+
+## Scheduling relevant attributes
+The following table shows the scheduling (trigger) relevant attributes and the methods of synchronization that are supported.
+
+|Attribute|Type|Supported in HR Inbound Provisioning|Support in Azure AD Connect Cloud Sync|Support in Azure AD Connect Sync|
+|--|--|--|--|--|
+|employeeHireDate|DateTimeOffset|Yes|Yes|Yes|
+|employeeLeaveDateTime|DateTimeOffset|Not currently (manual setting supported)|Not currently (manual setting supported)|Not currently (manual setting supported)|
+
+These attributes **are not** automatically populated by synchronization methods such as Azure AD Connect or Azure AD Connect cloud sync.
+
+> [!NOTE]
+> Currently, automatic synchronization of the employeeLeaveDateTime attribute for HR Inbound scenarios is not available. To take advantage of leaver scenarios, you can set the employeeLeaveDateTime manually. Manually setting the attribute can be done in the portal or with Graph. For more information, see [User profile in Azure](../fundamentals/active-directory-users-profile-azure-portal.md) and [Update user](/graph/api/user-update?view=graph-rest-1.0&tabs=http).
+
+This document explains how to set up synchronization of the required attributes from on-premises Active Directory by using Azure AD Connect cloud sync and Azure AD Connect.
+
+>[!NOTE]
+> There's no corresponding EmployeeHireDate or EmployeeLeaveDateTime attribute in Active Directory. If you're importing from on-premises AD, you'll need to identify an attribute in AD that can be used. This attribute must be a string.
++
+## Understanding EmployeeHireDate and EmployeeLeaveDateTime formatting
+The EmployeeHireDate and EmployeeLeaveDateTime attributes contain dates and times that must be formatted in a specific way. You may therefore need to use an expression to convert the value of your source attribute to a format that's accepted by EmployeeHireDate or EmployeeLeaveDateTime. The following table outlines the expected format and provides example expressions for converting the values.
+
+|Scenario|Expression/Format|Target|More Information|
+|--|--|--|--|
+|Workday to Active Directory User Provisioning|FormatDateTime([StatusHireDate], , "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")|On-premises AD string attribute|[Attribute mappings for Workday](../saas-apps/workday-inbound-tutorial.md#below-are-some-example-attribute-mappings-between-workday-and-active-directory-with-some-common-expressions)|
+|SuccessFactors to Active Directory User Provisioning|FormatDateTime([endDate], , "M/d/yyyy hh:mm:ss tt", "yyyyMMddHHmmss.fZ")|On-premises AD string attribute|[Attribute mappings for SAP Success Factors](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)|
+|Custom import to Active Directory|Must be in the format "yyyyMMddHHmmss.fZ"|On-premises AD string attribute||
+|Microsoft Graph User API|Must be in the format "YYYY-MM-DDThh:mm:ssZ"|EmployeeHireDate and EmployeeLeaveDateTime||
+|Workday to Azure AD User Provisioning|Can use a direct mapping. No expression is needed but may be used to adjust the time portion of EmployeeHireDate and EmployeeLeaveDateTime|EmployeeHireDate and EmployeeLeaveDateTime||
+|SuccessFactors to Azure AD User Provisioning|Can use a direct mapping. No expression is needed but may be used to adjust the time portion of EmployeeHireDate and EmployeeLeaveDateTime|EmployeeHireDate and EmployeeLeaveDateTime||
+
+For more information on expressions, see [Reference for writing expressions for attribute mappings in Azure Active Directory](../app-provisioning/functions-for-customizing-application-data.md)
+
+The expression examples above use endDate for SAP and StatusHireDate for Workday. However, you may opt to use different attributes.
+
+For example, you might use StatusContinuousFirstDayOfWork instead of StatusHireDate for Workday. In this instance your expression would be:
+
+ `FormatDateTime([StatusContinuousFirstDayOfWork], , "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")`
++
+The following table has a list of suggested attributes and their scenario recommendations.
+
+|HR Attribute|HR System|Scenario|Azure AD attribute|
+|--|--|--|--|
+|StatusHireDate|Workday|Joiner|EmployeeHireDate|
+|StatusContinuousFirstDayOfWork|Workday|Joiner|EmployeeHireDate|
+|StatusDateEnteredWorkforce|Workday|Joiner|EmployeeHireDate|
+|StatusOriginalHireDate|Workday|Joiner|EmployeeHireDate|
+|StatusEndEmploymentDate|Workday|Leaver|EmployeeLeaveDateTime|
+|StatusResignationDate|Workday|Leaver|EmployeeLeaveDateTime|
+|StatusRetirementDate|Workday|Leaver|EmployeeLeaveDateTime|
+|StatusTerminationDate|Workday|Leaver|EmployeeLeaveDateTime|
+|startDate|SAP SF|Joiner|EmployeeHireDate|
+|firstDateWorked|SAP SF|Joiner|EmployeeHireDate|
+|lastDateWorked|SAP SF|Leaver|EmployeeLeaveDateTime|
+|endDate|SAP SF|Leaver|EmployeeLeaveDateTime|
+
+For more attributes, see the [Workday attribute reference](../app-provisioning/workday-attribute-reference.md) and [SAP SuccessFactors attribute reference](../app-provisioning/sap-successfactors-attribute-reference.md)
++
+## Importance of time
+To ensure the timing accuracy of scheduled workflows, it's crucial to consider the following:
+
+- The time portion of the attribute must be set accordingly. For example, `employeeHireDate` should have a time at the beginning of the day, such as 1 AM or 5 AM, and `employeeLeaveDateTime` should have a time at the end of the day, such as 9 PM or 11 PM.
+ - The workflow won't run earlier than the time specified in the attribute; however, the [tenant schedule (default 3h)](customize-workflow-schedule.md) may delay the workflow run. For instance, if you set `employeeHireDate` to 8 AM but the tenant schedule doesn't run until 9 AM, the workflow won't be processed until then. If a new hire starts at 8 AM, set the time earlier (start time minus the tenant schedule interval) to ensure the workflow has run before the employee arrives.
+- If you're using Temporary Access Pass (TAP), it's recommended that you set the maximum lifetime to 24 hours. Doing so helps ensure that the TAP hasn't expired after being sent to an employee who may be in a different time zone. For more information, see [Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy).
+- When importing the data, you should understand if and how the source provides time zone information for your users, so that you can make adjustments to ensure timing accuracy.
++
+## Create a custom sync rule in Azure AD Connect cloud sync for EmployeeHireDate
+The following steps guide you through creating a synchronization rule using cloud sync.
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. Select **Azure AD Connect**.
+ 3. Select **Manage cloud sync**.
+ 4. Under **Configuration**, select your configuration.
+ 5. Select **Click to edit mappings**. This link opens the **Attribute mappings** screen.
+ 6. Select **Add attribute**.
+ 7. Fill in the following information:
+ - Mapping Type: Direct
+ - Source attribute: msDS-cloudExtensionAttribute1
+ - Default value: Leave blank
+ - Target attribute: employeeHireDate
+ - Apply this mapping: Always
+ 8. Select **Apply**.
+ 9. Back on the **Attribute mappings** screen, you should see your new attribute mapping.
+ 10. Select **Save schema**.
+
+For more information on attributes, see [Attribute mapping in Azure AD Connect cloud sync.](../cloud-sync/how-to-attribute-mapping.md)
+
+## How to create a custom sync rule in Azure AD Connect for EmployeeHireDate
+The following example walks you through setting up a custom synchronization rule that synchronizes an Active Directory attribute to the employeeHireDate attribute in Azure AD.
+
+ 1. Open a PowerShell window as administrator and run `Set-ADSyncScheduler -SyncCycleEnabled $false`.
+ 2. Go to Start\Azure AD Connect\ and open the Synchronization Rules Editor
+ 3. Ensure the direction at the top is set to **Inbound**.
+ 4. Select **Add Rule.**
+ 5. On the **Create Inbound synchronization rule** screen, enter the following information and select **Next**.
+ - Name: In from AD - EmployeeHireDate
+ - Connected System: contoso.com
+ - Connected System Object Type: user
+ - Metaverse Object Type: person
+ - Precedence: 200
+ ![Screenshot of creating an inbound synchronization rule basics.](media/how-to-lifecycle-workflow-sync-attributes/create-inbound-rule.png)
+ 6. On the **Scoping filter** screen, select **Next.**
+ 7. On the **Join rules** screen, select **Next**.
+ 8. On the **Transformations** screen, Under **Add transformations,** enter the following information.
+ - FlowType: Direct
+ - Target Attribute: employeeHireDate
+ - Source: msDS-cloudExtensionAttribute1
+ ![Screenshot of creating inbound synchronization rule transformations.](media/how-to-lifecycle-workflow-sync-attributes/create-inbound-rule-transformations.png)
+ 9. Select **Add**.
+ 10. In the Synchronization Rules Editor, ensure the direction at the top is set to **Outbound**.
+ 11. Select **Add Rule.**
+ 12. On the **Create Outbound synchronization rule** screen, enter the following information and select **Next**.
+ - Name: Out to Azure AD - EmployeeHireDate
+ - Connected System: &lt;your tenant&gt;
+ - Connected System Object Type: user
+ - Metaverse Object Type: person
+ - Precedence: 201
+ 13. On the **Scoping filter** screen, select **Next.**
+ 14. On the **Join rules** screen, select **Next**.
+ 15. On the **Transformations** screen, Under **Add transformations,** enter the following information.
+ - FlowType: Direct
+ - Target Attribute: employeeHireDate
+ - Source: employeeHireDate
+ ![Screenshot of create outbound synchronization rule transformations.](media/how-to-lifecycle-workflow-sync-attributes/create-outbound-rule-transformations.png)
+ 16. Select **Add**.
+ 17. Close the Synchronization Rules Editor, and then re-enable the sync scheduler by running `Set-ADSyncScheduler -SyncCycleEnabled $true` in PowerShell.
++++++
+For more information, see [How to customize a synchronization rule](../hybrid/how-to-connect-create-custom-sync-rule.md) and [Make a change to the default configuration.](../hybrid/how-to-connect-sync-change-the-configuration.md)
+++
+## Next steps
+- [What are lifecycle workflows?](what-are-lifecycle-workflows.md)
+- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)
+- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory Identity Governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/identity-governance-overview.md
For many organizations, identity lifecycle for employees is tied to the represen
Increasingly, scenarios require collaboration with people outside your organization. [Azure AD B2B](/azure/active-directory/b2b/) collaboration enables you to securely share your organization's applications and services with guest users and external partners from any organization, while maintaining control over your own corporate data. [Azure AD entitlement management](entitlement-management-overview.md) enables you to select which organization's users are allowed to request access and be added as B2B guests to your organization's directory, and ensures that these guests are removed when they no longer need access.
+Organizations are able to automate the identity lifecycle management process by using [Lifecycle Workflows](what-are-lifecycle-workflows.md). Workflows can be created to automatically run tasks for a user before they enter the organization, as they change states during their time in the organization, and as they leave the organization. For example, a workflow can be configured to send an email with a temporary password to a new user's manager, or a welcome email to the user on their first day.
+ ## Access lifecycle Organizations need a process to manage access beyond what was initially provisioned for a user when that user's identity was created. Furthermore, enterprise organizations need to be able to scale efficiently to be able to develop and enforce access policy and controls on an ongoing basis.
Typically, IT delegates access approval decisions to business decision makers.
Organizations can automate the access lifecycle process through technologies such as [dynamic groups](../enterprise-users/groups-dynamic-membership.md), coupled with user provisioning to [SaaS apps](../saas-apps/tutorial-list.md) or [apps integrated with SCIM](../app-provisioning/use-scim-to-provision-users-and-groups.md). Organizations can also control which [guest users have access to on-premises applications](../external-identities/hybrid-cloud-to-on-premises.md). These access rights can then be regularly reviewed using recurring [Azure AD access reviews](access-reviews-overview.md). [Azure AD entitlement management](entitlement-management-overview.md) also enables you to define how users request access across packages of group and team memberships, application roles, and SharePoint Online roles. For more information, see the [simplifying identity governance tasks with automation](#simplifying-identity-governance-tasks-with-automation) section below to select the appropriate Azure AD features for your access lifecycle automation scenarios.
+Lifecycle access can be automated using workflows. [Workflows can be created](create-lifecycle-workflow.md) to automatically add users to groups, where access to applications and resources is granted. Users can also be moved to different groups when their condition within the organization changes, and can even be removed entirely from all groups.
+ When a user attempts to access applications, Azure AD enforces [Conditional Access](../conditional-access/index.yml) policies. For example, Conditional Access policies can include displaying a [terms of use](../conditional-access/terms-of-use.md) and [ensuring the user has agreed to those terms](../conditional-access/require-tou.md) prior to being able to access an application. For more information, see [govern access to applications in your environment](identity-governance-applications-prepare.md). ## Privileged access lifecycle
In addition to the features listed above, additional Azure AD features frequentl
|Identity lifecycle (employees)|Admins can enable user account provisioning from Workday or SuccessFactors cloud HR, or on-premises HR.|[cloud HR to Azure AD user provisioning](../app-provisioning/plan-cloud-hr-provision.md)| |Identity lifecycle (guests)|Admins can enable self-service guest user onboarding from another Azure AD tenant, direct federation, One Time Passcode (OTP) or Google accounts. Guest users are automatically provisioned and deprovisioned subject to lifecycle policies.|[Entitlement management](entitlement-management-overview.md) using [B2B](../external-identities/what-is-b2b.md)| |Entitlement management|Resource owners can create access packages containing apps, Teams, Azure AD and Microsoft 365 groups, and SharePoint Online sites.|[Entitlement management](entitlement-management-overview.md)|
+|Lifecycle Workflows|Admins can enable the automation of the lifecycle process based on user conditions.|[Lifecycle Workflows](what-are-lifecycle-workflows.md)|
|Access requests|End users can request group membership or application access. End users, including guests from other organizations, can request access to access packages.|[Entitlement management](entitlement-management-overview.md)| |Workflow|Resource owners can define the approvers and escalation approvers for access requests and approvers for role activation requests. |[Entitlement management](entitlement-management-overview.md) and [PIM](../privileged-identity-management/pim-configure.md)| |Policy and role management|Admin can define conditional access policies for run-time access to applications. Resource owners can define policies for user's access via access packages.|[Conditional access](../conditional-access/overview.md) and [Entitlement management](entitlement-management-overview.md) policies|
active-directory Lifecycle Workflow Extensibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-extensibility.md
+
+ Title: Workflow Extensibility - Azure Active Directory
+description: Conceptual article discussing workflow extensibility with Lifecycle Workflows
++++++ Last updated : 07/06/2022++++
+# Lifecycle Workflows Custom Task Extension (Preview)
++
+Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you can use custom task extensions to call out to external systems as part of a workflow, which extends what your workflows can accomplish. For example, when a user joins your organization you can have a workflow with a custom task extension that assigns a Teams number, or have a separate workflow that grants a manager access to a user's email account when that user leaves. With the extensibility feature, Lifecycle Workflows currently support creating custom task extensions that call out to [Azure Logic Apps](/azure/logic-apps/logic-apps-overview).
++
+## Prerequisite Logic App roles required for integration with the custom task extension
+
+When linking your Azure Logic App with the custom task extension task, certain role requirements must be met before the link can be established.
+
+The roles on the Azure Logic App that make it compatible with the custom task extension are as follows:
+
+- **Logic App contributor**
+- **Contributor**
+- **Owner**
+
+> [!NOTE]
+> The **Logic App Operator** role alone will not make an Azure Logic App compatible with the custom task extension. For more information on the required **Logic App contributor** role, see: [Logic App Contributor](/azure/role-based-access-control/built-in-roles#logic-app-contributor).
+
+## Custom task extension deployment scenarios
+
+When you create a custom task extension, it can interact with Lifecycle Workflows in one of two ways:
+
+ :::image type="content" source="media/lifecycle-workflow-extensibility/task-extension-deployment-scenarios.png" alt-text="Screenshot of custom task deployment scenarios.":::
+
+- **Launch and complete** - The Azure Logic App is started, and the following task's execution immediately continues with no response expected from the Azure Logic App. This scenario is best suited if the Lifecycle workflow doesn't require any feedback (including status) from the Azure Logic App. With this scenario, as long as the Logic App is started successfully, the task is viewed as a success.
+- **Launch and wait** - The Azure Logic App is started, and the following task's execution waits on the response from the Logic App. You enter a time duration for how long the custom task extension should wait for a response from the Azure Logic App. If no response is received within the customer-defined duration window, the task is considered failed.
+ :::image type="content" source="media/lifecycle-workflow-extensibility/custom-task-launch-wait.png" alt-text="Screenshot of custom task launch and wait task choice.":::
+
+## Custom task extension integration with Azure Logic Apps high-level steps
+
+The high-level steps for the Azure Logic Apps integration are as follows:
+
+> [!NOTE]
+> Creating a custom task extension and logic app through the workflows page in the Azure portal will automate most of these steps. For a guide on creating a custom task extension this way, see: [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md).
+
+- **Create a consumption-based Azure Logic App**: A consumption-based Azure Logic App that the custom task extension calls.
+- **Configure the Azure Logic App so it's compatible with Lifecycle workflows**: Configuring the consumption-based Azure Logic App so that it can be used with the custom task extension.
+- **Build your custom business logic within your Azure Logic App**: Set up your business logic within the Azure Logic App using Logic App designer.
+- **Create a lifecycle workflow customTaskExtension which holds necessary information about the Azure Logic App**: Creating a custom task extension that references the configured Azure Logic App.
+- **Update or create a Lifecycle workflow with the "Run a custom task extension" task, referencing your created customTaskExtension**: Adding the newly created custom task extension to a new workflow, or updating an existing workflow to include it.
+
+## Logic App parameters used by the custom task
+
+When creating a custom task extension from the Azure portal, you can create a new Logic App or link to an existing one.
+
+The following information about the Logic App is supplied to the custom task extension:
+
+- Subscription
+- Resource group
+- Logic App name
++
+For a guide on supplying this information to a custom task extension via Microsoft Graph, see: [Configure a Logic App for Lifecycle Workflow use](configure-logic-app-lifecycle-workflows.md).
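+
+As a rough illustration of the shape of that call, the following sketch assumes the beta customTaskExtensions endpoint and uses placeholder values for the subscription, resource group, and Logic App name listed above. Other properties, such as authentication and callback settings, are omitted here and may be required by the current preview schema, so treat this as a starting point rather than a definitive payload.
+
+```http
+POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/customTaskExtensions
+Content-Type: application/json
+
+{
+  "displayName": "Assign Teams number",
+  "description": "Example custom task extension with placeholder values",
+  "endpointConfiguration": {
+    "@odata.type": "#microsoft.graph.logicAppTriggerEndpointConfiguration",
+    "subscriptionId": "<subscription id>",
+    "resourceGroupName": "<resource group>",
+    "logicAppWorkflowName": "<logic app name>"
+  }
+}
+```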
+
+## Next steps
+
+- [Trigger Logic Apps based on custom task extensions (Preview)](trigger-custom-task.md)
+- [Configure a Logic App for Lifecycle Workflow use (Preview)](configure-logic-app-lifecycle-workflows.md)
active-directory Lifecycle Workflow History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-history.md
+
+ Title: Lifecycle Workflow History
+description: Conceptual article about Lifecycle Workflows reporting and history capabilities
++++++ Last updated : 08/01/2022++++
+# Lifecycle Workflows history (Preview)
+++
+Workflows created using Lifecycle Workflows allow for the automation of lifecycle tasks for users no matter where they fall in the Joiner-Mover-Leaver (JML) model of their identity lifecycle in your organization. Making sure workflows are processed correctly is an important part of an organization's lifecycle management process, because workflows that aren't processed correctly can lead to security and compliance issues. With audit logs, every action that Lifecycle Workflows complete is recorded. With history features, Lifecycle Workflows also let you view workflow events summarized by user, run, or task. This reporting capability allows you to quickly see what ran for whom, and whether or not it was successful. In this article you'll learn the difference between audit logs and the three different types of history summaries you can query with Lifecycle Workflows, and when to use each to get more information about how your workflows ran for users in your organization.
+++
+## Audit Logs
+
+Every time a workflow is processed, an event is logged. These events are stored in the **Audit Logs** section, and can be used to gain information about workflows for historical and auditing purposes.
++
+On the **Audit Log** page, you're presented with a sequential list, by date, of every action Lifecycle Workflows has taken. You can filter this information based on the following parameters:
++
+|Filter |Description |
+|||
+|Date | You can filter a specific range for the audit logs from as short as 24 hours up to 30 days. |
+|Date option | You can filter by your tenant's local time, or by UTC. |
+|Service | The Lifecycle Workflow service. |
+|Category | Categories of the event being logged. Separated into <br><br> **All**- All events logged by Lifecycle Workflows.<br><br> **TaskManagement**- Task specific related events logged by Lifecycle Workflows. <br><br> **WorkflowManagement**- Events dealing with the workflow itself. |
+|Activity | You can filter based on specific activities, which are based on categories. |
+
+After filtering, you can also see other information in the log, such as:
+
+- **Status**: Whether or not the logged event was successful.
+- **Status Reason**: If the event failed, the reason why.
+- **Target(s)**: Who the logged event ran for, given as their Azure Active Directory object ID.
+- **Initiated by (actor)**: Who initiated the logged event, given as the user name.
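+
+If you prefer to pull these events programmatically, a hedged sketch of an audit log query is shown below. It assumes the events are surfaced through the standard directory audit log and that the service name is reported as 'Lifecycle Workflows'; adjust the filter to match what your tenant actually returns.
+
+```http
+GET https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$filter=loggedByService eq 'Lifecycle Workflows'
+```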
+
+## Lifecycle Workflow History Summaries
+
+While the large set of information contained in audit logs can be useful for compliance purposes, it might be too much information for regular administrative use. To make this information easier to read, Lifecycle Workflows provide history summaries. You can view these summaries in three ways:
+
+- **Users summary**: Shows a summary of users processed by a workflow, along with how many tasks failed, succeeded, and ran in total for each user.
+- **Runs summary**: Shows a summary of runs at the workflow level, noting successful, failed, and total task information for each run.
+- **Tasks summary**: Shows a summary of tasks processed by a workflow, along with how many tasks failed, succeeded, and ran in total.
+
+Summaries allow you to quickly see how a workflow ran, either overall or for specific users, without going into further detail in the logs. For a step-by-step guide on getting this information, see [Check the status of a workflow (Preview)](check-status-workflow.md).
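+
+These summaries can also be retrieved with Microsoft Graph. The sketch below assumes the beta reporting endpoints for Lifecycle Workflows (userProcessingResults, runs, and taskReports); substitute your own workflow ID, and verify the endpoint names against the current preview API before relying on them.
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow id>/userProcessingResults
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow id>/runs
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow id>/taskReports
+```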
+
+## Users Summary information
+
+User summaries allow you to view workflow information through the lens of users it has processed.
+++
+Within the user summary you're able to find the following information:
++
+|Parameter |Description |
+|||
+|Total Processed | The total number of users processed by a workflow during the selected time frame. |
+|Successful | The total number of users successfully processed by a workflow during the selected time frame. |
+|Failed | The total number of users whose processing failed in a workflow during the selected time frame. |
+|Total tasks | The total number of tasks processed for users in a workflow during the selected time frame. |
+|Failed tasks | The total number of failed tasks processed for users in a workflow during the selected time frame. |
++
+User summaries allow you to filter based on:
+
+- **Date**: You can filter a specific range from as short as 24 hours up to 30 days of when workflow ran.
+- **Status**: You can filter a specific status of the user processed. The supported statuses are: **Completed**, **In Progress**, **Queued**, **Canceled**, **Completed with errors**, and **Failed**.
+- **Workflow execution type**: You can filter on workflow execution type, such as **Scheduled** or **On-demand**.
+- **Completed date**: You can filter a specific range from as short as 24 hours up to 30 days of when the user was processed in a workflow.
+
+For a complete guide on getting user processed summary information, see: [User workflow history using the Azure portal](check-status-workflow.md#user-workflow-history-using-the-azure-portal).
++
+## Runs Summary
+
+Runs summaries allow you to view workflow information through the lens of its run history
++
+Within the runs summary you're able to find the following information:
++
+|Parameter |Description |
+|||
+|Total Processed | The total number of workflow runs. |
+|Successful | Workflow runs that completed successfully. |
+|Failed | Workflow runs that failed. |
+|Failed tasks | Workflow runs that completed with failed tasks. |
+
+Runs summaries allow you to filter based on:
+
+- **Date**: You can filter a specific range from as short as 24 hours up to 30 days of when workflow ran.
+- **Status**: You can filter a specific status of the workflow run. The supported statuses are: **Completed**, **In Progress**, **Queued**, **Canceled**, **Completed with errors**, and **Failed**.
+- **Workflow execution type**: You can filter on workflow execution type such as **Scheduled** or **On-demand**.
+- **Completed date**: You can filter a specific range from as short as 24 hours up to 30 days of when the workflow ran.
+
+For a complete guide on getting runs information, see: [Run workflow history using the Azure portal](check-status-workflow.md#run-workflow-history-using-the-azure-portal)
++
+## Tasks summary
+
+Task summaries allow you to view workflow information through the lens of its tasks.
++
+Within the tasks summary you're able to find the following information:
++
+|Parameter |Description |
+|||
+|Total Processed | The total number of tasks processed by a workflow. |
+|Successful | The number of tasks successfully processed by a workflow. |
+|Failed | The number of tasks that failed to process in a workflow. |
+|Unprocessed | The number of tasks left unprocessed by a workflow. |
+
+Task summaries allow you to filter based on:
+
+- **Date**: You can filter a specific range from as short as 24 hours up to 30 days of when workflow ran.
+- **Status**: You can filter a specific status of the workflow run. The supported statuses are: **Completed**, **In Progress**, **Queued**, **Canceled**, **Completed with errors**, and **Failed**.
+- **Completed date**: You can filter a specific range from as short as 24 hours up to 30 days of when the workflow ran.
+- **Tasks**: You can filter based on specific task names.
+
+Separating workflow processing from task processing is important because, when a workflow processes a user, certain tasks could succeed while others fail. Whether a task runs after a failed task in a workflow depends on parameters such as whether **continueOnError** is enabled, and on the task's placement within the workflow. For more information, see [Common task parameters](lifecycle-workflow-tasks.md#common-task-parameters-preview).
+
+## Next steps
+
+- [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md)
+- [Lifecycle Workflow templates](lifecycle-workflow-templates.md)
+
active-directory Lifecycle Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-tasks.md
+
+ Title: Lifecycle Workflows tasks and definitions - Azure Active Directory
+description: This article guides a user on Workflow task definitions and task parameters.
++++++ Last updated : 03/23/2022++
+# Lifecycle Workflow built-in tasks (preview)
+
+Lifecycle Workflows come with many pre-configured tasks that are designed to automate common lifecycle management scenarios. These built-in tasks can be used to build customized workflows that suit your organization's needs, and they can be configured within seconds. The tasks are also categorized based on the Joiner-Mover-Leaver model so that they can be easily placed into workflows based on need. In this article you'll find the complete list of tasks, information on the common parameters each task has, and a list of the unique parameters needed for each specific task.
++
+## Supported tasks (preview)
+
+Lifecycle Workflow's built-in tasks each include an identifier, known as **taskDefinitionID**, and can be used either to create new workflows from scratch or to be inserted into workflow templates so that they fit the needs of your organization. For more information on templates available for use with Lifecycle Workflows, see: [Lifecycle Workflow Templates](lifecycle-workflow-templates.md).
+
+Lifecycle Workflows currently support the following tasks:
+
+|Task |taskDefinitionID |
+|||
+|[Send welcome email to new hire](lifecycle-workflow-tasks.md#send-welcome-email-to-new-hire) | 70b29d51-b59a-4773-9280-8841dfd3f2ea |
+|[Generate Temporary Access Password and send via email to user's manager](lifecycle-workflow-tasks.md#generate-temporary-access-password-and-send-via-email-to-users-manager) | 1b555e50-7f65-41d5-b514-5894a026d10d |
+|[Add user to group](lifecycle-workflow-tasks.md#add-user-to-group) | 22085229-5809-45e8-97fd-270d28d66910 |
+|[Add user to team](lifecycle-workflow-tasks.md#add-user-to-team) | e440ed8d-25a1-4618-84ce-091ed5be5594 |
+|[Enable user account](lifecycle-workflow-tasks.md#enable-user-account) | 6fc52c9d-398b-4305-9763-15f42c1676fc |
+|[Run a custom task extension](lifecycle-workflow-tasks.md#run-a-custom-task-extension) | 4262b724-8dba-4fad-afc3-43fcbb497a0e |
+|[Disable user account](lifecycle-workflow-tasks.md#disable-user-account) | 1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950 |
+|[Remove user from group](lifecycle-workflow-tasks.md#remove-user-from-groups) | 1953a66c-751c-45e5-8bfe-01462c70da3c |
+|[Remove users from all groups](lifecycle-workflow-tasks.md#remove-users-from-all-groups) | b3a31406-2a15-4c9a-b25b-a658fa5f07fc |
+|[Remove user from teams](lifecycle-workflow-tasks.md#remove-user-from-teams) | 06aa7acb-01af-4824-8899-b14e5ed788d6 |
+|[Remove user from all teams](lifecycle-workflow-tasks.md#remove-users-from-all-teams) | 81f7b200-2816-4b3b-8c5d-dc556f07b024 |
+|[Remove all license assignments from user](lifecycle-workflow-tasks.md#remove-all-license-assignments-from-user) | 8fa97d28-3e52-4985-b3a9-a1126f9b8b4e |
+|[Delete user](lifecycle-workflow-tasks.md#delete-user) | 8d18588d-9ad3-4c0f-99d0-ec215f0e3dff |
+|[Send email to manager before user last day](lifecycle-workflow-tasks.md#send-email-to-manager-before-user-last-day) | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 |
+|[Send email on users last day](lifecycle-workflow-tasks.md#send-email-on-users-last-day) | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 |
+|[Send offboarding email to users manager after their last day](lifecycle-workflow-tasks.md#send-offboarding-email-to-users-manager-after-their-last-day) | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce |
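+
+If you want to review these task definitions programmatically, a minimal sketch of the call, assuming the beta taskDefinitions endpoint, is:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/taskDefinitions
+```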
++
+## Common task parameters (preview)
+
+Common task parameters are the non-unique parameters contained in every task. When adding tasks to a new workflow, or a workflow template, you can customize and configure these parameters so that they match your requirements.
++
+|Parameter |Definition |
+|||
+|category | A read-only string that identifies the category or categories of the task. Automatically determined when the taskDefinitionID is chosen. |
+|taskDefinitionId | A string referencing a taskDefinition which determines which task to run. |
+|isEnabled | A boolean value that denotes whether the task is set to run or not. If set to "true", the task will run. Defaults to true. |
+|displayName | A unique string that identifies the task. |
+|description | A string that describes the purpose of the task for administrative use. (Optional) |
+|executionSequence | A read-only integer that states the order in which the task runs in a workflow. For more information about executionSequence and workflow order, see: [Execution conditions](understanding-lifecycle-workflows.md#parts-of-a-workflow). |
+|continueOnError | A boolean value that determines whether the failure of this task stops subsequent tasks in the workflow from running. |
+|arguments | Contains unique parameters relevant for the given task. |
+++
+## Task details (preview)
+
+The following sections describe each task in detail, including the parameters and prerequisites required for it to run successfully. The parameters are noted as they appear both in the Azure portal and within Microsoft Graph. For information about editing Lifecycle Workflow tasks in general, see: [Manage workflow Versions](manage-workflow-tasks.md).
++
+### Send welcome email to new hire
++
+Lifecycle Workflows allow you to automate the sending of welcome emails to new hires in your organization. You're able to customize the task name and description for this task in the Azure portal.
++
+The Azure AD prerequisite to run the **Send welcome email to new hire** task is:
+
+- A populated mail attribute for the user.
++
+For Microsoft Graph the parameters for the **Send welcome email to new hire** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | joiner |
+|displayName | Send Welcome Email (Customizable by user) |
+|description | Send welcome email to new hire (Customizable by user) |
+|taskDefinitionId | 70b29d51-b59a-4773-9280-8841dfd3f2ea |
+++
+```Example for usage within the workflow
+{
+ "category": "joiner",
+ "description": "Send welcome email to new hire",
+ "displayName": "Send Welcome Email",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "70b29d51-b59a-4773-9280-8841dfd3f2ea",
+ "arguments": []
+}
+
+```
+
+### Generate Temporary Access Password and send via email to user's manager
+
+When a compatible user joins your organization, Lifecycle Workflows allow you to automatically generate a Temporary Access Password (TAP) and have it sent to the new user's manager.
+
+With this task in the Azure portal, you're able to give the task a name and description. You must also set the following:
+
+- **Activation duration**: How long the password is active.
+- **One time use**: Whether the password can be used only once.
+
+
+The Azure AD prerequisites to run the **Generate Temporary Access Password and send via email to user's manager** task are:
+
+- A populated manager attribute for the user.
+- A populated manager's mail attribute for the user.
+- An enabled TAP tenant policy. For more information, see [Enable the Temporary Access Pass policy](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy)
+
+
+> [!IMPORTANT]
+> For this task to work for a user, the user must not have any other authentication methods, sign-ins, or Azure AD role assignments.
+
+For Microsoft Graph the parameters for the **Generate Temporary Access Password and send via email to user's manager** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | joiner |
+|displayName | GenerateTAPAndSendEmail (Customizable by user) |
+|description | Generate Temporary Access Password and send via email to user's manager (Customizable by user) |
+|taskDefinitionId | 1b555e50-7f65-41d5-b514-5894a026d10d |
+|arguments | Argument contains the name parameter "tapLifetimeMinutes", which is the lifetime of the temporaryAccessPass in minutes starting at startDateTime. Minimum 10, maximum 43200 (equivalent to 30 days). The argument also contains the tapIsUsableOnce parameter, which determines whether the password is limited to one-time use. If true, the pass can be used once; if false, the pass can be used multiple times within the temporaryAccessPass lifetime. |
++
+```Example for usage within the workflow
+{
+ "category": "joiner",
+ "description": "Generate Temporary Access Password and send via email to user's manager",
+ "displayName": "GenerateTAPAndSendEmail",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "1b555e50-7f65-41d5-b514-5894a026d10d",
+ "arguments": [
+ {
+ "name": "tapLifetimeMinutes",
+ "value": "60"
+ },
+ {
+ "name": "tapIsUsableOnce",
+ "value": "true"
+ }
+ ]
+}
+
+```
+
+> [!NOTE]
+> The employee hire date is the same as the startDateTime used for the tapLifetimeMinutes parameter.
++
+### Add user to group
+
+Allows users to be added to a cloud-only group. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+
+You're able to customize the task name and description for this task.
++
+For Microsoft Graph the parameters for the **Add user to group** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | joiner,leaver |
+|displayName | AddUserToGroup (Customizable by user) |
+|description | Add user to group (Customizable by user) |
+|taskDefinitionId | 22085229-5809-45e8-97fd-270d28d66910 |
+|arguments | Argument contains a name parameter that is the "groupID", and a value parameter which is the group ID of the group you are adding the user to. |
++
+```Example for usage within the workflow
+{
+ "category": "joiner,leaver",
+ "description": "Add user to group",
+ "displayName": "AddUserToGroup",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "22085229-5809-45e8-97fd-270d28d66910",
+ "arguments": [
+ {
+ "name": "groupID",
+ "value": "0732f92d-6eb5-4560-80a4-4bf242a7d501"
+ }
+ ]
+}
+
+```
++
+### Add user to team
+
+You're able to add a user to an existing static team. You're able to customize the task name and description for this task.
++
+For Microsoft Graph the parameters for the **Add user to team** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | joiner,leaver |
+|displayName | AddUserToTeam (Customizable by user) |
+|description | Add user to team (Customizable by user) |
+|taskDefinitionId | e440ed8d-25a1-4618-84ce-091ed5be5594 |
+|argument | Argument contains a name parameter that is the "teamID", and a value parameter which is the team ID of the existing team you are adding a user to. |
+++
+```Example for usage within the workflow
+{
+ "category": "joiner,leaver",
+ "description": "Add user to team",
+ "displayName": "AddUserToTeam",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "e440ed8d-25a1-4618-84ce-091ed5be5594",
+ "arguments": [
+ {
+ "name": "teamID",
+ "value": "e3cc382a-c4b6-4a8c-b26d-a9a3855421bd"
+ }
+ ]
+}
+
+```
+
+### Enable user account
+
+Allows cloud-only user accounts to be enabled. You're able to customize the task name and description for this task in the Azure portal.
+++
+For Microsoft Graph the parameters for the **Enable user account** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | joiner,leaver |
+|displayName | EnableUserAccount (Customizable by user) |
+|description | Enable user account (Customizable by user) |
+|taskDefinitionId | 6fc52c9d-398b-4305-9763-15f42c1676fc |
+++
+```Example for usage within the workflow
+ {
+ "category": "joiner,leaver",
+ "description": "Enable user account",
+ "displayName": "EnableUserAccount",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "6fc52c9d-398b-4305-9763-15f42c1676fc",
+ "arguments": []
+}
+
+```
+
+### Run a Custom Task Extension
+
+Workflows can be configured to launch a custom task extension. You're able to customize the task name and description for this task using the Azure portal.
++
+The Azure AD prerequisite to run the **Run a Custom Task Extension** task is:
+
+- A Logic App that is compatible with the custom task extension. For more information, see: [Lifecycle workflow extensibility](lifecycle-workflow-extensibility.md).
+
+For Microsoft Graph the parameters for the **Run a Custom Task Extension** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | joiner,leaver |
+|displayName | Run a Custom Task Extension (Customizable by user) |
+|description | Run a custom Task Extension (Customizable by user) |
+|taskDefinitionId | "d79d1fcc-16be-490c-a865-f4533b1639ee |
+|argument | Argument contains a name parameter that is the "LogicAppURL", and a value parameter which is the Logic App HTTP trigger. |
++++
+```Example for usage within the workflow
+{
+ "category": "joiner,leaver",
+ "description": "Run a Custom Task Extension to call-out to an external system.",
+ "displayName": "Run a Custom Task Extension",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "d79d1fcc-16be-490c-a865-f4533b1639ee",
+ "arguments": [
+ {
+ "name": "CustomTaskExtensionID",
+ "value": ""<ID of your Custom Task Extension>""
+ }
+ ]
+}
+
+```
+
+For more information on setting up a Logic app to run with Lifecycle Workflows, see:[Trigger Logic Apps with custom Lifecycle Workflow tasks](trigger-custom-task.md).
+
+### Disable user account
+
+Allows cloud-only user accounts to be disabled. You're able to customize the task name and description for this task in the Azure portal.
+++
+For Microsoft Graph the parameters for the **Disable user account** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | joiner,leaver |
+|displayName | DisableUserAccount (Customizable by user) |
+|description | Disable user account (Customizable by user) |
+|taskDefinitionId | 1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950 |
++
+```Example for usage within the workflow
+{
+ "category": "joiner,leaver",
+ "description": "Disable user account",
+ "displayName": "DisableUserAccount",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "1dfdfcc7-52fa-4c2e-bf3a-e3919cc12950",
+ "arguments": []
+}
+
+```
+
+### Remove user from groups
+
+Allows you to remove a user from cloud-only groups. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
+
+You're able to customize the task name and description for this task in the Azure portal.
+++
+For Microsoft Graph the parameters for the **Remove user from groups** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | leaver |
+|displayName | Remove user from selected groups (Customizable by user) |
+|description | Remove user from membership of selected Azure AD groups (Customizable by user) |
+|taskDefinitionId | 1953a66c-751c-45e5-8bfe-01462c70da3c |
+|argument | Argument contains a name parameter that is the "groupID", and a value parameter which is the group Id(s) of the group or groups you are removing the user from. |
+++
+```Example for usage within the workflow
+{
+ "category": "leaver",
+ "continueOnError": true,
+ "displayName": "Remove user from selected groups",
+ "description": "Remove user from membership of selected Azure AD groups",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "1953a66c-751c-45e5-8bfe-01462c70da3c",
+ "arguments": [
+ {
+ "name": "groupID",
+ "value": "GroupId1, GroupId2, GroupId3, ..."
+ }
+ ]
+}
+
+```
+
+### Remove users from all groups
+
+Allows users to be removed from every cloud-only group they are a member of. To control access to on-premises applications and resources, you need to enable group writeback. For more information, see [Azure AD Connect group writeback](../hybrid/how-to-connect-group-writeback-v2.md).
++
+You're able to customize the task name and description for this task in the Azure portal.
+
+ :::image type="content" source="media/lifecycle-workflow-task/remove-all-groups-task.png" alt-text="Screenshot of Workflows task: remove user from all groups.":::
++
+For Microsoft Graph the parameters for the **Remove users from all groups** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | leaver |
+|displayName | Remove user from all groups (Customizable by user) |
+|description | Remove user from all Azure AD groups memberships (Customizable by user) |
+|taskDefinitionId | b3a31406-2a15-4c9a-b25b-a658fa5f07fc |
+++
+```Example for usage within the workflow
+{
+ "category": "leaver",
+ "continueOnError": true,
+ "displayName": "Remove user from all groups",
+ "description": "Remove user from all Azure AD groups memberships",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc",
+ "arguments": []
+}
+
+```
+
+### Remove User from Teams
+
+Allows a user to be removed from one or multiple static teams. You're able to customize the task name and description for this task in the Azure portal.
+
+For Microsoft Graph the parameters for the **Remove User from Teams** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | joiner,leaver |
+|displayName | Remove user from selected Teams (Customizable by user) |
+|description | Remove user from membership of selected Teams (Customizable by user) |
+|taskDefinitionId | 06aa7acb-01af-4824-8899-b14e5ed788d6 |
+|arguments | Argument contains a name parameter that is the "teamID", and a value parameter which is the Teams ID of the Teams you are removing the user from. |
++
+```Example for usage within the workflow
+{
+ "category": "joiner,leaver",
+ "continueOnError": true,
+ "displayName": "Remove user from selected Teams",
+ "description": "Remove user from membership of selected Teams",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "06aa7acb-01af-4824-8899-b14e5ed788d6",
+ "arguments": [
+ {
+ "name": "teamID",
+ "value": "TeamId1, TeamId2, TeamId3, ..."
+ }
+ ]
+}
+
+```
+
+### Remove users from all teams
+
+Allows users to be removed from every static team they are a member of. You're able to customize the task name and description for this task in the Azure portal.
+
+For Microsoft Graph the parameters for the **Remove users from all teams** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | leaver |
+|displayName | Remove user from all Teams memberships (Customizable by user) |
+|description | Remove user from all Teams (Customizable by user) |
+|taskDefinitionId | 81f7b200-2816-4b3b-8c5d-dc556f07b024 |
+++
+```Example for usage within the workflow
+{
+ "category": "leaver",
+ "continueOnError": true,
+ "description": "Remove user from all Teams",
+ "displayName": "Remove user from all Teams memberships",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
+ "arguments": []
+}
+
+```
+
+### Remove all license assignments from User
+
+Allows all direct license assignments to be removed from a user. For group-based license assignments, you would run a task to remove the user from the group the license assignment is part of.
+
+You're able to customize the task name and description for this task in the Azure portal.
+
+For Microsoft Graph the parameters for the **Remove all license assignment from user** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | leaver |
+|displayName | Remove all licenses for user (Customizable by user) |
+|description | Remove all licenses assigned to the user (Customizable by user) |
+|taskDefinitionId | 8fa97d28-3e52-4985-b3a9-a1126f9b8b4e |
+++
+```Example for usage within the workflow
+{
+ "category": "leaver",
+ "continueOnError": true,
+ "displayName": "Remove all licenses for user",
+ "description": "Remove all licenses assigned to the user",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e",
+ "arguments": []
+}
+
+```
+
+### Delete User
+
+Allows cloud-only user accounts to be deleted. You're able to customize the task name and description for this task in the Azure portal.
++
+For Microsoft Graph the parameters for the **Delete User** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | leaver |
+|displayName | Delete user account (Customizable by user) |
+|description | Delete user account in Azure AD (Customizable by user) |
+|taskDefinitionId | 8d18588d-9ad3-4c0f-99d0-ec215f0e3dff |
+++
+```Example for usage within the workflow
+{
+ "category": "leaver",
+ "continueOnError": true,
+ "displayName": "Delete user account",
+ "description": "Delete user account in Azure AD",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
+ "arguments": []
+}
+
+```
+
+### Send email to manager before user last day
+
+Allows an email to be sent to a user's manager before their last day. You're able to customize the task name and the description for this task in the Azure portal.
++
+The Azure AD prerequisites to run the **Send email before user last day** task are:
+
+- A populated manager attribute for the user.
+- A populated manager's mail attribute for the user.
+
+For Microsoft Graph the parameters for the **Send email before user last day** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | leaver |
+|displayName | Send email before user's last day (Customizable by user) |
+|description | Send offboarding email to user's manager before the last day of work (Customizable by user) |
+|taskDefinitionId | 52853a3e-f4e5-4eb8-bb24-1ac09a1da935 |
+
+```Example for usage within the workflow
+{
+ "category": "leaver",
+ "continueOnError": true,
+ "displayName": "Send email before userΓÇÖs last day",
+ "description": "Send offboarding email to userΓÇÖs manager before the last day of work",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "52853a3e-f4e5-4eb8-bb24-1ac09a1da935",
+ "arguments": []
+}
+
+```
+
+### Send email on users last day
+
+Allows an email to be sent to a user's manager on their last day. You're able to customize the task name and the description for this task in the Azure portal.
+
+The Azure AD prerequisites to run the **Send email on user last day** task are:
+
+- A populated manager attribute for the user.
+- A populated manager's mail attribute for the user.
+
+For Microsoft Graph the parameters for the **Send email on user last day** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | leaver |
+|displayName | Send email on user's last day (Customizable by user) |
+|description | Send offboarding email to user's manager on the last day of work (Customizable by user) |
+|taskDefinitionId | 9c0a1eaf-5bda-4392-9d9e-6e155bb57411 |
+
+```Example for usage within the workflow
+{
+ "category": "leaver",
+ "continueOnError": true,
+ "displayName": "Send email on userΓÇÖs last day",
+ "description": "Send offboarding email to userΓÇÖs manager on the last day of work",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "9c0a1eaf-5bda-4392-9d9e-6e155bb57411",
+ "arguments": []
+}
+
+```
+
+### Send offboarding email to users manager after their last day
+
+Allows an email containing offboarding information to be sent to the user's manager after their last day. You're able to customize the task name and description for this task in the Azure portal.
+
+The Azure AD prerequisites to run the **Send offboarding email to users manager after their last day** task are:
+
+- A populated manager attribute for the user.
+- A populated manager's mail attribute for the user.
++
+For Microsoft Graph the parameters for the **Send offboarding email to users manager after their last day** task are as follows:
+
+|Parameter |Definition |
+|||
+|category | leaver |
+|displayName | Send offboarding email to user's manager after the last day of work (Customizable by user) |
+|description | Send email after user's last day (Customizable by user) |
+|taskDefinitionId | 6f22ddd4-b3a5-47a4-a846-0d7c201a49ce |
+
+```Example for usage within the workflow
+{
+ "category": "leaver",
+ "continueOnError": true,
+ "displayName": "Send offboarding email to userΓÇÖs manager after the last day of work",
+ "description": "Send email after userΓÇÖs last day",
+ "isEnabled": true,
+ "continueOnError": true,
+ "taskDefinitionId": "6f22ddd4-b3a5-47a4-a846-0d7c201a49ce",
+ "arguments": []
+}
+
+```
+
+## Next steps
+
+- [Manage lifecycle workflows properties](manage-workflow-properties.md)
+- [Manage lifecycle workflow versions](delete-lifecycle-workflow.md)
active-directory Lifecycle Workflow Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-templates.md
+
+ Title: Workflow Templates and categories - Azure Active Directory
+description: Conceptual article discussing workflow templates and categories with Lifecycle Workflows
++++++ Last updated : 07/06/2022+++
+# Lifecycle Workflows templates (Preview)
++
+Lifecycle Workflows allow you to automate the lifecycle management process for your organization by creating workflows that contain both built-in tasks and custom task extensions. These workflows, and the tasks within them, all fall into categories based on the Joiner-Mover-Leaver (JML) model of lifecycle management. To make this process even more efficient, Lifecycle Workflows also provide templates, which you can use to accelerate the setup, creation, and configuration of common lifecycle management processes. You can create workflows based on these templates as is, or you can customize them further to match the requirements of users within your organization. In this article you'll find the complete list of workflow templates, common template parameters, default template parameters for specific templates, and the list of compatible tasks for each template. For full task definitions, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md).
++
+## Lifecycle Workflow Templates
+
+Lifecycle Workflows currently have six built-in templates that you can use or customize. The list of templates is as follows:
+
+- [Onboard pre-hire employee](lifecycle-workflow-templates.md#onboard-pre-hire-employee)
+- [Onboard new hire employee](lifecycle-workflow-templates.md#onboard-new-hire-employee)
+- [Real-time employee termination](lifecycle-workflow-templates.md#real-time-employee-termination)
+- [Pre-Offboarding of an employee](lifecycle-workflow-templates.md#pre-offboarding-of-an-employee)
+- [Offboard an employee](lifecycle-workflow-templates.md#offboard-an-employee)
+- [Post-Offboarding of an employee](lifecycle-workflow-templates.md#post-offboarding-of-an-employee)
+
+For a complete guide on creating a new workflow from a template, see: [Tutorial: On-boarding users to your organization using Lifecycle workflows with Azure portal](tutorial-onboard-custom-workflow-portal.md).
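+
+You can also enumerate the available templates with Microsoft Graph. A minimal sketch, assuming the beta workflowTemplates endpoint, is:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflowTemplates
+```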
+
+### Onboard pre-hire employee
+
+ The **Onboard pre-hire employee** template is designed to configure tasks that must be completed before an employee's start date.
++
+The default specific parameters and properties for the **Onboard pre-hire employee** template are as follows:
+++
+|Parameter |Default value |Customizable |
+||||
+|Category | Joiner | ❌ |
+|Trigger Type | Trigger and Scope Based | ❌ |
+|Days from event | -7 | ✔️ |
+|Event timing | Before | ❌ |
+|Event User attribute | EmployeeHireDate | ❌ |
+|Scope type | Rule based | ❌ |
+|Execution conditions | (department eq 'Marketing') | ✔️ |
+|Tasks | **Generate TAP And Send Email** | ✔️ |
++
+### Onboard new hire employee
+
+The **Onboard new-hire employee** template is designed to configure tasks that will be completed on an employee's start date.
++
+The default specific parameters for the **Onboard new hire employee** template are as follows:
++
+|Parameter |Default value |Customizable |
+||||
+|Category | Joiner | ❌ |
+|Trigger Type | Trigger and Scope Based | ❌ |
+|Days from event | 0 | ✔️ |
+|Event timing | On | ❌ |
+|Event User attribute | EmployeeHireDate | ❌ |
+|Scope type | Rule based | ❌ |
+|Execution conditions | (department eq 'Marketing') | ✔️ |
+|Tasks | **Add User To Group**, **Enable User Account**, **Send Welcome Email** | ✔️ |
+++
+### Real-time employee termination
+
+The **Real-time employee termination** template is designed to configure tasks that will be completed immediately when an employee is terminated.
++
+The default specific parameters for the **Real-time employee termination** template are as follows:
++
+|Parameter |Default value |Customizable |
+||||
+|Category | Leaver | ❌ |
+|Trigger Type | On-demand | ❌ |
+|Tasks | **Remove user from all groups**, **Delete User Account**, **Remove user from all Teams** | ✔️ |
+
+> [!NOTE]
+> As this template is designed to run on-demand, no execution condition is present.
+++
+### Pre-Offboarding of an employee
+
+The **Pre-Offboarding of an employee** template is designed to configure tasks that will be completed before an employee's last day of work.
++
+The default specific parameters for the **Pre-Offboarding of an employee** template are as follows:
++
+|Parameter |Default value |Customizable |
+||||
+|Category | Leaver | ❌ |
+|Trigger Type | Trigger and Scope Based | ❌ |
+|Days from event | 7 | ✔️ |
+|Event timing | Before | ❌ |
+|Event User attribute | employeeLeaveDateTime | ❌ |
+|Scope type | Rule based | ❌ |
+|Execution condition | None | ✔️ |
+|Tasks | **Remove user from selected groups**, **Remove user from selected Teams** | ✔️ |
++++
+### Offboard an employee
+
+The **Offboard an employee** template is designed to configure tasks that will be completed on an employee's last day of work.
++
+The default specific parameters for the **Offboard an employee** template are as follows:
++
+|Parameter |Default value |Customizable |
+||||
+|Category | Leaver | ❌ |
+|Trigger Type | Trigger and Scope Based | ❌ |
+|Days from event | 0 | ✔️ |
+|Event timing | On | ❌ |
+|Event User attribute | employeeLeaveDateTime | ❌ |
+|Scope type | Rule based | ❌ |
+|Execution condition | (department eq 'Marketing') | ✔️ |
+|Tasks | **Disable User Account**, **Remove user from all groups**, **Remove user from all Teams** | ✔️ |
++
+### Post-Offboarding of an employee
+
+The **Post-Offboarding of an employee** template is designed to configure tasks that will be completed after an employee's last day of work.
++
+The default specific parameters for the **Post-Offboarding of an employee** template are as follows:
+
+|Parameter |Default value |Customizable |
+||||
+|Category | Leaver | ❌ |
+|Trigger Type | Trigger and Scope Based | ❌ |
+|Days from event | 7 | ✔️ |
+|Event timing | After | ❌ |
+|Event User attribute | employeeLeaveDateTime | ❌ |
+|Scope type | Rule based | ❌ |
+|Execution condition | (department eq 'Marketing') | ✔️ |
+|Tasks | **Remove all licenses for user**, **Remove user from all Teams**, **Delete User Account** | ✔️ |
++++
+## Next steps
+
+- [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md)
+- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
++
active-directory Lifecycle Workflow Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflow-versioning.md
+
+ Title: Workflow Versioning - Azure Active Directory
+description: An article discussing Lifecycle workflow versioning and history
++++++ Last updated : 07/06/2022+++
+# Lifecycle Workflows Versioning
+++
+Workflows created using Lifecycle Workflows can be updated as needed to satisfy your organization's requirements for managing and auditing the lifecycle of its users. To manage updates to workflows, Lifecycle Workflows introduce the concept of workflow versioning. A workflow version is a new version of an existing workflow, created when you update the workflow's tasks or execution conditions. Workflow versions can change the actions, or even the scope, of an existing workflow. Understanding how workflow versioning is handled during the update process lets you set up workflows strategically, so that the tasks and conditions in a workflow are always relevant for the users it processes.
++
+## Versioning benefits
+
+Versioning with Lifecycle Workflows provides many benefits over the alternative of creating a new workflow for each change. Versioning improves reporting for both troubleshooting and record keeping in the following ways:
+
+- **Long-term retention** - Versioning allows for longer retention of workflow information than the audit logs alone. While the audit logs only store information from the previous 30 days, versioning lets you keep track of workflow details from the time the workflow was created.
+- **Traceability** - Versioning allows you to track which specific version of a workflow processed a user.
+
+## Workflow properties and versions
+
+While updates to workflows can trigger the creation of a new version, this isn't always the case. Some workflow parameters, known as basic properties, can be updated without creating a new version of the workflow. These parameters are as follows:
+
+- displayName
+- description
+- isEnabled
+- IsSchedulingEnabled
++
+You'll find these corresponding parameters in the Azure portal under the **Properties** section of the workflow you're updating.
+
+For a step by step guide on updating these properties using both the Azure portal and the API via Microsoft Graph, see: [Manage workflow properties](manage-workflow-properties.md).
+
+Properties that will trigger the creation of a new version are as follows:
+
+- tasks
+- executionConditions
+
++
+In the Azure portal, a new version is created automatically as soon as you make these updates. When you use the API via Microsoft Graph, creating a new version of a workflow requires calling the workflow creation API again with the changes included. For a step-by-step guide to updating tasks or execution conditions, see: [Manage Workflow Versions](manage-workflow-tasks.md).
+
+> [!NOTE]
+> If the workflow is on-demand, the configuration information associated with execution conditions won't be present.
+
+## What details are contained in workflow version history
+
+Unlike changes to the basic properties of a workflow, newly created workflow versions can be vastly different from previous versions. Tasks can be added or removed, and who the workflow runs for can change. Because of how much a workflow can change between versions, version details provide information about not only the current version of the workflow, but also its previous iterations.
+
+Details contained in version information as shown in the Azure portal:
+++
+The detailed **Version information** is as follows:
++
+|parameter |description |
+|||
+|Version Number | An integer denoting which version of the workflow the information is for. Sequentially goes up with each new workflow version. |
+|Last modified date | The last time the workflow was updated. For previous versions of workflows, the last modified date will always be the time the next version was created. |
+|Last modified by | Who last modified this workflow version. |
+|Created date | The date and time for when a workflow version was created. |
+|Created by | Who created this specific version of the workflow. |
+|Name | Name of the workflow at this version. |
+|Description | Description of the workflow at this version. |
+|Category | Category of the workflow. |
+|Execution Conditions | Defines who the workflow runs for, and when it runs, in this version. |
+|Tasks | The tasks present in this workflow version. If viewing through the API, you're also able to see task arguments. For specific task definitions, see: [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md) |
+++
+## Next steps
+
+- [Manage workflow Properties (Preview)](manage-workflow-properties.md)
+- [Manage workflow versions (Preview)](manage-workflow-tasks.md)
+
active-directory Lifecycle Workflows Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflows-deployment.md
+
+ Title: Plan a Lifecycle Workflow deployment
+description: Planning guide for a successful Lifecycle Workflow deployment in Azure AD.
+
+documentationCenter: ''
++
+editor:
++
+ na
++ Last updated : 04/16/2021+++++
+# Plan a Lifecycle Workflow deployment
+
+[Lifecycle Workflows](what-are-lifecycle-workflows.md) help your organization to manage Azure AD users by increasing automation. With Lifecycle Workflows, you can:
+
+- **Extend** your HR-driven provisioning process with other workflows that simplify and automate tasks.
+- **Centralize** your workflow process so you can easily create and manage workflows all in one location.
+- **Troubleshoot** workflow scenarios with the Workflow history and Audit logs with minimal effort.
+- **Manage** user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles is minimized.
+- **Reduce** or remove manual tasks that were done in the past with automated Lifecycle Workflows.
+- **Apply** Logic Apps to extend workflows for more complex scenarios using your existing Logic Apps.
+
+Lifecycle Workflows are an [Azure AD Identity Governance](identity-governance-overview.md) capability. The other capabilities are [entitlement management](entitlement-management-overview.md), [access reviews](access-reviews-overview.md), [Privileged Identity Management (PIM)](../privileged-identity-management/pim-configure.md), and [terms of use](../conditional-access/terms-of-use.md). Together, they help you address these questions:
+
+ - Which users should have access to which resources?
+ - What are those users doing with that access?
+ - Is there effective organizational control for managing access?
+ - Can auditors verify that the controls are working?
+ - Are new users ready to go on day one, and is their access removed in a timely manner when they leave?
+
+Planning your Lifecycle Workflow deployment is essential to make sure you achieve your desired governance strategy for users in your organization.
+
+For more information on deployment plans, see [Azure AD deployment plans](../fundamentals/active-directory-deployment-plans.md).
+
+## Licenses
+++
+>[!Note]
+>Be aware that if your license expires, any workflows that you have created will stop working.
+>
+>Workflows that are in progress when a license expires will continue to execute, but no new ones will be processed.
+
+### Plan the Lifecycle Workflow deployment project
+
+Consider your organizational needs to determine the strategy for deploying Lifecycle Workflows in your environment.
+
+### Engage the right stakeholders
+
+When technology projects fail, they typically do so because of mismatched expectations on impact, outcomes, and responsibilities. To avoid these pitfalls, [ensure that you're engaging the right stakeholders](../fundamentals/active-directory-deployment-plans.md) and that project roles are clear.
+
+For Lifecycle Workflows, you'll likely include representatives from the following teams within your organization:
+
+- **IT administration** manages your IT infrastructure and administers your cloud investments and software as a service (SaaS) apps. This team:
+
+ * Reviews how Lifecycle Workflows relate to infrastructure and apps, including Microsoft 365 and Azure AD.
+ * Schedules and runs Lifecycle Workflows on users.
+ * Ensures that programmatic Lifecycle Workflows, via Microsoft Graph or extensibility, are governed and reviewed.
++
+- **Security Owner** ensures that the plan will meet the security requirements of your organization. This team:
+  - Ensures Lifecycle Workflows meet organizational security policies.
+
+- **Compliance manager** ensures that the organization follows internal policy and complies with regulations. This team:
+
+ * Requests or schedules new Lifecycle Workflow reviews.
+ * Assesses processes and procedures for reviewing Lifecycle Workflows, which include documentation and record keeping for compliance.
+ * Reviews results of past reviews for most critical resources.
+- **HR Representative** - Assists with attribute mapping and population in HR provisioning scenarios. This team:
+ * Helps determine attributes that will be used to populate employeeHireDate and employeeLeaveDateTime.
+ * Ensures source attributes are populated and have values.
+ * Identifies and suggests alternate attributes that could be mapped to employeeHireDate and employeeLeaveDateTime.
+
+- **Development teams** build and maintain applications for your organization. This team:
+ * Develops custom workflows using Microsoft Graph.
+ * Integrates Lifecycle Workflows with Logic Apps via extensibility.
+++
+### Plan communications
+
+Communication is critical to the success of any new business process. Proactively communicate to users how and when their experience will change. Tell them how to gain support if they experience issues.
+
+### Communicate changes in accountability
+
+Lifecycle Workflows support shifting responsibility of manual processes to business owners. Decoupling these processes from the IT department drives more accuracy and automation. This shift is a cultural change in the resource owner's accountability and responsibility. Proactively communicate this change and ensure resource owners are trained and able to use the insights to make good decisions.
+++
+## Introduction to Lifecycle Workflows
+
+This section introduces Lifecycle Workflow concepts you should know before you plan your deployment.
++++
+## Prerequisites to deploying Lifecycle Workflows
+The following table lists information about your organization and the technologies that need to be in place before you deploy Lifecycle Workflows. Ensure that you can answer yes to each item before attempting to deploy Lifecycle Workflows.
+
+|Item|Description|Documentation|
+|--|--|--|
+|Inbound Provisioning|You have a process to create user accounts for employees in Azure AD such as HR inbound, SuccessFactors, or MIM.<br><br> Alternatively you have a process to create user accounts in Active Directory and those accounts are provisioned to Azure AD.|[Workday to Active Directory](../saas-apps/workday-inbound-tutorial.md)<br><br>[Workday to Azure AD](../saas-apps/workday-inbound-tutorial.md)<br><br>[SuccessFactors to Active Directory](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)</br></br>[SuccessFactors to Azure AD](../saas-apps/sap-successfactors-inbound-provisioning-cloud-only-tutorial.md)<br><br>[Azure AD Connect](../hybrid/whatis-azure-ad-connect-v2.md)<br><br>[Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md)|
+|Attribute synchronization|The accounts in Azure AD have the employeeHireDate and employeeLeaveDateTime attributes populated. The values may be populated when the accounts are created from an HR system, or synchronized from AD using Azure AD Connect or cloud sync. Any additional attributes that will be used to determine the scope, such as department, are populated with data or can be populated (a sample call follows this table).|[How to synchronize attributes for Lifecycle Workflows](how-to-lifecycle-workflow-sync-attributes.md)|
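+
+If you need to populate these attributes manually while testing, a Microsoft Graph call similar to the following can set the hire date on a user. Treat this as a hedged sketch: the user ID and date value are placeholders, and writing **employeeLeaveDateTime** additionally requires the beta endpoint and the User-LifeCycleInfo.ReadWrite.All permission.
+
+```http
+PATCH https://graph.microsoft.com/v1.0/users/<user id>
+Content-type: application/json
+```
+
+```Request body
+{
+    "employeeHireDate": "2022-10-03T08:00:00Z"
+}
+```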
+
+## Understanding parts of a workflow
+Before you begin planning a Lifecycle Workflow deployment, you should become familiar with the parts of a workflow and the terminology around Lifecycle Workflows.
+
+The [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md) document uses the portal to explain the parts of a workflow. The [Developer API reference Lifecycle Workflows](lifecycle-workflows-developer-reference.md) document uses a Microsoft Graph example to explain the parts of a workflow.
+
+You can use these documents to become familiar with the parts of a workflow before deploying workflows.
+
+## Limitations and constraints
+The following table provides information that you need to be aware of as you create and deploy Lifecycle workflows.
+
+|Item|Description|
+|--|--|
+|Workflows|50 workflow limit per tenant|
+|Number of custom tasks|limit of 25 per workflow|
+|Value range for offsetInDays|Between -60 and 60 days|
+|Workflow execution schedule|Default every 3 hours - can be set to run anywhere from 1 to 24 hours|
+|Custom task extensions|Limit of 100|
+|On-demand user limit|You can run an on-demand workflow against a maximum of 10 users|
+|Extensibility callback timeout limit|Min 3 minutes - Maximum 5 hours|
+
+The following is additional information you should be aware of.
+
+ - You cannot enable the schedule for the Real-Time Leaver scenario. This is by design.
++++
+## Lifecycle workflow creation checklist
+The following table provides a quick checklist of steps you can use when designing and planning your workflows.
+
+|Step|Description|
+|--|--|
+|[Determine your scenario](#determine-your-scenario)|Determine what scenario you're addressing with a workflow|
+|[Determine the execution conditions](#determine-the-execution-conditions)|Determine who and when the workflow will run|
+|[Review the tasks](#review-the-tasks)|Review and add additional tasks to the workflow|
+|[Create your workflow](#create-your-workflow)|Create your workflow after planning and design.|
+|[Plan a pilot](#plan-a-pilot)|Plan to pilot, run, and test your workflow.|
+
+## Determine your scenario
+Before building a Lifecycle Workflow in the portal, you should determine which scenario or scenarios you wish to deploy. You can use the table below to see the current list of available scenarios. The scenarios are based on the templates that are available in the portal, and the table lists the tasks associated with each one.
+
+|Scenario|Pre-defined Tasks|
+|--|--|
+|Onboard pre-hire employee| Generate TAP and Send Email|
+|Onboard new hire employee|Enable User Account</br>Send Welcome Email</br>Add User To Groups|
+|Real-time employee termination|Remove user from all groups</br>Remove user from all Teams</br>Delete User Account|
+|Pre-Offboarding of an employee|Remove user from selected groups</br>Remove user from selected Teams|
+|Offboard an employee|Disable User Account</br>Remove user from all groups</br>Remove user from all Teams|
+|Post-Offboarding of an employee|Remove all licenses for user</br>Remove user from all Teams</br>Delete User Account|
+
+For more information on the built-in templates, see [Lifecycle Workflow templates.](lifecycle-workflow-templates.md)
++
+## Determine the execution conditions
+Now that you've determined your scenarios, you need to look at which users in your organization the scenarios will apply to.
+
+An Execution condition is the part of a workflow that defines the scope of **who** and the trigger of **when** a workflow will be performed.
+
+The [scope](understanding-lifecycle-workflows.md#configure-scope) determines who the workflow runs against. The scope is defined by a rule that filters users based on a condition. For example, the rule `"rule": "(department eq 'sales')"` runs the tasks only on users who are members of the sales department.
+
+The [trigger](understanding-lifecycle-workflows.md#trigger-details) determines when the workflow will run. The trigger can either be on-demand, which is immediate, or time-based. Most of the pre-defined templates in the portal are time-based.
+
+### Attribute information
+The scope of a workflow uses attributes under the rule section. You can add the following extra conditionals to further refine **who** the tasks are applied to.
+ - And
+ - And not
+ - Or
+ - Or not
+
+You can also choose from numerous user attributes.
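+
+For example, combining these conditionals produces a scope rule like the following sketch, where the department and country values are purely illustrative:
+
+```Request body
+"scope": {
+    "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
+    "rule": "(department eq 'Sales') and (country eq 'US')"
+}
+```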
+
+[![Screenshot of lifecycle workflow rule](media/lifecycle-workflows-deployment/attribute-1.png)](media/lifecycle-workflows-deployment/attribute-1.png#lightbox)
+
+However, before selecting an attribute to use in your execution condition, you need to ensure that the attribute is either populated with data or that you can begin populating it with the required data.
+
+Not all of these attributes are populated by default, so verify with your HR administrator or IT administrators that the attributes you need are populated when using HR inbound cloud-only provisioning, Azure AD Connect, or cloud sync.
+
+### Time information
+The following is some important information regarding time zones that you should be aware of when designing workflows.
+- Workday and SAP SuccessFactors always send the time in Coordinated Universal Time (UTC).
+- If you're in a single time zone, it's recommended that you hardcode the time portion to something that works for you. An example would be 5 AM for new hire scenarios and 10 PM for last day of work scenarios.
+- If you're using a Temporary Access Pass (TAP), it's recommended that you set the maximum lifetime to 24 hours. Doing so helps ensure that the TAP hasn't expired after being sent to an employee who may be in a different time zone. For more information, see [Configure Temporary Access Pass in Azure AD to register Passwordless authentication methods.](../authentication/howto-authentication-temporary-access-pass.md#enable-the-temporary-access-pass-policy) A sketch of the corresponding task argument is shown below.
+
+For more information, see [How to synchronize attributes for Lifecycle Workflows](../governance/how-to-lifecycle-workflow-sync-attributes.md)
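+
+As an illustration of the TAP guidance above, the **Generate TAP And Send Email** task accepts a **tapLifetimeMinutes** argument (see [Lifecycle Workflow tasks](lifecycle-workflow-tasks.md)), so a 24-hour pass corresponds to a value of 1440. The snippet below is a sketch of that single argument, not a complete task definition, and it assumes your tenant's TAP policy allows a 24-hour lifetime:
+
+```Request body
+"arguments":[
+    {
+        "name": "tapLifetimeMinutes",
+        "value": "1440"
+    }
+]
+```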
+
+## Review the tasks
+Now that you've determined the scenario, and who and when the workflow runs, you should consider whether the pre-defined tasks are sufficient or whether you'll need additional tasks. The table below lists the pre-defined tasks that are currently in the portal. Use this table to determine whether you want to add more tasks.
+
+|Task|Description|Relevant Scenarios|
+|--|--|--|
+|Add user to groups|Add user to selected groups| Joiner - Leaver|
+|Add user to selected teams| Add user to Teams| Joiner - Leaver|
+|Delete User Account| Delete user account in Azure AD| Leaver|
+|Disable User Account| Disable user account in the directory| Joiner - Leaver|
+|Enable User Account| Enable user account in the directory| Joiner - Leaver|
+|Generate TAP and Send Email| Generate Temporary Access Pass and send via email to user's manager| Joiner|
+|Remove all licenses of user| Remove all licenses assigned to the user| Leaver|
+|Remove user from all groups| Remove user from all Azure AD group memberships| Leaver|
+|Remove user from all Teams| Remove user from all Teams memberships| Leaver|
+|Remove user from selected groups| Remove user from membership of selected Azure AD groups| Joiner - Leaver|
+|Remove user from selected Teams| Remove user from membership of selected Teams| Joiner - Leaver|
+|Run a Custom Task Extension| Run a Custom Task Extension to callout to an external system| Joiner - Leaver|
+|Send email after user's last day| Send offboarding email to user's manager after the last day of work| Leaver|
+|Send email before user's last day| Send offboarding email to user's manager before the last day of work| Leaver|
+|Send email on user's last day| Send offboarding email to user's manager on the last day of work| Leaver|
+|Send Welcome Email| Send welcome email to new hire| Joiner|
++
+For more information on tasks, see [Lifecycle Workflow tasks](lifecycle-workflow-tasks.md).
+
+### Group and team tasks
+If you're using a group or team task, the workflow needs you to specify the group or groups. In the screenshot below, the yellow triangle on the task indicates that it's missing information.
+
+ [![Screenshot of onboard new hire.](media/lifecycle-workflows-deployment/group-1.png)](media/lifecycle-workflows-deployment/group-1.png#lightbox)
+
+By clicking on the task, you'll be presented with a navigation bar to add or remove groups. Select the "x groups selected" link to add groups.
+
+ [![Screenshot of add groups.](media/lifecycle-workflows-deployment/group-2.png)](media/lifecycle-workflows-deployment/group-2.png#lightbox)
+
+### Custom task extensions
+Lifecycle Workflows allow you to create workflows that can be triggered based on joiner, mover, or leaver scenarios. While Lifecycle Workflows provide several built-in tasks to automate common scenarios throughout the lifecycle of users, eventually you may reach the limits of these built-in tasks. With the extensibility feature, you'll be able to utilize the concept of custom task extensions to call-out to external systems as part of a Lifecycle Workflow.
+
+When you create a custom task extension, it can interact with Lifecycle Workflows in one of three ways:
+
+- **Fire-and-forget scenario**- The Logic App is started, and the sequential task execution immediately continues with no response expected from the Logic App.
+- **Sequential task execution waiting for response from the Logic App** - The Logic app is started, and the sequential task execution waits on the response from the Logic App.
+- **Sequential task execution waiting for the response of a 3rd party system**- The Logic app is started, and the sequential task execution waits on the response from a 3rd party system that triggers the Logic App to tell the Custom Task extension whether or not it ran successfully.
+
+For more information on custom task extensions, see [Lifecycle Workflow extensibility (Preview)](lifecycle-workflow-extensibility.md).
+
+## Create your workflow
+Now that you have designed and planned your workflow, you can create it in the portal. For detailed information on creating a workflow, see [Create a Lifecycle workflow.](create-lifecycle-workflow.md)
++
+## Plan a pilot
+
+We encourage customers to initially pilot Lifecycle Workflows with a small group of users or a single test user. Piloting can help you adjust processes and communications as needed. It can help you increase users' and reviewers' ability to meet security and compliance requirements.
+
+In your pilot, we recommend that you:
+
+* Start with Lifecycle Workflows where the results are applied to a small subset of users.
+* Monitor audit logs to ensure all events are properly audited.
+
+For more information, see [Best practices for a pilot](../fundamentals/active-directory-deployment-plans.md).
+++
+#### Test and run the workflow
+Once you've created a workflow, you should test it by running the workflow [on-demand](on-demand-workflow.md).
+
+Using the on-demand feature will allow you to test and evaluate whether the Lifecycle Workflow is working as intended.
+
+Once you have completed testing, you can either rework the Lifecycle Workflow or get ready for a broader distribution.
+
+### Audit logs
+You can also get more information from the audit logs. These logs can be accessed in the portal under Azure Active Directory/monitoring. For more information, see [Audit logs in Azure AD](../reports-monitoring/concept-audit-logs.md) and [Lifecycle workflow history.](lifecycle-workflow-history.md)
+++
+#### Example Lifecycle Workflow plan
+
+|Stage|Description|
+| - | - |
+|Determine the scenario| A pre-hire workflow that sends email to new manager. |
+|Determine the execution conditions|The workflow will run on new employees in the sales department, two (2) days before the employeeHireDate.|
+|Review the tasks.|We'll use the pre-defined tasks in the workflow. No extra tasks will be added.|
+|Create the workflow in the portal|Use the pre-defined template for new hire in the portal.|
+|Enable and test the workflow| Use the on-demand feature to test the workflow on one user.|
+|Review the test results|Review the test results and ensure the Lifecycle Workflow is working as intended.|
+|Roll out the workflow to a broader audience|Communicate with stakeholders, letting them know that the workflow is going live and that HR will no longer need to send an email to the hiring manager.|
+
+## Next steps
+
+Learn about the following related technologies:
+
+* [How to synchronize attributes for Lifecycle Workflows](how-to-lifecycle-workflow-sync-attributes.md)
+* [Understanding Lifecycle Workflows](understanding-lifecycle-workflows.md)
+* [Lifecycle Workflow templates.](lifecycle-workflow-templates.md)
active-directory Lifecycle Workflows Developer Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/lifecycle-workflows-developer-reference.md
+
+ Title: 'Developer API reference lifecycle workflows- Azure Active Directory'
+description: Provides an API reference guide for Developers using Lifecycle Workflows.
++++++ Last updated : 01/20/2022+++++
+# Developer API reference lifecycle workflows
+This reference provides an overview of how a lifecycle workflow is constructed. The examples here use workflows created with Microsoft Graph rather than in the portal; the concepts, however, are the same. For information about using the portal, see [Understanding lifecycle workflows](understanding-lifecycle-workflows.md).
+
+## Parts of a workflow
+Lifecycle Workflows enable you to use workflows to manage the lifecycle of your Azure AD users. To create a workflow, you specify either pre-defined or custom information. Pre-defined workflows can be selected in the Azure AD portal.
+
+A workflow can be broken down into three main parts:
+ - General information
+ - Task information
+ - Execution conditions
+
+ [![Screenshot of parts of a workflow.](media/understanding-lifecycle-workflows/workflow-1.png)](media/understanding-lifecycle-workflows/workflow-1.png#lightbox)
+
+### General information
+This portion of a workflow covers basic information such as display name and a description of what the workflow does.
+
+This portion of the workflow supports the following information.
+
+| Parameters | Description |
+|: |::|
+|displayName|The name of the workflow|
+|description|A description of the workflow|
+|isEnabled|Boolean that determines whether the workflow or a task within a workflow is enabled|
+
+### Task information
+The task section describes the actions that will be taken when a workflow is executed. The actual task is defined by the taskDefinitionID.
+
+Let's examine the tasks section of a sample workflow.
+
+```Request body
+ "tasks":[
+ {
+ "isEnabled":true,
+ "taskDefinitionId":"1b555e50-7f65-41d5-b514-5894a026d10d",
+ "displayName":"Generate TAP And Send Email",
+ "description":"Generate Temporary Access Password and send via email to user's manager",
+ "arguments":[
+ {
+ "name": "tapLifetimeMinutes",
+ "value": "480"
+ },
+ {
+ "name": "tapIsUsableOnce",
+ "value": "true"
+ }
+ ]
+ }
+ ],
+```
++
+This task uses 1b555e50-7f65-41d5-b514-5894a026d10d, which is the taskDefinitionID for [Generate Temporary Access Password and send via email to user's manager](lifecycle-workflow-tasks.md#generate-temporary-access-password-and-send-via-email-to-users-manager). This is a pre-defined task created by Microsoft that sends a user's manager an email containing a temporary access pass. This task requires the following additional arguments.
+
+|Parameter |Definition |
+|||
+|tapLifetimeMinutes | The lifetime of the temporaryAccessPass in minutes starting at startDateTime. Minimum 10, Maximum 43200 (equivalent to 30 days). |
+|tapIsUsableOnce | Determines whether the pass is limited to a one-time use. If true, the pass can be used once; if false, the pass can be used multiple times within the temporaryAccessPass lifetime. |
+
+This portion of the workflow supports the following parameters. The arguments section will be based on the actual task defined by the taskDefinitionID.
+
+| Parameters | Description |
+|: |::|
+|isEnabled|Boolean that determines whether the workflow or a task within a workflow is enabled|
+|tasks|The actions that the workflow will take when it's executed by the extensible lifecycle manager|
+|taskDefinitionID|The unique ID corresponding to a supported task|
+|arguments|Used to specify the activation duration of the TAP and toggle between one-time use or multiple uses|
++
+For additional information on pre-defined tasks, see [Lifecycle Workflow tasks and definitions](lifecycle-workflow-tasks.md).
+++
+### Execution conditions
+
+The execution conditions section of a workflow sets up:
+ - Who (scope) the workflow runs against
+ - When (trigger) the workflow runs
+
+Let's examine the execution conditions of a sample workflow.
+
+```Request body
+{
+ "executionConditions": {
+ "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
+ "scope": {
+ "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
+ "rule": "(department eq 'sales')"
+ },
+ "trigger": {
+ "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
+ "timeBasedAttribute": "employeeHireDate",
+ "offsetInDays": -2
+ }
+ }
+}
+```
+
+The first portion `microsoft.graph.identityGovernance.triggerAndScopeBasedConditions` tells the workflow execution that there are two settings that comprise the execution condition:
+ - Scope, which determines the subject set for a workflow execution.
+ - Trigger, which determines when a workflow will be executed.
+
+The `triggerAndScopeBasedConditions` type is an extension of `microsoft.graph.identityGovernance.workflowExecutionConditions`, which is the base type for the execution settings of a workflow.
+
+#### Scope
+
+Now, let's look at the first property, `scope`. The scope uses `microsoft.graph.identityGovernance.ruleBasedSubjectSet`.
+
+The `ruleBasedSubjectSet` is a scope based on a filter rule for identifying in-scope subjects. This determines who the workflow runs against.
+
+The `ruleBasedSubjectSet` type is an extension of `microsoft.graph.subjectSet`.
+
+The `ruleBasedSubjectSet` has the following properties:
+
+| Property | Description |
+|: |::|
+|rule|Filter rule for the scope where the syntax is based on the filter query.|
+
+So in our example above, we see the property of our scope is `"rule": "(department eq 'sales')"`.
+
+This means that `ruleBasedSubjectSet` will filter based on the rule property we set, which is that the department attribute of a user equals sales.
++
+ #### Trigger
+
+Now, let's look at the second property, `trigger`. The trigger uses `microsoft.graph.identityGovernance.timeBasedAttributeTrigger`.
+
+The `timeBasedAttributeTrigger` is a trigger based on a time-based attribute for initiating workflow execution. The combination of scope and trigger conditions determine when a workflow is executed and on which identities.
+
+The `timeBasedAttributeTrigger` type is an extension of `microsoft.graph.identityGovernance.workflowExecutionTrigger`, which is the base type for workflow execution triggers.
+
+The `timeBasedAttributeTrigger` has the following properties:
+
+| Property | Description |
+|: |::|
+|timeBasedAttribute|Determines which time-based identity property to reference.|
+|offsetInDays|How many days before or after the time-based attribute specified. For example, if the attribute is employeeHireDate and offsetInDays is -1, then the workflow should trigger 1 day before employee hire date.|
+
+So in our example above, we see the properties of our trigger are `timeBasedAttribute": "employeeHireDate` and `offsetInDays": -2`.
+
+This means that our workflow will trigger two days before the value specified in the employeeHireDate attribute.
+
+#### Summary
+So now, when we put both the scope and trigger together, we get an execution condition that will:
+ - Carry out the tasks defined in the workflow, two days before the user's employeeHireDate, and only for users in the sales department.
+
+### Workflow parameter reference
+The following table is a summary of the parameters of a workflow. You can use it as a reference for general workflow information or when creating and customizing workflows.
+
+| Parameters | Description |
+|: |::|
+|displayName|The name of the workflow|
+|description|A description of the workflow|
+|isEnabled|Boolean that determines whether the workflow or a task within a workflow is enabled|
+|tasks|The actions that the workflow will take when it's executed by the extensible lifecycle manager|
+|taskDefinitionID|The unique ID corresponding to a supported task|
+|arguments|Used to specify the activation duration of the TAP and toggle between one-time use or multiple uses|
+|executionConditions|Defines for who (scope) and when (trigger) the workflow runs.|
+|scope|Defines who the workflow should run against|
+|trigger|Defines when the workflow should run|
++
+### Workflow example
+
+The following is a full example of a workflow. It is in the form of a POST API call that will create a pre-hire workflow, which will generate a TAP and send it via email to the user's manager.
+
+ ```http
+ POST https://graph.microsoft.com/beta/identityGovernance/lifecycleManagement/workflows
+ ```
+
+```Request body
+{
+ "displayName":"Onboard pre-hire employee",
+ "description":"Configure pre-hire tasks for onboarding employees before their first day",
+ "isEnabled":false,
+ "tasks":[
+ {
+ "isEnabled":true,
+ "taskDefinitionId":"1b555e50-7f65-41d5-b514-5894a026d10d",
+ "displayName":"Generate TAP And Send Email",
+ "description":"Generate Temporary Access Password and send via email to user's manager",
+ "arguments":[
+ {
+ "name": "tapLifetimeMinutes",
+ "value": "480"
+ },
+ {
+ "name": "tapIsUsableOnce",
+ "value": "true"
+ }
+ ]
+ }
+ ],
+ "executionConditions": {
+ "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
+ "scope": {
+ "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
+ "rule": "(department eq 'sales')"
+ },
+ "trigger": {
+ "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
+ "timeBasedAttribute": "employeeHireDate",
+ "offsetInDays": -2
+ }
+ }
+}
+```
+## Next steps
+- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)
+- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory Manage Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-lifecycle-workflows.md
+
+ Title: Manage lifecycle with Lifecycle workflows - Azure Active Directory
+description: Learn how to manage user lifecycles with Lifecycle Workflows
+
+documentationcenter: ''
++
+editor: markwahl-msft
++
+ na
++ Last updated : 01/24/2021+++++
+# Manage user lifecycle with Lifecycle Workflows (preview)
+With Lifecycle Workflows, you can easily ensure that users have the appropriate entitlements no matter where they fall in the Joiner-Mover-Leaver (JML) scenario. Before a new hire's start date, you can add them to a group. You can generate a temporary access pass that is sent to their manager to help speed up the onboarding process. You can enable a user account when they join on their hire date, and send them a welcome email. When a user moves to a different group, you can remove them from the original group and add them to a new one. When a user leaves, you can also delete their user account.
+
+## Prerequisites
++
+The following **Delegated permissions** and **Application permissions** are required for access to Lifecycle Workflows:
+
+> [!IMPORTANT]
+> The Microsoft Graph API permissions shown below are currently hidden from user interfaces such as Graph Explorer and Azure AD's API permissions UI for app registrations. In such cases you can fall back to the Entitlement Management permissions, which also work for Lifecycle Workflows ("EntitlementManagement.Read.All" and "EntitlementManagement.ReadWrite.All"). The Entitlement Management permissions will stop working with Lifecycle Workflows in future versions of the preview.
+
+|Permission |Display String |Description |Admin Consent Required |
+|||||
+|LifecycleManagement.Read.All | Read all Lifecycle workflows, tasks, user states| Allows the app to list and read all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes
+|LifecycleManagement.ReadWrite.All | Read and write all lifecycle workflows, tasks, user states.| Allows the app to create, update, list, read and delete all workflows, tasks, user states related to lifecycle workflows on behalf of the signed-in user.| Yes
++++++
+## Language determination within email notifications
+
+When sending email notifications, Lifecycle Workflows can automatically set the language that is displayed. For language priority, Lifecycle Workflows use the following hierarchy:
+- The user **preferredLanguage** property in the user object takes highest priority.
+- The tenant **preferredLanguage** attribute takes next priority.
+
+If neither can be determined, Lifecycle Workflows default the language to English.
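+
+Because the user **preferredLanguage** property takes priority, one way to control the notification language for a given user is to set that property directly. The following is a minimal sketch using Microsoft Graph; the user ID and language value are placeholders:
+
+```http
+PATCH https://graph.microsoft.com/v1.0/users/<user id>
+Content-type: application/json
+```
+
+```Request body
+{
+    "preferredLanguage": "fr-FR"
+}
+```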
+
+## Supported languages in Lifecycle Workflows
++
+|Culture Code |Language |
+|||
+|en-us | English (United States) |
+|ja-jp | Japanese (Japan) |
+|de-de | German (Germany) |
+|fr-fr | French (France) |
+|pt-br | Portuguese (Brazil) |
+|zh-cn | Chinese (Simplified, China) |
+|zh-tw | Chinese (Traditional, Taiwan) |
+|es-es | Spanish (Spain, International Sort) |
+|ko-kr | Korean (Korea) |
+|it-it | Italian (Italy) |
+|nl-nl | Dutch (Netherlands) |
+|ru-ru | Russian (Russia) |
+|cs-cz | Czech (Czech Republic) |
+|pl-pl | Polish (Poland) |
+|tr-tr | Turkish (Turkey) |
+|da-dk | Danish (Denmark) |
+|en-gb | English (United Kingdom) |
+|hu-hu | Hungarian (Hungary) |
+|nb-no | Norwegian Bokmål (Norway) |
+|pt-pt | Portuguese (Portugal) |
+|sv-se | Swedish (Sweden) |
+
+## Supported user and query parameters
+
+Lifecycle Workflows support a rich set of user properties that are available on the user profile in Azure AD. Lifecycle Workflows also support many of the advanced query capabilities available in Graph API. This allows you, for example, to filter on the user properties when managing user execution conditions and making API calls. For more information about currently supported user properties, and query parameters, see: [User properties](/graph/aad-advanced-queries?tabs=http#user-properties)
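+
+For example, the advanced query capabilities let you preview which users a rule such as `(department eq 'Sales')` would match before using it in an execution condition. The following is a sketch; advanced queries require the **ConsistencyLevel: eventual** header and the `$count=true` parameter, and the department value is illustrative:
+
+```http
+GET https://graph.microsoft.com/v1.0/users?$filter=department eq 'Sales'&$count=true
+ConsistencyLevel: eventual
+```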
++
+## Limits and constraints
+
+|Item |Limit |
+|||
+|Custom Workflows | 50 |
+|Number of custom tasks | 25 per workflow |
+|Value range for offsetInDays | Between -60 and 60 days |
+|Default Workflow execution schedule | Every 3 hours |
++
+## Next steps
+- [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md)
+- [Create Lifecycle workflows](create-lifecycle-workflow.md)
active-directory Manage Workflow Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-properties.md
+
+ Title: Manage workflow properties - Azure Active Directory
+description: This article guides a user to editing a workflow's properties using Lifecycle Workflows
++++++ Last updated : 02/15/2022++++
+# Manage workflow properties (preview)
+
+Managing workflows can be accomplished in one of two ways.
+ - Updating the basic properties of a workflow without creating a new version of it
+ - Creating a new version of the updated workflow.
+
+You can update the following basic information without creating a new version of the workflow:
+ - display name
+ - description
+ - whether or not it is enabled.
+
+If you change any other parameters, a new version of the workflow must be created, as outlined in the [Managing workflow versions](manage-workflow-tasks.md) article.
+
+If done via the Azure portal, the new version is created automatically. If done using Microsoft Graph, you will have to manually create a new version of the workflow. For more information, see [Edit the properties of a workflow using Microsoft Graph](#edit-the-properties-of-a-workflow-using-microsoft-graph).
+
+## Edit the properties of a workflow using the Azure portal
+
+To edit the properties of a workflow using the Azure portal, you'll do the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Azure Active Directory** and then select **Identity Governance**.
+
+1. On the left menu, select **Lifecycle workflows (Preview)**.
+
+1. On the left menu, select **Workflows (Preview)**.
+
+1. Here you'll see a list of all of your current workflows. Select the workflow that you want to edit.
+
+ :::image type="content" source="media/manage-workflow-properties/manage-list.png" alt-text="Screenshot of the manage workflow list.":::
+
+6. To change the display name or description, select **Properties (Preview)**.
+
+ :::image type="content" source="media/manage-workflow-properties/manage-properties.png" alt-text="Screenshot of the manage basic properties screen.":::
+
+7. Update the display name or description how you want.
+> [!NOTE]
+> Display names cannot be the same as those of other existing workflows. Each workflow must have its own unique name.
+
+8. Select **save**.
++
+## Edit the properties of a workflow using Microsoft Graph
+
+To view the list of current workflows you'll run the following API call:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/
+```
+
+Lifecycle workflows can have their basic information such as "displayName", "description", and "isEnabled" edited by running this patch call and body.
+
+```http
+PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
+Content-type: application/json
+
+{
+ "displayName":"<Unique workflow name string>",
+ "description":"<workflow description>",
+ "isEnabled": <true or false>
+}
+
+```
++++++
+## Next steps
+
+- [Manage workflow versions](manage-workflow-tasks.md)
+- [Check the status of a workflow](check-status-workflow.md)
active-directory Manage Workflow Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/manage-workflow-tasks.md
+
+ Title: Manage workflow versions - Azure Active Directory
+description: This article guides a user on managing workflow versions with Lifecycle Workflows
++++++ Last updated : 04/06/2022++++
+# Manage workflow versions (Preview)
+
+Workflows created with Lifecycle Workflows are able to grow and change with the needs of your organization. Workflows exist as versions from the moment they're created. When you make changes other than to basic information, a new version of the workflow is created. For more information, see [Manage a workflow's properties](manage-workflow-properties.md).
+
+Changing a workflow's tasks or execution conditions requires the creation of a new version of that workflow. Tasks within workflows can be added, reordered, and removed at will. Updating a workflow's tasks or execution conditions within the Azure portal will trigger the creation of a new version of the workflow automatically. Making these updates in Microsoft Graph will require the new workflow version to be created manually.
++
+## Edit the tasks of a workflow using the Azure portal
++
+Tasks within workflows can be added, edited, reordered, and removed at will. To edit the tasks of a workflow using the Azure portal, you'll complete the following steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Azure Active Directory** and then select **Identity Governance**.
+
+1. In the left menu, select **Lifecycle workflows (Preview)**.
+
+1. In the left menu, select **Workflows (Preview)**.
+
+1. On the left side of the screen, select **Tasks (Preview)**.
+
+1. You can add a task to the workflow by selecting the **Add task** button.
+
+ :::image type="content" source="media/manage-workflow-tasks/manage-tasks.png" alt-text="Screenshot of adding a task to a workflow." lightbox="media/manage-workflow-tasks/manage-tasks.png":::
+
+1. You can enable and disable tasks as needed by using the **Enable** and **Disable** buttons.
+
+1. You can change the order in which tasks are executed in the workflow by selecting the **Reorder** button.
+
+ :::image type="content" source="media/manage-workflow-tasks/manage-tasks-reorder.png" alt-text="Screenshot of reordering tasks in a workflow.":::
+
+1. You can remove a task from a workflow by using the **Remove** button.
+
+1. After making changes, select **save** to capture changes to the tasks.
++
+## Edit the execution conditions of a workflow using the Azure portal
+
+To edit the execution conditions of a workflow using the Azure portal, you'll do the following steps:
++
+1. On the left menu of Lifecycle Workflows, select **Workflows (Preview)**.
+
+1. On the left side of the screen, select **Execution conditions (Preview)**.
+ :::image type="content" source="media/manage-workflow-tasks/execution-conditions-details.png" alt-text="Screenshot of the execution condition details of a workflow." lightbox="media/manage-workflow-tasks/execution-conditions-details.png":::
+
+1. On this screen, you're presented with **Trigger details**, which include the trigger type and attribute details. In the template, you can edit the attribute details to define when a workflow runs in relation to the attribute value, measured in days. This attribute value can be from 0 to 60 days.
+
+
+1. Select the **Scope** tab.
+ :::image type="content" source="media/manage-workflow-tasks/execution-conditions-scope.png" alt-text="Screenshot of the execution scope page of a workflow." lightbox="media/manage-workflow-tasks/execution-conditions-scope.png":::
+
+1. On this screen, you can define rules for who the workflow will run against. In the template, **Scope type** is set to Rule-Based, and you define the rule using expressions on user properties. For more information on supported user properties, see: [supported queries on user properties](/graph/aad-advanced-queries#user-properties).
+
+1. After making changes, select **save** to capture changes to the execution conditions.
++
+## See versions of a workflow using the Azure portal
+
+1. On the left menu of Lifecycle Workflows, select **Workflows (Preview)**.
+
+1. On this page, you see a list of all of your current workflows. Select the workflow that you want to see versions of.
+
+1. On the left side of the screen, select **Versions (Preview)**.
+
+ :::image type="content" source="media/manage-workflow-tasks/manage-versions.png" alt-text="Screenshot of versions of a workflow." lightbox="media/manage-workflow-tasks/manage-versions.png":::
+
+1. On this page you see a list of the workflow versions.
+
+ :::image type="content" source="media/manage-workflow-tasks/manage-versions-list.png" alt-text="Screenshot of managing version list of lifecycle workflows." lightbox="media/manage-workflow-tasks/manage-versions-list.png":::
++
+## Create a new version of an existing workflow using Microsoft Graph
+
+As stated above, a new version of a workflow must be created to change any parameter that isn't "displayName", "description", or "isEnabled". Unlike in the Azure portal, creating a new version of a workflow using Microsoft Graph requires some additional steps. First, run the API call with the changes to the body of the workflow you want to update by doing the following:
+
+- Get the body of the workflow you want to create a new version of by running the API call:
+ ```http
+ GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>
+ ```
+- Copy the body of the returned workflow, excluding the **id**, **@odata.context**, and **tasks@odata.context** portions of the returned workflow body.
+- Make the changes in tasks and execution conditions you want for the new version of the workflow.
+- Run the following **createNewVersion** API call along with the updated body of the workflow. The workflow body is wrapped in a **Workflow:{}** block.
+ ```http
+ POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/createNewVersion
+ Content-type: application/json
+
+ {
+ "workflow": {
+ "displayName":"New version of a workflow",
+ "description":"This is a new created version of a workflow",
+ "isEnabled":"true",
+ "tasks":[
+ {
+ "isEnabled":"true",
+ "taskTemplateId":"70b29d51-b59a-4773-9280-8841dfd3f2ea",
+ "displayName":"Send welcome email to new hire",
+ "description":"Sends welcome email to a new hire",
+ "executionSequence": 1,
+ "arguments":[]
+ },
+ {
+ "isEnabled":"true",
+ "taskTemplateId":"22085229-5809-45e8-97fd-270d28d66910",
+ "displayName":"Add user to group",
+ "description":"Adds user to a group.",
+ "executionSequence": 2,
+ "arguments":[
+ {
+ "name":"groupID",
+ "value":"<group id value>"
+ }
+ ]
+ }
+ ],
+ "executionConditions": {
+ "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
+ "scope": {
+ "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
+ "rule": "(department eq 'sales')"
+ },
+ "trigger": {
+ "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
+ "timeBasedAttribute": "employeeHireDate",
+ "offsetInDays": -2
+ }
+ }
+ }
+    }
+    ```
+
+
+### List workflow versions using Microsoft Graph
+
+Once a new version of a workflow is created, you can always find other versions by running the following call:
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/versions
+```
+Or to get a specific version:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/versions/<version number>
+```
+
+### Reorder Tasks in a workflow using Microsoft Graph
+
+If you want to reorder tasks in a workflow, so that certain tasks run before others, you'll follow these steps:
+ 1. Use a GET call to return the body of the workflow in which you want to reorder the tasks.
+ ```http
+ GET https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<workflow id>
+ ```
+ 1. Copy the body of the workflow and paste it in the body section for the new API call.
+
+ 1. Tasks run in the order in which they appear within the workflow body. To reorder them, copy the task you want to run first and place it above the tasks you want to run after it in the workflow body.
+
+ 1. Run the **createNewVersion** API call.
+
++
+## Next steps
++
+- [Check the status of a workflow](check-status-workflow.md)
+- [Customize workflow schedule](customize-workflow-schedule.md)
active-directory On Demand Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/on-demand-workflow.md
+
+ Title: Run a workflow on-demand - Azure Active Directory
+description: This article guides a user to running a workflow on demand using Lifecycle Workflows
++++++ Last updated : 03/04/2022+++++
+# Run a workflow on-demand (Preview)
+
+While most workflows are scheduled by default to run every 3 hours, workflows created using Lifecycle Workflows can also be run on-demand so that they can be applied to specific users whenever you see fit. A workflow can be run on-demand for any user, regardless of whether the user meets the workflow's execution conditions. Workflows created in the Azure portal are disabled by default. Running a workflow on-demand lets you run workflows that currently can't be run on a schedule, such as real-time leaver workflows. It also lets you test workflows before their scheduled run, so you can try a workflow on a smaller group of users before enabling it for a broader audience.
+
+>[!NOTE]
+>Be aware that you currently cannot run a workflow on-demand if it is set to disabled, which is the default state of newly created workflows using the Azure portal. You need to set the workflow to enabled to use the on-demand feature.
+
+## Run a workflow on-demand in the Azure portal
+
+Use the following steps to run a workflow on-demand.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Azure Active Directory** and then select **Identity Governance**.
+
+1. On the left menu, select **Lifecycle workflows (Preview)**.
+
+1. Select **Workflows (Preview)**.
+
+1. On the workflow screen, select the specific workflow you want to run.
+
+ :::image type="content" source="media/on-demand-workflow/on-demand-list.png" alt-text="Screenshot of a list of Lifecycle Workflows workflows to run on-demand.":::
+
+1. Select **Run on demand**.
+
+1. On the **select users** tab, select **add users**.
+
+1. On the add users screen, select the users you want to run the on demand workflow for.
+
+ :::image type="content" source="media/on-demand-workflow/on-demand-add-users.png" alt-text="Screenshot of add users for on-demand workflow.":::
+
+1. Select **Add**.
+
+1. Confirm your choices and select **Run workflow**.
+
+ :::image type="content" source="media/on-demand-workflow/on-demand-run.png" alt-text="Screenshot of a workflow being run on-demand.":::
+
+## Run a workflow on-demand using Microsoft Graph
+
+Running a workflow on-demand using Microsoft Graph requires you to manually add users, by their user ID, in a POST call.
+
+To run a workflow on-demand in Microsoft Graph, use the following request and body:
+```http
+POST https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>/activate
+Content-type: application/json
+```
+
+```json
+{
+    "subjects":[
+        {"id":"<userid>"},
+        {"id":"<userid>"}
+    ]
+}
+```
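+
+A disabled workflow can't be run on-demand, so you may first need to enable it. The following is a minimal sketch of how that update could look, based on the same workflow endpoint used elsewhere in these articles; `<id>` is a placeholder for your workflow ID:
+
+```http
+PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
+Content-type: application/json
+
+{
+    "isEnabled": true
+}
+```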
++
+## Next steps
+
+- [Customize the schedule of workflows](customize-workflow-schedule.md)
+- [Delete a Lifecycle workflow](delete-lifecycle-workflow.md)
active-directory Trigger Custom Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/trigger-custom-task.md
+
+ Title: Trigger Logic Apps based on custom task extensions
+description: Trigger Logic Apps based on custom task extensions
++++++ Last updated : 07/05/2022++++
+# Trigger Logic Apps based on custom task extensions (preview)
+
+Lifecycle Workflows can be used to trigger custom tasks via an extension to Azure Logic Apps, which extends the capabilities of Lifecycle Workflows beyond the built-in tasks. The steps for triggering a Logic App based on a custom task extension are as follows:
+
+- Create a custom task extension.
+- Select which behavior you want the custom task extension to take.
+- Link your custom task extension to a new or existing Azure Logic App.
+- Add the custom task to a workflow.
+
+For more information about Lifecycle Workflows extensibility, see: [Workflow Extensibility](lifecycle-workflow-extensibility.md).
++
+## Create a custom task extension with a new Azure Logic App
+
+To use a custom task extension in your workflow, you must first create a custom task extension that's linked with an Azure Logic App. You can create the Logic App at the same time that you create the custom task extension. To do so, complete these steps:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Select **Azure Active Directory** and then select **Identity Governance**.
+
+1. In the left menu, select **Lifecycle Workflows (Preview)**.
+
+1. In the left menu, select **Workflows (Preview)**.
+
+1. On the workflows screen, select **custom task extension**.
+ :::image type="content" source="media/trigger-custom-task/custom-task-extension-select.png" alt-text="Screenshot of selecting a custom task extension from a workflow overview page.":::
+1. On the custom task extensions page, select **create custom task extension**.
+ :::image type="content" source="media/trigger-custom-task/create-custom-task-extension.png" alt-text="Screenshot for creating a custom task extension selection.":::
+1. On the **Basics** page, give a display name and description for the custom task extension, and select **Next**.
+ :::image type="content" source="media/trigger-custom-task/custom-task-extension-basics.png" alt-text="Screenshot of the basics section for creating a custom task extension.":::
+1. On the **Task behavior** page, specify how the custom task extension behaves after the Azure Logic App runs, and then select **Next**.
+ :::image type="content" source="media/trigger-custom-task/custom-task-extension-behavior.png" alt-text="Screenshot for choose task behavior for custom task extension.":::
+ > [!NOTE]
+ > For more information about custom task extension behavior, see: [Lifecycle Workflow extensibility](lifecycle-workflow-extensibility.md)
+1. On the **Logic App details** page, select **Create new Logic App**, specify the subscription and resource group where it will be located, and give the new Azure Logic App a name.
+ :::image type="content" source="media/trigger-custom-task/custom-task-extension-new-logic-app.png" alt-text="screen showing to create new logic app for custom task extension.":::
+1. If deployed successfully, you'll get confirmation on the **Logic App details** page immediately, and then you can select **Next**.
+
+1. On the **Review** page, review the details of the custom task extension and the Azure Logic App you've created. Select **Create** if the details are correct.
+
+
+## Configure a custom task extension with an existing Azure Logic App
+
+You can also link a custom task extension to an existing Azure Logic App. To do this, you'd complete the following steps:
+
+> [!IMPORTANT]
+> A Logic App must be configured to be compatible with the custom task extension. For more information, see [Configure a Logic App for Lifecycle Workflow use](configure-logic-app-lifecycle-workflows.md)
+
+1. In the left menu, select **Lifecycle workflows (Preview)**.
+
+1. In the left menu, select **Workflows (Preview)**.
+
+1. On the workflows screen, select **custom task extension**.
+
+1. On the **Logic App details** page, select **Choose an existing Logic App**, specify the subscription and resource group where the Azure Logic App is located, and select **Next**.
+ :::image type="content" source="media/trigger-custom-task/custom-task-extension-existing-logic-app.png" alt-text="Screenshot for selecting an existing logic app with custom task extension.":::
+1. Review the information about the updated custom task extension and the existing Logic App linked to it. Select **Create** if the details are correct.
++
+## Add your custom task extension to a workflow
+
+After you've created your custom task extension, you can add it to a workflow. Unlike some tasks, which can only be added to workflow templates that match their category, a custom task extension can be added to any template that you use to create a custom workflow.
+
+To add a custom task extension to a workflow, do the following steps:
+
+1. In the left menu, select **Lifecycle workflows (Preview)**.
+
+1. In the left menu, select **Workflows (Preview)**.
+
+1. Select the workflow that you want to add the custom task extension to.
+
+1. On the workflow screen, select **Tasks**.
+
+1. On the tasks screen, select **Add task**.
+
+1. In the **Select tasks** drop down, select **Run a Custom Task Extension**, and select **Add**.
+
+1. On the custom task extension page, you can give the task a name and description. You can also choose from a list of configured custom task extensions to use.
+ :::image type="content" source="media/trigger-custom-task/add-custom-task-extension.png" alt-text="Screenshot showing to add a custom task extension to workflow.":::
+1. When finished, select **Save**.
+
+## Next steps
+
+- [Lifecycle workflow extensibility (Preview)](lifecycle-workflow-extensibility.md)
+- [Manage Workflow Versions](manage-workflow-tasks.md)
active-directory Tutorial Offboard Custom Workflow Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-graph.md
+
+ Title: 'Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)'
+description: Tutorial for off-boarding users from an organization using Lifecycle workflows with Microsoft Graph (preview).
+++++++ Last updated : 08/18/2022++++
+# Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)
+
+This tutorial provides a step-by-step guide on how to execute a real-time employee termination with Lifecycle workflows using the Graph API.
+
+This off-boarding scenario will run a workflow on-demand and accomplish the following tasks:
+
+1. Remove user from all groups
+2. Remove user from all Teams
+3. Delete user account
+
+To learn more about running a workflow on-demand, see [Run a workflow on-demand](on-demand-workflow.md).
+
+## Before you begin
+
+As part of the prerequisites for completing this tutorial, you will need an account that has group and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
+
+The leaver scenario can be broken down into the following:
+- **Prerequisite:** Create a user account that represents an employee leaving your organization
+- **Prerequisite:** Prepare the user account with groups and Teams memberships
+- Create the lifecycle management workflow
+- Run the workflow on-demand
+- Verify that the workflow was successfully executed
++
+## Create a leaver workflow on-demand using Graph API
+
+Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation.
+
+|Parameter |Description |
+|||
+|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver" and can support multiple strings. The category of the workflow must also contain the category of its tasks. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
+|displayName | A unique string that identifies the workflow. |
+|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
+|isEnabled | A Boolean value that denotes whether the workflow is set to run or not. If set to "true", the workflow will run. |
+|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. |
+|executionConditions | An argument that contains: <br><br>A time-based attribute and an integer parameter defining when a workflow will run between -60 and 60 <br><br>A scope attribute defining who the workflow runs for. |
+|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. <br><br>The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). |
+
+For the purpose of this tutorial, there are three tasks that will be introduced in this workflow:
+
+### Remove user from all groups task
+
+```Example
+"tasks":[
+ {
+ "continueOnError": true,
+ "displayName": "Remove user from all groups",
+ "description": "Remove user from all Azure AD groups memberships",
+ "isEnabled": true,
+ "taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc",
+ "arguments": []
+ }
+ ]
+```
+
+> [!NOTE]
+> The task does not support removing users from Privileged Access Groups, Dynamic Groups, and synchronized Groups.
+
+### Remove user from all Teams task
+
+```Example
+"tasks":[
+ {
+ "continueOnError": true,
+ "description": "Remove user from all Teams",
+ "displayName": "Remove user from all Teams memberships",
+ "isEnabled": true,
+ "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
+ "arguments": []
+ }
+ ]
+```
+### Delete user task
+
+```Example
+"tasks":[
+ {
+ "continueOnError": true,
+ "displayName": "Delete user account",
+ "description": "Delete user account in Azure AD",
+ "isEnabled": true,
+ "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
+ "arguments": []
+ }
+ ]
+```
+### Leaver workflow on-demand
+
+The following POST API call will create a leaver workflow that can be executed on-demand for real-time employee terminations.
+
+ ```http
+POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows
+Content-type: application/json
+
+{
+ "category": "Leaver",
+ "displayName": "Real-time employee termination",
+ "description": "Execute real-time termination tasks for employees on their last day of work",
+ "isEnabled": true,
+ "isSchedulingEnabled": false,
+ "executionConditions":{
+ "@odata.type":"#microsoft.graph.identityGovernance.onDemandExecutionOnly"
+ },
+ "tasks": [
+ {
+ "continueOnError": false,
+ "description": "Remove user from all Azure AD groups memberships",
+ "displayName": "Remove user from all groups",
+ "executionSequence": 1,
+ "isEnabled": true,
+ "taskDefinitionId": "b3a31406-2a15-4c9a-b25b-a658fa5f07fc",
+ "arguments": []
+ },
+ {
+ "continueOnError": false,
+ "description": "Remove user from all Teams memberships",
+ "displayName": "Remove user from all Teams",
+ "executionSequence": 2,
+ "isEnabled": true,
+ "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
+ "arguments": []
+ },
+ {
+ "continueOnError": false,
+ "description": "Delete user account in Azure AD",
+ "displayName": "Delete User Account",
+ "executionSequence": 3,
+ "isEnabled": true,
+ "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
+ "arguments": []
+ }
+ ]
+}
+```
+
+## Run the workflow
+
+Now that the workflow is created, it will automatically run every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for this tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
+
+>[!NOTE]
+>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
+
+To run a workflow on-demand for users by using the Graph API, do the following steps:
+
+1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+2. Make sure the method is set to **POST**, the version is set to **beta**, and `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the box. Change `<id>` to the ID of the workflow.
+3. Copy the code below in to the **Request body**.
+4. Replace `<userid>` in the code below with the value of the user's ID.
+5. Select **Run query**.
+
+```json
+{
+    "subjects":[
+        {"id":"<userid>"}
+    ]
+}
+```
+
+## Check tasks and workflow status
+
+At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots (users, runs, and tasks) that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status by using the user-focused reports.
+
+To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response of the POST API call that was used to create the workflow.
+
+This example will show you how to list the userProcessingResults for the last 7 days.
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults
+```
+Furthermore, it is possible to get a summary of the userProcessingResults to get a quicker overview of large amounts of data, but for this a time span must be specified.
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
+```
+You may also check the full details about the tasks of a given userProcessingResult. You will need to provide the workflow ID, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above.
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults
+```
+
+## Next steps
+- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
+- [Execute employee offboarding tasks in real-time on their last day of work with Azure portal (preview)](tutorial-offboard-custom-workflow-portal.md)
active-directory Tutorial Offboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-offboard-custom-workflow-portal.md
+
+ Title: 'Execute employee offboarding tasks in real-time on their last day of work with Azure portal (preview)'
+description: Tutorial for off-boarding users from an organization using Lifecycle workflows with Azure portal (preview).
+++++++ Last updated : 08/18/2022+++++
+# Execute employee offboarding tasks in real-time on their last day of work with Azure portal (preview)
+
+This tutorial provides a step-by-step guide on how to execute a real-time employee termination with Lifecycle workflows using the Azure portal.
+
+This off-boarding scenario will run a workflow on-demand and accomplish the following tasks:
+
+1. Remove user from all groups
+2. Remove user from all Teams
+3. Delete user account
+
+To learn more about running a workflow on-demand, see [Run a workflow on-demand](on-demand-workflow.md).
+
+## Before you begin
+
+As part of the prerequisites for completing this tutorial, you will need an account that has group and Teams memberships and that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
+
+The leaver scenario can be broken down into the following:
+- **Prerequisite:** Create a user account that represents an employee leaving your organization
+- **Prerequisite:** Prepare the user account with groups and Teams memberships
+- Create the lifecycle management workflow
+- Run the workflow on-demand
+- Verify that the workflow was successfully executed
+
+## Create a workflow using leaver template
+Use the following steps to create a leaver on-demand workflow that will execute a real-time employee termination with Lifecycle workflows using the Azure portal.
+
+ 1. Sign in to the Azure portal.
+ 2. On the right, select **Azure Active Directory**.
+ 3. Select **Identity Governance**.
+ 4. Select **Lifecycle workflows (Preview)**.
+ 5. On the **Overview (Preview)** page, select **New workflow**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::
+
+ 6. From the templates, select **Select** under **Real-time employee termination**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting template leaver workflow." lightbox="media/tutorial-lifecycle-workflows/select-template.png":::
+
+ 7. Next, you will configure the basic information about the workflow. Select **Next: Review tasks** when you are done with this step.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-leaver.png" alt-text="Screenshot of review template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-leaver.png":::
+
+ 8. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you are finished.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-tasks.png" alt-text="Screenshot of template tasks." lightbox="media/tutorial-lifecycle-workflows/real-time-tasks.png":::
+
+ 9. For the user selection, select **Select users**. This allows you to select users for which the workflow will be executed immediately after creation. Regardless of the selection, you can run the workflow on-demand later at any time as needed.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-users.png" alt-text="Select real time leaver template users." lightbox="media/tutorial-lifecycle-workflows/real-time-users.png":::
+
+ 10. Next, select **+ Add users** to designate the users that this workflow will run against.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-add-users.png" alt-text="Screenshot of real time leaver add users." lightbox="media/tutorial-lifecycle-workflows/real-time-add-users.png":::
+
+ 11. A panel with the list of available users will pop up on the right side of the screen. Select **Select** when you are done with your selection.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-user-list.png" alt-text="Screenshot of real time leaver template selected users." lightbox="media/tutorial-lifecycle-workflows/real-time-user-list.png":::
+
+ 12. Select **Next: Review and create** when you are satisfied with your selection.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-review-users.png" alt-text="Screenshot of reviewing template users." lightbox="media/tutorial-lifecycle-workflows/real-time-review-users.png":::
+
+ 13. On the review blade, verify the information is correct and select **Create**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/real-time-create.png" alt-text="Screenshot of creating real time leaver workflow." lightbox="media/tutorial-lifecycle-workflows/real-time-create.png":::
+
+## Run the workflow
+Now that the workflow is created, it will automatically run every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for this tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
+
+>[!NOTE]
+>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
+
+To run a workflow on-demand for users by using the Azure portal, do the following steps:
+
+ 1. On the workflow screen, select the specific workflow you want to run.
+ 2. Select **Run on demand**.
+ 3. On the **select users** tab, select **add users**.
+ 4. Add a user.
+ 5. Select **Run workflow**.
+
+## Check tasks and workflow status
+
+At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots (users, runs, and tasks) that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status by using the user-focused reports.
+
+ 1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-real-time.png" alt-text="Screenshot of real time history overview." lightbox="media/tutorial-lifecycle-workflows/workflow-history-real-time.png":::
+
+1. Once the **Workflow history (Preview)** tab has been selected, you will land on the workflow history page as shown.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-real-time.png" alt-text="Screenshot of real time workflow history." lightbox="media/tutorial-lifecycle-workflows/user-summary-real-time.png":::
+
+1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks-real-time.png" alt-text="Screenshot of total tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/total-tasks-real-time.png":::
+
+1. To add an extra layer of granularity, you may select **Failed tasks** for the user Wade Warren to view the total number of failed tasks assigned to the user Wade Warren.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png" alt-text="Screenshot of failed tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/failed-tasks-real-time.png":::
+
+1. Similarly, you may select **Unprocessed tasks** for the user Wade Warren to view the total number of unprocessed or canceled tasks assigned to the user Wade Warren.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png" alt-text="Screenshot of unprocessed tasks for real time workflow." lightbox="media/tutorial-lifecycle-workflows/canceled-tasks-real-time.png":::
+
+## Next steps
+- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
+- [Execute employee offboarding tasks in real-time on their last day of work with Microsoft Graph (preview)](tutorial-offboard-custom-workflow-graph.md)
active-directory Tutorial Onboard Custom Workflow Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-graph.md
+
+ Title: 'Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)'
+description: Tutorial for onboarding users to an organization using Lifecycle workflows with Microsoft Graph (preview).
+++++++ Last updated : 08/18/2022++++
+# Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)
+
+This tutorial provides a step-by-step guide on how to automate pre-hire tasks with Lifecycle workflows using the Graph API.
+
+This pre-hire scenario will generate a Temporary Access Pass for our new employee and send it via email to the user's new manager.
+
+## Before you begin
+
+Two accounts are required for the tutorial: one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set:
+- employeeHireDate must be set to today
+- department must be set to sales
+- manager attribute must be set, and the manager account should have a mailbox to receive an email.
+
+For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](/azure/active-directory/authentication/howto-authentication-temporary-access-pass#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial.
+
+Detailed breakdown of the relevant attributes:
+
+ | Attribute | Description |Set on|
+ |: |::|--|
+ |mail|Used to notify manager of the new employee's temporary access pass|Both|
+ |manager|This attribute is used by the lifecycle workflow|Employee|
+ |employeeHireDate|Used to trigger the workflow|Both|
+ |department|Used to provide the scope for the workflow|Both|
+
+The pre-hire scenario can be broken down into the following:
+ - **Prerequisite:** Create two user accounts, one to represent an employee and one to represent a manager
+ - **Prerequisite:** Edit the manager attribute for this scenario using Microsoft Graph Explorer
+ - **Prerequisite:** Enabling and using Temporary Access Pass (TAP)
+ - Creating the lifecycle management workflow
+ - Triggering the workflow
+ - Verifying the workflow was successfully executed
+
+## Create a pre-hire workflow using Graph API
+
+Now that the pre-hire workflow attributes have been updated and correctly populated, a pre-hire workflow can then be created to generate a Temporary Access Pass (TAP) and send it via email to a user's manager. Before introducing the API call to create this workflow, you may want to review some of the parameters that are required for this workflow creation.
+
+|Parameter |Description |
+|||
+|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver" and can support multiple strings. The category of the workflow must also contain the category of its tasks. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
+|displayName | A unique string that identifies the workflow. |
+|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
+|isEnabled | A Boolean value that denotes whether the workflow is set to run or not. If set to "true", the workflow will run. |
+|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. |
+|executionConditions | An argument that contains: <br><br> A time-based attribute and an integer parameter defining when a workflow will run between -60 and 60 <br><br>A scope attribute defining who the workflow runs for. |
+|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). |
+
+The following POST API call will create a pre-hire workflow that will generate a TAP and send it via email to the user's manager.
+
+ ```http
+ POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows
+Content-type: application/json
+
+{
+ "displayName":"Onboard pre-hire employee",
+ "description":"Configure pre-hire tasks for onboarding employees before their first day",
+ "isEnabled":true,
+ "isSchedulingEnabled": false,
+ "executionConditions": {
+ "@odata.type": "microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
+ "scope": {
+ "@odata.type": "microsoft.graph.identityGovernance.ruleBasedSubjectSet",
+ "rule": "(department eq 'sales')"
+ },
+ "trigger": {
+ "@odata.type": "microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
+ "timeBasedAttribute": "employeeHireDate",
+ "offsetInDays": -2
+ }
+ },
+ "tasks":[
+ {
+ "isEnabled":true,
+ "category": "Joiner",
+ "taskDefinitionId":"1b555e50-7f65-41d5-b514-5894a026d10d",
+ "displayName":"Generate TAP And Send Email",
+ "description":"Generate Temporary Access Pass and send via email to user's manager",
+ "arguments":[
+ {
+ "name": "tapLifetimeMinutes",
+ "value": "480"
+ },
+ {
+ "name": "tapIsUsableOnce",
+ "value": "true"
+ }
+ ]
+ }
+ ]
+}
+```
+
+## Run the workflow
+Now that the workflow is created, it will automatically run every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for this tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
+
+>[!NOTE]
+>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
+
+To run a workflow on-demand for users by using the Graph API, do the following steps:
+
+1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+2. Make sure the method is set to **POST**, the version is set to **beta**, and `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the box. Change `<id>` to the ID of the workflow.
+3. Copy the code below in to the **Request body**.
+4. Replace `<userid>` in the code below with the value of the user's ID.
+5. Select **Run query**.
+
+```json
+{
+    "subjects":[
+        {"id":"<userid>"}
+    ]
+}
+```
+
+## Check tasks and workflow status
+
+At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots (users, runs, and tasks) that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status by using the user-focused reports.
+
+To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response of the POST API call that was used to create the workflow.
+
+This example will show you how to list the userProcessingResults for the last 7 days.
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults
+```
+Furthermore, it is possible to get a summary of the userProcessingResults to get a quicker overview of large amounts of data, but for this a time span must be specified.
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
+```
+You may also check the full details about the tasks of a given userProcessingResult. You will need to provide the workflow ID, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above.
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults
+```
+
+## Enable the workflow schedule
+
+After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may run the following PATCH call.
+
+```http
+PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
+Content-type: application/json
+
+{
+ "displayName":"Onboard pre-hire employee",
+ "description":"Configure pre-hire tasks for onboarding employees before their first day",
+ "isEnabled": true,
+ "isSchedulingEnabled": true
+}
+
+```
+
+## Next steps
+- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
+- [Automate employee onboarding tasks before their first day of work with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md)
active-directory Tutorial Onboard Custom Workflow Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-onboard-custom-workflow-portal.md
+
+ Title: 'Automate employee onboarding tasks before their first day of work with Azure portal (preview)'
+description: Tutorial for onboarding users to an organization using Lifecycle workflows with Azure portal (preview).
+++++++ Last updated : 08/18/2022+++++
+# Automate employee onboarding tasks before their first day of work with Azure portal (preview)
+
+This tutorial provides a step-by-step guide on how to automate pre-hire tasks with Lifecycle workflows using the Azure portal.
+
+This pre-hire scenario will generate a temporary access pass for our new employee and send it via email to the user's new manager.
++
+## Before you begin
+
+Two accounts are required for this tutorial: one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set:
+
+- employeeHireDate must be set to today
+- department must be set to sales
+- manager attribute must be set, and the manager account should have a mailbox to receive an email
+
+For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md). The [TAP policy](/azure/active-directory/authentication/howto-authentication-temporary-access-pass#enable-the-temporary-access-pass-policy) must also be enabled to run this tutorial.
+
+Detailed breakdown of the relevant attributes:
+
+ | Attribute | Description |Set on|
+ |: |::|--|
+ |mail|Used to notify manager of the new employee's temporary access pass|Both|
+ |manager|This attribute is used by the lifecycle workflow|Employee|
+ |employeeHireDate|Used to trigger the workflow|Employee|
+ |department|Used to provide the scope for the workflow|Employee|
+
+The pre-hire scenario can be broken down into the following:
+ - **Prerequisite:** Create two user accounts, one to represent an employee and one to represent a manager
+ - **Prerequisite:** Editing the attributes required for this scenario in the portal
+ - **Prerequisite:** Edit the attributes for this scenario using Microsoft Graph Explorer
+ - **Prerequisite:** Enabling and using Temporary Access Pass (TAP)
+ - Creating the lifecycle management workflow
+ - Triggering the workflow
+ - Verifying the workflow was successfully executed
+
+## Create a workflow using pre-hire template
+Use the following steps to create a pre-hire workflow that will generate a TAP and send it via email to the user's manager using the Azure portal.
+
+ 1. Sign in to the Azure portal.
+ 2. On the right, select **Azure Active Directory**.
+ 3. Select **Identity Governance**.
+ 4. Select **Lifecycle workflows (Preview)**.
+ 5. On the **Overview (Preview)** page, select **New workflow**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::
+
+ 6. From the templates, select **Select** under **Onboard pre-hire employee**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/select-template.png" alt-text="Screenshot of selecting workflow template." lightbox="media/tutorial-lifecycle-workflows/select-template.png":::
+
+ 7. Next, you will configure the basic information about the workflow. This information includes when the workflow will trigger, known as **Days from event**. So in this case, the workflow will trigger two days before the employee's hire date. On the onboard pre-hire employee screen, add the following settings and then select **Next: Configure Scope**.
+
+ :::image type="content" source="media/tutorial-lifecycle-workflows/configure-scope.png" alt-text="Screenshot of selecting a configuration scope." lightbox="media/tutorial-lifecycle-workflows/configure-scope.png":::
+
+ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will be on all users in the Sales department. On the configure scope screen, under **Rule** add the following settings and then select **Next: Review tasks**
+
+ :::image type="content" source="media/tutorial-lifecycle-workflows/review-tasks.png" alt-text="Screenshot of selecting review tasks." lightbox="media/tutorial-lifecycle-workflows/review-tasks.png":::
+
+ 9. On the following page, you may inspect the task if desired but no additional configuration is needed. Select **Next: Review + Create** when you are finished.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/onboard-review-create.png" alt-text="Screenshot of reviewing an on-board workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-review-create.png":::
+
+ 10. On the review blade, verify the information is correct and select **Create**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/onboard-create.png" alt-text="Screenshot of creating an onboard workflow." lightbox="media/tutorial-lifecycle-workflows/onboard-create.png":::
+
+
+## Run the workflow
+Now that the workflow is created, it will automatically run every 3 hours. Lifecycle workflows will check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for this tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
+
+>[!NOTE]
+>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
+
+To run a workflow on-demand for users by using the Azure portal, do the following steps:
+
+ 1. On the workflow screen, select the specific workflow you want to run.
+ 2. Select **Run on demand**.
+ 3. On the **select users** tab, select **add users**.
+ 4. Add a user.
+ 5. Select **Run workflow**.
++
+## Check tasks and workflow status
+
+At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots (users, runs, and tasks) that are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status by using the user-focused reports.
+
+ 1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history.png" alt-text="Screenshot of workflow History status." lightbox="media/tutorial-lifecycle-workflows/workflow-history.png":::
+
+1. Once the **Workflow history (Preview)** tab has been selected, you will land on the workflow history page as shown.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary.png" alt-text="Screenshot of workflow history overview" lightbox="media/tutorial-lifecycle-workflows/user-summary.png":::
+
+1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks.png" alt-text="Screenshot of workflow total task summary." lightbox="media/tutorial-lifecycle-workflows/total-tasks.png":::
+
+1. To add an extra layer of granularity, you may select **Failed tasks** for the user Jeff Smith to view the total number of failed tasks assigned to the user Jeff Smith.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks.png" alt-text="Screenshot of workflow failed tasks." lightbox="media/tutorial-lifecycle-workflows/failed-tasks.png":::
+
+1. Similarly, you may select **Unprocessed tasks** for the user Jeff Smith to view the total number of unprocessed or canceled tasks assigned to the user Jeff Smith.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks.png" alt-text="Screenshot of workflow unprocessed tasks summary." lightbox="media/tutorial-lifecycle-workflows/canceled-tasks.png":::
+
+## Enable the workflow schedule
+
+After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may select the **Enable Schedule** checkbox on the Properties (Preview) page.
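+
+If you prefer to enable the schedule programmatically, the Graph-based onboarding tutorial shows an equivalent PATCH call; a minimal sketch is included here for convenience, with `<id>` as a placeholder for your workflow ID:
+
+```http
+PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
+Content-type: application/json
+
+{
+    "isSchedulingEnabled": true
+}
+```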
++
+## Next steps
+- [Tutorial: Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
+- [Automate employee onboarding tasks before their first day of work with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md)
active-directory Tutorial Prepare Azure Ad User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-prepare-azure-ad-user-accounts.md
+
+ Title: 'Tutorial: Preparing user accounts for Lifecycle workflows (preview)'
+description: Tutorial for preparing user accounts for Lifecycle workflows (preview).
+++++++ Last updated : 06/13/2022++++
+# Preparing user accounts for Lifecycle workflows tutorials (Preview)
+
+For the on-boarding and off-boarding tutorials, you'll need accounts for which the workflows will be executed. The following section will help you prepare these accounts. If you already have test accounts that meet the following requirements, you can proceed directly to the on-boarding and off-boarding tutorials. Two accounts are required for the on-boarding tutorials: one account for the new hire and another account that acts as the manager of the new hire. The new hire account must have the following attributes set:
+
+- employeeHireDate must be set to today
+- department must be set to sales
+- manager attribute must be set, and the manager account should have a mailbox to receive an email
+
+The off-boarding tutorials only require one account that has group and Teams memberships, but the account will be deleted during the tutorial.
+
+## Prerequisites
+
+- An Azure AD tenant
+- A global administrator account for the Azure AD tenant. This account will be used to create our users and workflows.
+
+## Before you begin
+
+In most cases, users are going to be provisioned to Azure AD either from an on-premises solution (Azure AD Connect, Cloud sync, etc.) or with an HR solution. These users will have the attributes and values populated at the time of creation. Setting up the infrastructure to provision users is outside the scope of this tutorial. For information, see [Tutorial: Basic Active Directory environment](../cloud-sync/tutorial-basic-ad-azure.md) and [Tutorial: Integrate a single forest with a single Azure AD tenant](../cloud-sync/tutorial-single-forest.md)
+
+## Create users in Azure AD
+
+We'll use the Graph Explorer to quickly create two users needed to execute the Lifecycle Workflows in the tutorials. One user will represent our new employee and the second will represent the new employee's manager.
+
+You'll need to edit the POST request body and replace the &lt;your tenant name here&gt; portion with the name of your tenant. For example: mprince@&lt;your tenant name here&gt; becomes mprince@contoso.onmicrosoft.com.
+
+>[!NOTE]
+>Be aware that a workflow will not trigger when the employee hire date (Days from event) is prior to the workflow creation date. By design, you must set an employeeHireDate in the future. The dates used in this tutorial are a snapshot in time. Therefore, you should change the dates accordingly to accommodate for this situation.
+
+First we'll create our employee, Melva Prince.
+
+ 1. Now navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+ 2. Sign-in to Graph Explorer with the global administrator account for your tenant.
+ 3. At the top, change **GET** to **POST** and add `https://graph.microsoft.com/v1.0/users/` to the box.
+ 4. Copy the code below in to the **Request body**
+ 5. Replace `<your tenant here>` in the code below with the value of your Azure AD tenant.
+ 6. Select **Run query**
+ 7. Copy the ID that is returned in the results. This will be used later to assign a manager.
+
+ ```HTTP
+ {
+ "accountEnabled": true,
+ "displayName": "Melva Prince",
+ "mailNickname": "mprince",
+ "department": "sales",
+ "mail": "mpricne@<your tenant name here>",
+ "employeeHireDate": "2022-04-15T22:10:00Z"
+ "userPrincipalName": "mprince@<your tenant name here>",
+ "passwordProfile" : {
+ "forceChangePasswordNextSignIn": true,
+ "password": "xWwvJ]6NMw+bWH-d"
+ }
+ }
+ ```
+ :::image type="content" source="media/tutorial-lifecycle-workflows/graph-post-user.png" alt-text="Screenshot of POST create Melva in graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-post-user.png":::
+
+Next, we'll create Britta Simon. This is the account that will be used as our manager.
+
+ 1. Still in [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+ 2. Make sure the top is still set to **POST** and `https://graph.microsoft.com/v1.0/users/` is in the box.
+ 3. Copy the code below in to the **Request body**
+ 4. Replace `<your tenant here>` in the code below with the value of your Azure AD tenant.
+ 5. Select **Run query**
+ 6. Copy the ID that is returned in the results. This will be used later to assign a manager.
+ ```HTTP
+ {
+ "accountEnabled": true,
+ "displayName": "Britta Simon",
+ "mailNickname": "bsimon",
+ "department": "sales",
+ "mail": "bsimon@<your tenant name here>",
+ "employeeHireDate": "2021-01-15T22:10:00Z"
+ "userPrincipalName": "bsimon@<your tenant name here>",
+ "passwordProfile" : {
+ "forceChangePasswordNextSignIn": true,
+ "password": "xWwvJ]6NMw+bWH-d"
+ }
+ }
+ ```
+
+>[!NOTE]
+> You need to change the &lt;your tenant name here&gt; section of the code to match your Azure AD tenant.
+
+As an alternative, the following PowerShell script may also be used to quickly create the two users needed to execute a lifecycle workflow. One user will represent our new employee and the second will represent the new employee's manager.
+
+>[!IMPORTANT]
+>The following PowerShell script is provided to quickly create the two users required for this tutorial. These users can also be created manually by signing in to the Azure portal as a global administrator and creating them.
+
+To complete this step, save the PowerShell script below to a location on a machine that has access to Azure.
+
+Next, you need to edit the script and replace the &lt;your tenant name here&gt; portion with the name of your tenant. For example: $UPN_manager = "bsimon@&lt;your tenant name here&gt;" to $UPN_manager = "bsimon@contoso.onmicrosoft.com".
+
+You need to perform this action for both $UPN_employee and $UPN_manager.
+
+After editing the script, save it and follow the steps below.
+
+ 1. Open a Windows PowerShell command prompt, with Administrative privileges, from a machine that has access to the Azure portal.
+2. Navigate to the saved PowerShell script location and run it.
+3. If prompted select **Yes to all** when installing the Azure AD module.
+4. When prompted, sign in to the Azure portal with a global administrator for your Azure AD tenant.
+
+```powershell
+#
+# DISCLAIMER:
+# Copyright (c) Microsoft Corporation. All rights reserved. This
+# script is made available to you without any express, implied or
+# statutory warranty, not even the implied warranty of
+# merchantability or fitness for a particular purpose, or the
+# warranty of title or non-infringement. The entire risk of the
+# use or the results from the use of this script remains with you.
+#
+#
+#
+#
+#Declare variables
+
+$Displayname_employee = "Melva Prince"
+$UPN_employee = "mprince<your tenant name here>"
+$Name_employee = "mprince"
+$Password_employee = "Pass1w0rd"
+$EmployeeHireDate_employee = "04/10/2022"
+$Department_employee = "Sales"
+$Displayname_manager = "Britta Simon"
+$Name_manager = "bsimon"
+$Password_manager = "Pass1w0rd"
+$Department = "Sales"
+$UPN_manager = "bsimon@<your tenant name here>"
+
+Install-Module -Name AzureAD
+Connect-AzureAD -Confirm
+
+$PasswordProfile = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
+$PasswordProfile.Password = "<Password>"
+New-AzureADUser -DisplayName $Displayname_manager -PasswordProfile $PasswordProfile -UserPrincipalName $UPN_manager -AccountEnabled $true -MailNickName $Name_manager -Department $Department
+New-AzureADUser -DisplayName $Displayname_employee -PasswordProfile $PasswordProfile -UserPrincipalName $UPN_employee -AccountEnabled $true -MailNickName $Name_employee -Department $Department
+```
+
+Once your users have been successfully created in Azure AD, you may proceed to the Lifecycle workflow tutorials to create your workflow.
+
+## Additional steps for pre-hire scenario
+
+There are some additional steps that you should be aware of when testing either the [On-boarding users to your organization using Lifecycle workflows with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md) tutorial or the [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md) tutorial.
+
+### Edit the users attributes using the Azure portal
+Some of the attributes required for the pre-hire onboarding tutorial are exposed through the Azure portal and can be set there.
+
+ These attributes are:
+
+| Attribute | Description |Set on|
+|: |::|--|
+|mail|Used to notify manager of the new employee's temporary access pass|Manager|
+|manager|This attribute is used by the lifecycle workflow|Employee|
+
+For the tutorial, the **mail** attribute only needs to be set on the manager account, and the **manager** attribute needs to be set on the employee account. Use the following steps.
+
+ 1. Sign in to the Azure portal.
+ 2. On the right, select **Azure Active Directory**.
+ 3. Select **Users**.
+ 4. Select **Melva Prince**.
+ 5. At the top, select **Edit**.
+ 6. Under **Manager**, select **Change**, and then select **Britta Simon**.
+ 7. At the top, select **Save**.
+ 8. Go back to users and select **Britta Simon**.
+ 9. At the top, select **Edit**.
+ 10. Under **Email**, enter a valid email address.
+ 11. Select **Save**.
+
+### Edit employeeHireDate
+The employeeHireDate attribute is new to Azure AD. It isn't exposed through the UI and must be updated using Graph. To edit this attribute, we can use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+
+>[!NOTE]
+>Be aware that a workflow will not trigger when the employee hire date (Days from event) is prior to the workflow creation date. By design, you must set an employeeHireDate in the future. The dates used in this tutorial are a snapshot in time. Therefore, you should change the dates accordingly to accommodate for this situation.
+
+In order to do this, we must get the object ID for our user Melva Prince.
+
+ 1. Sign in to [Azure portal](https://portal.azure.com).
+ 2. On the right, select **Azure Active Directory**.
+ 3. Select **Users**.
+ 4. Select **Melva Prince**.
+ 5. Select the copy sign next to the **Object ID**.
+ 6. Now navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+ 7. Sign-in to Graph Explorer with the global administrator account for your tenant.
+ 8. At the top, change **GET** to **PATCH** and add `https://graph.microsoft.com/v1.0/users/<id>` to the box. Replace `<id>` with the value we copied above.
+ 9. Copy the following in to the **Request body** and select **Run query**
+ ```Example
+ {
+ "employeeHireDate": "2022-04-15T22:10:00Z"
+ }
+ ```
+ :::image type="content" source="media/tutorial-lifecycle-workflows/update-1.png" alt-text="Screenshot of the PATCH employeeHireDate." lightbox="media/tutorial-lifecycle-workflows/update-1.png":::
+
+ 10. Verify the change by changing **PATCH** back to **GET** and **v1.0** to **beta**. Select **Run query**. You should see the attributes for Melva set.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/update-3.png" alt-text="Screenshot of the GET employeeHireDate." lightbox="media/tutorial-lifecycle-workflows/update-3.png":::
+
+### Edit the manager attribute on the employee account
+The manager attribute is used for email notification tasks. It's used by the lifecycle workflow to email the manager the Temporary Access Pass for the new employee. Use the following steps to ensure your Azure AD users have a value for the manager attribute.
+
+1. Still in [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+2. Change the method to **PUT** and make sure `https://graph.microsoft.com/v1.0/users/<id>/manager/$ref` is in the box. Change `<id>` to the ID of Melva Prince.
+ 3. Copy the code below in to the **Request body**
+ 4. Replace `<managerid>` in the code below with the value of Britta Simons ID.
+ 5. Select **Run query**
+ ```Example
+ {
+ "@odata.id": "https://graph.microsoft.com/v1.0/users/<managerid>"
+ }
+ ```
+
+ :::image type="content" source="media/tutorial-lifecycle-workflows/graph-add-manager.png" alt-text="Screenshot of Adding a manager in Graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-add-manager.png":::
+
+ 6. Now, we can verify that the manager has been set correctly by changing the **PUT** to **GET**.
+ 7. Make sure `https://graph.microsoft.com/v1.0/users/<id>/manager/` is in the box. The `<id>` is still that of Melva Prince.
+ 8. Select **Run query**. You should see Britta Simon returned in the Response.
+
+ :::image type="content" source="media/tutorial-lifecycle-workflows/graph-get-manager.png" alt-text="Screenshot of getting a manager in Graph explorer." lightbox="media/tutorial-lifecycle-workflows/graph-get-manager.png":::
+
+For more information about updating manager information for a user in Graph API, see [assign manager](/graph/api/user-post-manager?view=graph-rest-1.0&tabs=http) documentation. You can also set this attribute in the Azure Admin center. For more information, see [add or change profile information](/azure/active-directory/fundamentals/active-directory-users-profile-azure-portal?context=azure/active-directory/users-groups-roles/context/ugr-context).
+
+### Enabling the Temporary Access Pass (TAP)
+A Temporary Access Pass is a time-limited pass issued by an admin that satisfies strong authentication requirements.
+
+In this scenario, we'll use this feature of Azure AD to generate a temporary access pass for our new employee. It will then be mailed to the employee's manager.
+
+To use this feature, it must be enabled on our Azure AD tenant. To do this, use the following steps.
+
+1. Sign in to the Azure portal as a Global admin and select **Azure Active Directory** > **Security** > **Authentication methods** > **Temporary Access Pass**.
+2. Select **Yes** to enable the policy, add Britta Simon to the users the policy applies to, and configure any **General** settings. A Graph-based sketch for enabling the policy follows these steps.
+
+## Additional steps for leaver scenario
+
+There are some additional steps that you should be aware of when testing either the [Off-boarding users from your organization using Lifecycle workflows with Azure portal (preview)](tutorial-offboard-custom-workflow-portal.md) tutorial or the [Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-offboard-custom-workflow-graph.md) tutorial.
+
+### Set up user with groups and Teams with team membership
+
+A user with groups and Teams memberships is required before you begin the tutorials for the leaver scenario.
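+
+For example, if you still need to add the test user to a group so that the leaver tasks have a membership to remove, you can do so in Graph Explorer with a call like the following sketch, where `<group-id>` and `<user-id>` are placeholders for your own object IDs:
+
+```http
+POST https://graph.microsoft.com/v1.0/groups/<group-id>/members/$ref
+Content-type: application/json
+
+{
+  "@odata.id": "https://graph.microsoft.com/v1.0/directoryObjects/<user-id>"
+}
+```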
++
+## Next steps
+- [On-boarding users to your organization using Lifecycle workflows with Azure portal (preview)](tutorial-onboard-custom-workflow-portal.md)
+- [On-boarding users to your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-onboard-custom-workflow-graph.md)
+- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Azure portal (preview)](tutorial-offboard-custom-workflow-portal.md)
+- [Tutorial: Off-boarding users from your organization using Lifecycle workflows with Microsoft Graph (preview)](tutorial-offboard-custom-workflow-graph.md)
active-directory Tutorial Scheduled Leaver Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-graph.md
+
+ Title: Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview)
+description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with Microsoft Graph (preview).
+++++++ Last updated : 08/18/2022++++
+# Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview)
+
+This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Microsoft Graph API.
+
+This post off-boarding scenario will run a scheduled workflow and accomplish the following tasks:
+
+1. Remove all licenses for user
+2. Remove user from all Teams
+3. Delete user account
+
+## Before you begin
+
+As part of the prerequisites for completing this tutorial, you will need an account that has licenses and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
+
+The scheduled leaver scenario can be broken down into the following:
+- **Prerequisite:** Create a user account that represents an employee leaving your organization
+- **Prerequisite:** Prepare the user account with licenses and Teams memberships
+- Create the lifecycle management workflow
+- Run the scheduled workflow after last day of work
+- Verify that the workflow was successfully executed
+
+## Create a scheduled leaver workflow using Graph API
+
+Before introducing the API call to create this workflow, you may want to review some of the parameters that are required to create it.
+
+|Parameter |Description |
+|||
+|category | A string that identifies the category of the workflow. String is "joiner", "mover", or "leaver" and can support multiple strings. Category of workflow must also contain the category of its tasks. For full task definitions, see: [Lifecycle workflow tasks and definitions](lifecycle-workflow-tasks.md) |
+|displayName | A unique string that identifies the workflow. |
+|description | A string that describes the purpose of the workflow for administrative use. (Optional) |
+|isEnabled | A boolean value that denotes whether the workflow is set to run or not. If set to "true", the workflow will run. |
+|isSchedulingEnabled | A Boolean value that denotes whether scheduling is enabled or not. Unlike isEnabled, a workflow can still be run on demand if this value is set to false. |
+|executionConditions | An argument that contains: <br><br>A time-based attribute and an integer offset (between -60 and 60 days) defining when the workflow will run <br><br>A scope attribute defining who the workflow runs for. |
+|tasks | An argument in a workflow that has a unique displayName and a description. <br><br> It defines the specific tasks to be executed in the workflow. The specified task is outlined by the taskDefinitionID and its parameters. For a list of supported tasks, and their corresponding IDs, see [Supported Task Definitions](lifecycle-workflow-tasks.md). |
+
+For the purpose of this tutorial, there are three tasks that will be introduced in this workflow:
+
+### Remove all licenses for user
+
+```Example
+"tasks":[
+ {
+ "category": "leaver",
+ "description": "Remove all licenses assigned to the user",
+ "displayName": "Remove all licenses for user",
+ "id": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e",
+ "version": 1,
+ "parameters": []
+ }
+ ]
+```
+### Remove user from all Teams task
+
+```Example
+"tasks":[
+ {
+ "category": "leaver",
+ "description": "Remove user from all Teams memberships",
+ "displayName": "Remove user from all Teams",
+ "id": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
+ "version": 1,
+ "parameters": []
+ }
+ ]
+```
+### Delete user account
+
+```Example
+"tasks":[
+ {
+ "category": "leaver",
+ "description": "Delete user account in Azure AD",
+ "displayName": "Delete User Account",
+ "id": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
+ "version": 1,
+ "parameters": []
+ }
+ ]
+```
+### Scheduled leaver workflow
+
+The following POST API call will create a scheduled leaver workflow to configure off-boarding tasks for employees after their last day of work.
+
+```http
+POST https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows
+Content-type: application/json
+
+{
+ "category": "leaver",
+ "displayName": "Post-Offboarding of an employee",
+ "description": "Configure offboarding tasks for employees after their last day of work",
+ "isEnabled": true,
+ "isSchedulingEnabled": false,
+ "executionConditions": {
+ "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
+ "scope": {
+ "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
+ "rule": "department eq 'Marketing'"
+ },
+ "trigger": {
+ "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
+ "timeBasedAttribute": "employeeLeaveDateTime",
+ "offsetInDays": 7
+ }
+ },
+ "tasks": [
+ {
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Remove all licenses assigned to the user",
+ "displayName": "Remove all licenses for user",
+ "executionSequence": 1,
+ "isEnabled": true,
+ "taskDefinitionId": "8fa97d28-3e52-4985-b3a9-a1126f9b8b4e",
+ "arguments": []
+ },
+ {
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Remove user from all Teams memberships",
+ "displayName": "Remove user from all Teams",
+ "executionSequence": 2,
+ "isEnabled": true,
+ "taskDefinitionId": "81f7b200-2816-4b3b-8c5d-dc556f07b024",
+ "arguments": []
+ },
+ {
+ "category": "leaver",
+ "continueOnError": false,
+ "description": "Delete user account in Azure AD",
+ "displayName": "Delete User Account",
+ "executionSequence": 3,
+ "isEnabled": true,
+ "taskDefinitionId": "8d18588d-9ad3-4c0f-99d0-ec215f0e3dff",
+ "arguments": []
+ }
+ ]
+}
+```
+
+## Run the workflow
+Now that the workflow is created, it will automatically run every 3 hours. Lifecycle workflows check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for this tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
+
+>[!NOTE]
+>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
+
+To run a workflow on-demand for users by using the Graph API, do the following steps:
+
+1. Open [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
+2. Make sure the method is set to **POST**, the version is set to **beta**, and `https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<id>/activate` is in the box. Replace `<id>` with the ID of the workflow.
+ 3. Copy the code below into the **Request body**.
+ 4. Replace `<userid>` in the code below with the value of the user's ID.
+ 5. Select **Run query**.
+ ```json
+ {
+   "subjects": [
+     { "id": "<userid>" }
+   ]
+ }
+ ```
+
+## Check tasks and workflow status
+
+At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots: users, runs, and tasks, which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user-focused reports.
+
+To begin, you will just need the ID of the workflow and the date range for which you want to see the summary of the status. You may obtain the workflow ID from the response of the POST API call that was used to create the workflow.
+
+This example will show you how to list the userProcessingResults for the last 7 days.
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults
+```
+Furthermore, it's possible to get a summary of the userProcessingResults for a quicker overview of large amounts of data; for this, a time span must be specified.
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow id>/userProcessingResults/summary(startDateTime=2022-05-01T00:00:00Z,endDateTime=2022-05-30T00:00:00Z)
+```
+You may also check the full details about the tasks of a given userProcessingResult. You will need to provide the workflow ID, as well as the userProcessingResult ID. You may obtain the userProcessingResult ID from the response of the userProcessingResults GET call above.
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows/<workflow_id>/userProcessingResults/<userProcessingResult_id>/taskProcessingResults
+```
+## Enable the workflow schedule
+
+After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may run the following PATCH call.
+
+```http
+PATCH https://graph.microsoft.com/beta/identityGovernance/lifecycleWorkflows/workflows/<id>
+Content-type: application/json
+
+{
+ "displayName":"Post-Offboarding of an employee",
+ "description":"Configure offboarding tasks for employees after their last day of work",
+ "isEnabled": true,
+ "isSchedulingEnabled": true
+}
+
+```
+
+## Next steps
+- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
+- [Automate employee offboarding tasks after their last day of work with Azure portal (preview)](tutorial-scheduled-leaver-portal.md)
active-directory Tutorial Scheduled Leaver Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/tutorial-scheduled-leaver-portal.md
+
+ Title: Automate employee offboarding tasks after their last day of work with Azure portal (preview)
+description: Tutorial for post off-boarding users from an organization using Lifecycle workflows with Azure portal (preview).
+++++++ Last updated : 08/18/2022++++
+# Automate employee offboarding tasks after their last day of work with Azure portal (preview)
+
+This tutorial provides a step-by-step guide on how to configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal.
+
+This post off-boarding scenario will run a scheduled workflow and accomplish the following tasks:
+
+1. Remove all licenses for user
+2. Remove user from all Teams
+3. Delete user account
+
+## Before you begin
+
+As part of the prerequisites for completing this tutorial, you will need an account that has licenses and Teams memberships that can be deleted during the tutorial. For more comprehensive instructions on how to complete these prerequisite steps, you may refer to the [Preparing user accounts for Lifecycle workflows tutorial](tutorial-prepare-azure-ad-user-accounts.md).
+
+The scheduled leaver scenario can be broken down into the following:
+- **Prerequisite:** Create a user account that represents an employee leaving your organization
+- **Prerequisite:** Prepare the user account with licenses and Teams memberships
+- Create the lifecycle management workflow
+- Run the scheduled workflow after last day of work
+- Verify that the workflow was successfully executed
+
+## Create a workflow using scheduled leaver template
+Use the following steps to create a scheduled leaver workflow that will configure off-boarding tasks for employees after their last day of work with Lifecycle workflows using the Azure portal.
+
+ 1. Sign in to the Azure portal.
+ 2. On the right, select **Azure Active Directory**.
+ 3. Select **Identity Governance**.
+ 4. Select **Lifecycle workflows (Preview)**.
+ 5. On the **Overview (Preview)** page, select **New workflow**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/new-workflow.png" alt-text="Screenshot of selecting a new workflow." lightbox="media/tutorial-lifecycle-workflows/new-workflow.png":::
+
+ 6. From the templates, select **Select** under **Post-offboarding of an employee**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/select-leaver-template.png" alt-text="Screenshot of selecting a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/select-leaver-template.png":::
+
+ 7. Next, you will configure the basic information about the workflow. This information includes when the workflow will trigger, known as **Days from event**. In this case, the workflow will trigger seven days after the employee's leave date. On the post-offboarding of an employee screen, add the following settings and then select **Next: Configure Scope**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/leaver-basics.png" alt-text="Screenshot of leaver template basics information for a workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-basics.png":::
+
+ 8. Next, you will configure the scope. The scope determines which users this workflow will run against. In this case, it will run against all users in the Marketing department. On the configure scope screen, under **Rule** add the following and then select **Next: Review tasks**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/leaver-scope.png" alt-text="Screenshot of reviewing scope details for a leaver workflow." lightbox="media/tutorial-lifecycle-workflows/leaver-scope.png":::
+
+ 9. On the following page, you may inspect the tasks if desired but no additional configuration is needed. Select **Next: Select users** when you are finished.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/review-leaver-tasks.png" alt-text="Screenshot of leaver workflow tasks." lightbox="media/tutorial-lifecycle-workflows/review-leaver-tasks.png":::
+
+10. On the review blade, verify the information is correct and select **Create**.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/create-leaver-workflow.png" alt-text="Screenshot of a leaver workflow being created." lightbox="media/tutorial-lifecycle-workflows/create-leaver-workflow.png":::
+
+>[!NOTE]
+> Select **Create** with the **Enable schedule** box unchecked to run the workflow on-demand. You may enable this setting later after checking the tasks and workflow status.
+
+## Run the workflow
+Now that the workflow is created, it will automatically run every 3 hours. Lifecycle workflows check every 3 hours for users in the associated execution condition and execute the configured tasks for those users. However, for this tutorial, we would like to run it immediately. To run a workflow immediately, we can use the on-demand feature.
+
+>[!NOTE]
+>Be aware that you currently cannot run a workflow on-demand if it is set to disabled. You need to set the workflow to enabled to use the on-demand feature.
+
+To run a workflow on-demand for users by using the Azure portal, do the following steps:
+
+ 1. On the workflow screen, select the specific workflow you want to run.
+ 2. Select **Run on demand**.
+ 3. On the **select users** tab, select **add users**.
+ 4. Add a user.
+ 5. Select **Run workflow**.
+
+
+## Check tasks and workflow status
+
+At any time, you may monitor the status of the workflows and the tasks. As a reminder, there are three different data pivots: users, runs, and tasks, which are currently available in public preview. You may learn more in the how-to guide [Check the status of a workflow (preview)](check-status-workflow.md). In the course of this tutorial, we will look at the status using the user-focused reports.
+
+ 1. To begin, select the **Workflow history (Preview)** tab on the left to view the user summary and associated workflow tasks and statuses.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/workflow-history-post-offboard.png" alt-text="Screenshot of the workflow history summary." lightbox="media/tutorial-lifecycle-workflows/workflow-history-post-offboard.png":::
+
+1. Once the **Workflow history (Preview)** tab has been selected, you will land on the workflow history page as shown.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/user-summary-post-offboard.png" alt-text="Screenshot of the workflow history overview." lightbox="media/tutorial-lifecycle-workflows/user-summary-post-offboard.png":::
+
+1. Next, you may select **Total tasks** for the user Jane Smith to view the total number of tasks created and their statuses. In this example, there are three total tasks assigned to the user Jane Smith.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/total-tasks-post-offboard.png" alt-text="Screenshot of workflow's total tasks." lightbox="media/tutorial-lifecycle-workflows/total-tasks-post-offboard.png":::
+
+1. To add an extra layer of granularity, you may select **Failed tasks** for the user Wade Warren to view the total number of failed tasks assigned to them.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/failed-tasks-post-offboard.png" alt-text="Screenshot of workflow failed tasks." lightbox="media/tutorial-lifecycle-workflows/failed-tasks-post-offboard.png":::
+
+1. Similarly, you may select **Unprocessed tasks** for the user Wade Warren to view the total number of unprocessed or canceled tasks assigned to them.
+ :::image type="content" source="media/tutorial-lifecycle-workflows/canceled-tasks-post-offboard.png" alt-text="Screenshot of workflow unprocessed tasks." lightbox="media/tutorial-lifecycle-workflows/canceled-tasks-post-offboard.png":::
+
+## Enable the workflow schedule
+
+After running your workflow on-demand and checking that everything is working fine, you may want to enable the workflow schedule. To enable the workflow schedule, you may select the **Enable Schedule** checkbox on the Properties (Preview) page.
+
+ :::image type="content" source="media/tutorial-lifecycle-workflows/enable-schedule.png" alt-text="Screenshot of workflow enabled schedule." lightbox="media/tutorial-lifecycle-workflows/enable-schedule.png":::
+
+## Next steps
+- [Preparing user accounts for Lifecycle workflows (preview)](tutorial-prepare-azure-ad-user-accounts.md)
+- [Automate employee offboarding tasks after their last day of work with Microsoft Graph (preview)](tutorial-scheduled-leaver-graph.md)
+++++++
active-directory Understanding Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/understanding-lifecycle-workflows.md
+
+ Title: 'Understanding lifecycle workflows- Azure Active Directory'
+description: Describes an overview of Lifecycle workflows and the various parts.
++++++ Last updated : 01/20/2022++++
+# Understanding lifecycle workflows
+
+The following reference document provides an overview of a workflow created using Lifecycle Workflows. Lifecycle Workflows allow you to create workflows that automate common tasks associated with the user lifecycle in organizations. Lifecycle Workflows automate tasks based on the joiner-mover-leaver cycle of lifecycle management, and split tasks for users into categories based on where they are in the lifecycle of an organization. These categories extend into templates where they can be quickly customized to suit the needs of users in your organization. For more information, see: [What are Lifecycle Workflows?](what-are-lifecycle-workflows.md).
+
+ [![Diagram of a lifecycle workflow](media/understanding-lifecycle-workflows/workflow-2.png)](media/understanding-lifecycle-workflows/workflow-2.png#lightbox)
+
+## Licenses and Permissions
+++
+|Permission |Display String |Description |Admin Consent Required |
+|---|---|---|---|
+|LifecycleWorkflows.Read.All | Read all Lifecycle workflows, tasks, user states| Allows the app to list and read all workflows, tasks, and user states related to lifecycle workflows on behalf of the signed-in user.| Yes |
+|LifecycleWorkflows.ReadWrite.All | Read and write all lifecycle workflows, tasks, user states.| Allows the app to create, update, list, read, and delete all workflows, tasks, and user states related to lifecycle workflows on behalf of the signed-in user.| Yes |
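+
+As an illustration, an app or signed-in user granted `LifecycleWorkflows.Read.All` could list the workflows in a tenant with a call such as the following sketch, which uses the beta endpoint shown in the tutorials:
+
+```http
+GET https://graph.microsoft.com/beta/identityGovernance/LifecycleWorkflows/workflows
+```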
+
+## Parts of a workflow
+A workflow can be broken down into the following three main parts.
+
+|Workflow part|Description|
+|--|--|
+|General information|This portion of a workflow covers basic information such as display name and a description of what the workflow does.|
+|Tasks|Tasks are the actions that will be taken when a workflow is executed.|
+|Execution conditions| The execution condition section of a workflow sets up:<br><br>- Who (scope) the workflow runs against <br><br>- When (trigger) the workflow runs|
+
+## Templates
+Creating a workflow via the portal requires the use of a template. A Lifecycle Workflow template is a framework that is used for pre-defined tasks and helps automate the creation of a workflow.
+
+ [![Understanding workflow template diagram.](media/understanding-lifecycle-workflows/workflow-3.png)](media/understanding-lifecycle-workflows/workflow-3.png#lightbox)
+
+The template will define the task that is to be used and then guide you through the creation of the workflow. The template provides input for description information and execution condition information.
+
+>[!NOTE]
+>Depending on the template you select, the options that will be available may vary. This document uses the **Onboarding pre-hire employee** template to illustrate the parts of a workflow.
+
+For more information, see [Lifecycle workflow templates.](lifecycle-workflow-templates.md)
+
+## Workflow basics
+
+After selecting a template, on the basics screen:
+ - Provide the information that will be used in the description portion of the workflow.
+ - Define the trigger, which sets the **when** portion of the execution condition.
+
+ [![Basics of a workflow.](media/understanding-lifecycle-workflows/workflow-4.png)](media/understanding-lifecycle-workflows/workflow-4.png#lightbox)
+
+### Workflow details
+Under the workflow details section, you can provide the following information:
+
+ |Name|Description|
+ |--|--|
+ |Name|The name of the workflow.|
+ |Description|A brief description that describes the workflow.|
+
+### Trigger details
+Under the trigger details section, you can provide the following information.
+
+ |Name|Description|
+ |--|--|
+ |Days for event|The number of days before or after the date specified in the **Event user attribute**.|
+
+This section defines **when** the workflow will run. Currently, there are two supported types of triggers:
+
+- Trigger and scope based - runs the task on all users in scope once the workflow is triggered.
+- On-demand - can be run immediately. Typically used for real-time employee terminations.
+
+## Configure scope
+After you define the basics tab, on the configure scope screen:
+ - Provide the information that will be used in the execution condition, to determine who the workflow will run against.
+ - Add more expressions to create more complex filtering.
+
+The configure scope section determines **who** the workflow will run against.
+
+ [![Screenshot showing the rule section](media/understanding-lifecycle-workflows/workflow-5.png)](media/understanding-lifecycle-workflows/workflow-5.png#lightbox)
+
+You can add extra expressions using **And/Or** to create complex conditionals, and apply the workflow more granularly across your organization.
+
+ [![Extra expressions.](media/understanding-lifecycle-workflows/workflow-8.png)](media/understanding-lifecycle-workflows/workflow-8.png#lightbox)
+
+For more information, see [Create a lifecycle workflow.](create-lifecycle-workflow.md)
++
+## Review tasks
+After defining the scope, the review tasks screen allows you to:
+ - Verify that the correct template was selected, and the tasks associated with the workflow are correct.
+ - Add more tasks other than the ones in the template.
+
+[![Screenshot showing the review tasks screen.](media/understanding-lifecycle-workflows/workflow-6.png)](media/understanding-lifecycle-workflows/workflow-6.png#lightbox)
+
+You can use the **Add task** button to add extra tasks for the workflow. Select the additional tasks from the list provided.
+
+ [![Screenshot showing additional tasks section.](media/understanding-lifecycle-workflows/workflow-6.png)](media/understanding-lifecycle-workflows/workflow-6.png#lightbox)
+
+For more information, see: [Lifecycle workflow tasks](lifecycle-workflow-tasks.md)
+
+## Review and create
+
+After reviewing the tasks on the review and create screen, you:
+ - Verify all of the information is correct, and create the workflow.
+
+ Based on what was defined in the previous sections, our workflow will now show:
+- It's named **on-board pre-hire employee**.
+- Based on the date in the **EmployeeHireDate** attribute, it will trigger **seven** (7) days prior to the date.
+- It will run against users who have **marketing** for the **department** attribute value.
+- It will generate a **TAP (temporary access password)**, and send an email to the user in the **manager** attribute of the pre-hire employee.
+
+ [![Review and create workflow template.](media/understanding-lifecycle-workflows/workflow-7.png)](media/understanding-lifecycle-workflows/workflow-7.png#lightbox)
+
+## Scheduling
+A workflow isn't scheduled to run by default. To enable the workflow, it needs to be scheduled.
+
+To verify whether the workflow is scheduled, you can view the **Scheduled** column.
+
+To enable the workflow, select the **Enable schedule** option for the workflow.
+
+Once scheduled, the workflow will be evaluated every 3 hours to determine whether or not it should run based on the execution conditions.
+
+ [![Workflow template schedule.](media/understanding-lifecycle-workflows/workflow-10.png)](media/understanding-lifecycle-workflows/workflow-10.png#lightbox)
++
+### On-demand scheduling
+
+A workflow can be run on-demand for testing or in situations where it's required.
+
+Use the **Run on demand** feature to execute the workflow immediately. The workflow must be enabled before you can run it on demand.
+
+>[!NOTE]
+> A workflow that is run on demand for any user does not take into account whether or not the user meets the workflow's execution conditions. It will apply the tasks regardless of whether the execution conditions are met or not.
+
+For more information, see [Run a workflow on-demand](on-demand-workflow.md)
+
+## Managing the workflow
+
+By selecting a workflow you created, you can manage the workflow.
+
+You can select which portion of the workflow you wish to update or change using the left navigation bar. Select the section you wish to update.
+
+[![Update manage workflow section review.](media/understanding-lifecycle-workflows/workflow-11.png)](media/understanding-lifecycle-workflows/workflow-11.png#lightbox)
+
+For more information, see [Manage lifecycle workflow properties](manage-workflow-properties.md)
+
+## Versioning
+
+Workflow versions are separate workflows built using the same information of an original workflow, but with updated parameters so that they're reported differently within logs. Workflow versions can change the actions or even scope of an existing workflow.
+
+You can view versioning information by selecting **Versions** under **Manage** from the left.
+
+[![Manage workflow versioning selection.](media/understanding-lifecycle-workflows/workflow-12.png)](media/understanding-lifecycle-workflows/workflow-12.png#lightbox)
+
+For more information, see [Lifecycle Workflow versioning](lifecycle-workflow-versioning.md)
+
+## Developer information
+This document covers the parts of a lifecycle workflow.
+
+For more information, see the [Workflow API Reference](lifecycle-workflows-developer-reference.md)
+
+## Next steps
+- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)
+- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory What Are Lifecycle Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-are-lifecycle-workflows.md
+
+ Title: 'What are lifecycle workflows? - Azure Active Directory'
+description: Describes overview of Lifecycle workflow feature.
++++++ Last updated : 01/20/2022+++++
+# What are Lifecycle Workflows? (Public Preview)
+
+Azure AD Lifecycle Workflows is a new Azure AD Identity Governance service that enables organizations to manage Azure AD users by automating these three basic lifecycle processes:
+
+- Joiner - When an individual comes into scope of needing access. An example is a new employee joining a company or organization.
+- Mover - When an individual moves between boundaries within an organization. This movement may require more access or authorization. An example would be a user who was in marketing and is now a member of the sales organization.
+- Leaver - When an individual leaves the scope of needing access, access may need to be removed. Examples would be an employee who is retiring or an employee who has been terminated.
+
+Workflows contain specific processes, which run automatically against users as they move through their life cycle. Workflows are made up of [Tasks](lifecycle-workflow-tasks.md) and [Execution conditions](understanding-lifecycle-workflows.md#understanding-lifecycle-workflows).
+
+Tasks are specific actions that run automatically when a workflow is triggered. An execution condition defines the scope of "who" and the trigger of "when" a workflow will be performed. For example, sending a manager an email 7 days before the value in the NewEmployeeHireDate attribute of new employees can be described as a workflow. It consists of:
+ - Task: send email
+ - When (trigger): Seven days before the NewEmployeeHireDate attribute value
+ - Who (scope): new employees
+
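+Expressed in the schema used by the Lifecycle Workflows Graph API, the execution conditions for an example like this might look roughly like the following sketch. The property names mirror the scheduled-leaver tutorial; the rule, the offset value, and its sign convention (before versus after the date) are illustrative and should be confirmed against the trigger documentation.
+
+```Example
+"executionConditions": {
+    "@odata.type": "#microsoft.graph.identityGovernance.triggerAndScopeBasedConditions",
+    "scope": {
+        "@odata.type": "#microsoft.graph.identityGovernance.ruleBasedSubjectSet",
+        "rule": "department eq 'Sales'"
+    },
+    "trigger": {
+        "@odata.type": "#microsoft.graph.identityGovernance.timeBasedAttributeTrigger",
+        "timeBasedAttribute": "employeeHireDate",
+        "offsetInDays": -7
+    }
+}
+```
+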
+Automatic workflow schedules [trigger](understanding-lifecycle-workflows.md#trigger-details) off of user attributes. Scoping of automatic workflows is possible using a wide range of user and extended attributes, such as the "department" that a user belongs to.
+
+Finally, Lifecycle Workflows can even [integrate with Logic Apps](lifecycle-workflow-extensibility.md), giving you the ability to extend workflows for more complex scenarios using your existing Logic Apps.
++
+ :::image type="content" source="media/what-are-lifecycle-workflows/intro-2.png" alt-text="Lifecycle Workflows diagram." lightbox="media/what-are-lifecycle-workflows/intro-2.png":::
++
+## Why use Lifecycle workflows?
+Anyone who wants to modernize their identity lifecycle management process for employees needs to ensure:
+
+ - **New employee on-boarding** - That when a user joins the organization, they're ready to go on day one. They have the correct access to the information, membership to groups, and applications they need.
+ - **Employee retirement/terminations/off-boarding** - That users who are no longer tied to the company for various reasons (termination, separation, leave of absence or retirement), have their access revoked in a timely manner
+ - **Easy to administer in my organization** - That there's a seamless process to accomplish the above tasks, that isn't overly burdensome or time consuming for Administrators.
+ - **Robust troubleshooting/auditing/compliance** - That there's the ability to easily troubleshoot issues when they arise and that there's sufficient logging to help with this and compliance related issues.
+
+The following are key reasons to use Lifecycle workflows.
+- **Extend** your HR-driven provisioning process with other workflows that simplify and automate tasks.
+- **Centralize** your workflow process so you can easily create and manage workflows all in one location.
+- Easily **troubleshoot** workflow scenarios with the Workflow history and Audit logs
+- **Manage** user lifecycle at scale. As your organization grows, the need for other resources to manage user lifecycles is reduced.
+- **Reduce** or remove manual tasks that were done in the past with automated lifecycle workflows
+- **Apply** logic apps to extend workflows for more complex scenarios using your existing Logic apps
++
+All of the above can help ensure a holistic experience by allowing you to remove other dependencies and applications to achieve the same result, translating into increased on-boarding and off-boarding efficiency.
++
+## When to use Lifecycle Workflows
+You can use Lifecycle workflows to address any of the following conditions.
+- **Automating and extending user onboarding/HR provisioning** - Use Lifecycle workflows when you want to extend your HR provisioning scenarios by automating tasks such as generating temporary passwords and emailing managers. If you currently have a manual process for on-boarding, use Lifecycle workflows as part of an automated process.
+- **Automate group membership**: When groups in your organization are well-defined, you can automate user membership of these groups. Some of the benefits and differences from dynamic groups include:
+ - LCW manages static groups, where a dynamic group rule isn't needed
+ - No need to have one rule per group. The LCW rule determines the set/scope of users to execute workflows against, not which group.
+ - LCW helps manage users' lifecycle beyond attributes supported in dynamic groups. For example, 'X' days before the employeeHireDate.
+ - LCW can perform actions on the group, not just the membership.
+- **Workflow history and auditing**: Use Lifecycle workflows when you need to create an audit trail of user lifecycle processes. Using the portal you can view history and audits for on-boarding and off-boarding scenarios.
+- **Automate user account management**: Making sure users who are leaving have their access to resources revoked is a key part of the identity lifecycle process. Lifecycle Workflows allow you to automate the disabling and removal of user accounts.
+- **Integrate with Logic Apps**: Ability to apply logic apps to extend workflows for more complex scenarios using your existing Logic apps.
++++
+## Next steps
+- [Create a custom workflow using the Azure portal](tutorial-onboard-custom-workflow-portal.md)
+- [Create a Lifecycle workflow](create-lifecycle-workflow.md)
active-directory Workflows Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md
+
+ Title: 'Lifecycle workflows FAQs - Azure AD (preview)'
+description: Frequently asked questions about Lifecycle workflows (preview).
+++++++ Last updated : 07/14/2022++++
+# Lifecycle workflows - FAQs (preview)
+
+In this article, you will find answers to commonly asked questions about [Lifecycle Workflows](what-are-lifecycle-workflows.md). Check back to this page frequently, as changes happen often and answers are continually being added.
+
+## Frequently asked questions
+
+### Can I create custom workflows for guests?
+
+Yes, custom workflows can be configured for members or guests in your tenant. Workflows can run for all user types: external guests, external members, internal guests, and internal members.
+
+### Do I need to map employeeHireDate in provisioning apps like WorkDay?
+
+Yes, key user properties like employeeHireDate and employeeType are supported for user provisioning from HR apps like WorkDay. To use these properties in Lifecycle workflows, you will need to map them in the provisioning process to ensure the values are set. The following is an example of the mapping:
+
+![Screenshot showing an example of how mapping is done in a Lifecycle Workflow.](./media/workflows-faqs/workflows-mapping.png)
+
+### How do I see more details and parameters of tasks and the attributes that are being updated?
+
+Some tasks do update existing attributes; however, we don't currently share those specific details. Because these tasks update attributes related to other Azure AD features, you can find that info in those docs. For temporary access pass, we're writing to the appropriate attributes listed [here](/graph/api/temporaryaccesspassauthenticationmethod-post?view=graph-rest-beta&tabs=csharp#request-body).
+
+### Is it possible for me to create new tasks and how? For example, triggering other graph APIs/web hooks?
+
+We currently don't support the ability to create new tasks outside of the set of tasks supported in the task templates. As an alternative, you may accomplish this by setting up a logic app and then creating a Logic Apps task in Lifecycle Workflows with the URL. For more information, see [Trigger Logic Apps based on custom task extensions (preview)](trigger-custom-task.md).
+
+## Next steps
+
+- [What are Lifecycle workflows? (Preview)](what-are-lifecycle-workflows.md)
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
- Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to setup alerts to monitor changes to the trust established between your Idp and Azure AD. - Enable Multi Factor Authentication (MFA) for all users that have privileged access in Azure AD or in AD. One security issue with using AADConnect is that if an attacker can get control over the Azure AD Connect server they can manipulate users in Azure AD. To prevent a attacker from using these capabilities to take over Azure AD accounts, MFA offers protections so that even if an attacker manages to e.g. reset a user's password using Azure AD Connect they still cannot bypass the second factor.
+- Disable Soft Matching on your tenant. Soft Matching is a great feature to help transfer the source of authority for existing cloud-only objects to Azure AD Connect, but it comes with certain security risks. If you do not require Soft Matching, you should disable it: https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-syncservice-features#blocksoftmatch
### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors).
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
In | Can do
[Compliance Manager](/office365/securitycompliance/meet-data-protection-and-regulatory-reqs-using-microsoft-cloud) | Track, assign, and verify your organization's regulatory compliance activities [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | Manage data governance<br>Perform legal and data investigation<br>Manage Data Subject Request<br><br>This role has the same permissions as the [Compliance Administrator RoleGroup](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) in Office 365 Security & Compliance Center role-based access control. [Intune](/intune/role-based-access-control) | View all Intune audit data
-[Cloud App Security](/cloud-app-security/manage-admins) | Has read-only permissions and can manage alerts<br>Can create and modify file policies and allow file governance actions<br>Can view all the built-in reports under Data Management
+[Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Has read-only permissions and can manage alerts<br>Can create and modify file policies and allow file governance actions<br>Can view all the built-in reports under Data Management
> [!div class="mx-tableFixed"] > | Actions | Description |
In | Can do
[Compliance Manager](/office365/securitycompliance/meet-data-protection-and-regulatory-reqs-using-microsoft-cloud) | Track, assign, and verify your organization's regulatory compliance activities [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | Manage data governance<br>Perform legal and data investigation<br>Manage Data Subject Request<br><br>This role has the same permissions as the [Compliance Data Administrator RoleGroup](/microsoft-365/security/office-365-security/permissions-in-the-security-and-compliance-center#permissions-needed-to-use-features-in-the-security--compliance-center) in Office 365 Security & Compliance Center role-based access control. [Intune](/intune/role-based-access-control) | View all Intune audit data
-[Cloud App Security](/cloud-app-security/manage-admins) | Has read-only permissions and can manage alerts<br>Can create and modify file policies and allow file governance actions<br>Can view all the built-in reports under Data Management
+[Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Has read-only permissions and can manage alerts<br>Can create and modify file policies and allow file governance actions<br>Can view all the built-in reports under Data Management
> [!div class="mx-tableFixed"] > | Actions | Description |
Identity Protection Center | All permissions of the Security Reader role<br>Addi
Azure Advanced Threat Protection | Monitor and respond to suspicious security activity [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | Assign roles<br>Manage machine groups<br>Configure endpoint threat detection and automated remediation<br>View, investigate, and respond to alerts<br/>View machines/device inventory [Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information<br>Cannot make changes to Intune
-[Cloud App Security](/cloud-app-security/manage-admins) | Add admins, add policies and settings, upload logs and perform governance actions
+[Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Add admins, add policies and settings, upload logs and perform governance actions
[Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services [Smart lockout](../authentication/howto-password-smart-lockout.md) | Define the threshold and duration for lockouts when failed sign-in events happen. [Password Protection](../authentication/concept-password-ban-bad.md) | Configure custom banned password list or on-premises password protection.
Users with this role can manage alerts and have global read-only access on secur
| [Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | | [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts | | [Intune](/intune/role-based-access-control) | All permissions of the Security Reader role |
-| [Microsoft Defender for Cloud Apps](/cloud-app-security/manage-admins) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts |
+| [Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | All permissions of the Security Reader role<br>View, investigate, and respond to security alerts |
| [Microsoft 365 service health](/microsoft-365/enterprise/view-service-health) | View the health of Microsoft 365 services | > [!div class="mx-tableFixed"]
Identity Protection Center | Read all security reports and settings information
[Office 365 Security & Compliance Center](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d) | View security policies<br>View and investigate security threats<br>View reports [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/prepare-deployment) | View and investigate alerts. When you turn on role-based access control in Microsoft Defender for Endpoint, users with read-only permissions such as the Azure AD Security Reader role lose access until they are assigned to a Microsoft Defender for Endpoint role. [Intune](/intune/role-based-access-control) | Views user, device, enrollment, configuration, and application information. Cannot make changes to Intune.
-[Microsoft Defender for Cloud Apps](/cloud-app-security/manage-admins) | Has read permissions.
+[Microsoft Defender for Cloud Apps](/defender-cloud-apps/manage-admins) | Has read permissions.
[Microsoft 365 service health](/office365/enterprise/view-service-health) | View the health of Microsoft 365 services > [!div class="mx-tableFixed"]
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
+
+ Title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) (Preview)
+description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet.
+++ Last updated : 08/29/2022++
+# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
+
+The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every Pod either from a pre-reserved set of IPs on every node or from a separate subnet reserved for pods. This approach requires IP address planning and could lead to address exhaustion and difficulties in scaling your clusters as your application demands grow.
+
+With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (via the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
+
+> [!NOTE]
+> Azure CNI Overlay is currently only available in the West Central US region.
+
+## Overview of overlay networking
+
+In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
+
+A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.
++
+Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You will have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet.
+
+Outbound (egress) connectivity to the internet for overlay pods can be provided using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting).
+
+Ingress connectivity to the cluster can be achieved using an ingress controller such as Nginx or [HTTP application routing](./http-application-routing.md).
+
+## Difference between Kubenet and Azure CNI Overlay
+
+Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically different from the VNet but has scaling and other limitations. The below table provides a detailed comparison between Kubenet and Azure CNI Overlay. If you do not want to assign VNet IP addresses to pods due to IP shortage, then Azure CNI Overlay is the recommended solution.
+
+| Area | Azure CNI Overlay | Kubenet |
+| -- | :--: | -- |
+| Cluster scale | 1000 nodes and 250 pods/node | 400 nodes and 250 pods/node |
+| Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking |
+| Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency |
+| Kubernetes Network Policies | Azure Network Policies, Calico | Calico |
+| OS platforms supported | Linux only | Linux only |
+
+## IP address planning
+
+* **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so ensure that you have a subnet big enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).
+
+* **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
+The following are additional factors to consider when planning pod address space:
+ * Pod CIDR space must not overlap with the cluster subnet range.
+ * Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks.
+ * The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.
+
+* **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range should also not overlap with the pod CIDR range, cluster subnet range, and IP range used in peered VNets and on-premises networks.
+
+* **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that will be used by cluster service discovery. Don't use the first IP address in your address range. The first address in your subnet range is used for the kubernetes.default.svc.cluster.local address.
+
+## Maximum pods per node
+
+You can configure the maximum number of pods per node at the time of cluster creation or when you add a new node pool. The default for Azure CNI Overlay is 30. The maximum value that you can specify in Azure CNI Overlay is 250, and the minimum value is 10. The maximum pods per node value configured during creation of a node pool applies to the nodes in that node pool only.
+
+## Choosing a network model to use
+
+Azure CNI offers two IP addressing options for pods: the traditional configuration that assigns VNet IPs to pods, and overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
+
+Use overlay networking when:
+
+* You would like to scale to a large number of Pods but have limited IP address space in your VNet.
+* Most of the pod communication is within the cluster.
+* You don't need advanced AKS features, such as virtual nodes.
+
+Use the traditional VNet option when:
+
+* You have available IP address space.
+* Most of the pod communication is to resources outside of the cluster.
+* Resources outside the cluster need to reach pods directly.
+* You need AKS advanced features, such as virtual nodes.
+
+## Limitations with Azure CNI Overlay
+
+The overlay solution has the following limitations today:
+
+* Only available for Linux and not for Windows.
+* You can't deploy multiple overlay clusters in the same subnet.
+* Overlay can be enabled only for new clusters. Existing (already deployed) clusters can't be configured to use overlay.
+* You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
+
+## Steps to set up overlay clusters
++
+The following example walks through the steps to create a new virtual network with a subnet for the cluster nodes and an AKS cluster that uses Azure CNI Overlay. Be sure to replace the variables with your own values.
+
+First, opt into the feature by running the following command:
+
+```azurecli-interactive
+az feature register --namespace Microsoft.ContainerService --name AzureOverlayPreview
+```
+
+Create a virtual network with a subnet for the cluster nodes.
+
+```azurecli-interactive
+resourceGroup="myResourceGroup"
+vnet="myVirtualNetwork"
+location="westcentralus"
+
+# Create the resource group
+az group create --name $resourceGroup --location $location
+
+# Create a VNet and a subnet for the cluster nodes
+az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefix 10.10.0.0/16 -o none
+```
+
+Create a cluster with Azure CNI Overlay. Use `--network-plugin-mode` to specify that this is an overlay cluster. If the pod CIDR is not specified, AKS assigns a default space of 10.244.0.0/16.
+
+```azurecli-interactive
+clusterName="myOverlayCluster"
+subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
+
+az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet
+```
+
+## Frequently asked questions
+
+* *How do pods and cluster nodes communicate with each other?*
+
+ Pods and nodes talk to each other directly without any SNAT requirements.
++
+* *Can I configure the size of the address space assigned to each node?*
+
+ No, this is fixed at `/24` today and can't be changed.
++
+* *Can I add more private pod CIDRs to a cluster after the cluster has been created?*
+
+ No, a private pod CIDR can only be specified at the time of cluster creation.
++
+* *What are the max nodes and pods per cluster supported by Overlay?*
+
+ The max scale in terms of nodes and pods per cluster is the same as the max limits supported by AKS today.
aks Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-configuration.md
Title: Cluster configuration in Azure Kubernetes Services (AKS)
description: Learn how to configure a cluster in Azure Kubernetes Service (AKS) Previously updated : 08/05/2022 Last updated : 08/31/2022
As you work with the node resource group, keep in mind that you can't:
- Specify names for the managed resources within the node resource group. - Modify or delete Azure-created tags of managed resources within the node resource group.
+## Node Restriction (Preview)
+
+The [Node Restriction](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction) admission controller limits the Node and Pod objects a kubelet can modify. Node Restriction is on by default in AKS 1.24+ clusters. If you're using an older version, use the commands below to create a cluster with Node Restriction or update an existing cluster to add Node Restriction.
++
+### Before you begin
+
+You must have the following resources installed:
+
+* The Azure CLI
+* The `aks-preview` extension version 0.5.95 or later
+
+#### Install the aks-preview CLI extension
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Create an AKS cluster with Node Restriction
+
+To create a cluster with Node Restriction enabled, use the `--enable-node-restriction` flag:
+
+```azurecli-interactive
+az aks create -n aks -g myResourceGroup --enable-node-restriction
+```
+
+### Update an AKS cluster with Node Restriction
+
+To enable Node Restriction on an existing cluster, run the following command:
+
+```azurecli-interactive
+az aks update -n aks -g myResourceGroup --enable-node-restriction
+```
+
+### Remove Node Restriction from an AKS cluster
+
+To remove Node Restriction from an existing cluster, run the following command:
+
+```azurecli-interactive
+az aks update -n aks -g myResourceGroup --disable-node-restriction
+```
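+
+To confirm the setting on an existing cluster, you can inspect the cluster's security profile in the `az aks show` output. The exact property path for Node Restriction is an assumption here, so check the full JSON if it differs:
+
+```azurecli
+# The Node Restriction state is expected under the security profile (property name assumed).
+az aks show -n aks -g myResourceGroup --query securityProfile -o json
+```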
+ ## OIDC Issuer (Preview) This enables an OIDC Issuer URL of the provider which allows the API server to discover public signing keys.
api-management Api Management Howto Aad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad-b2c.md
In this section, you'll create a user flow in your Azure Active Directory B2C te
1. Enter a unique name for the user flow. 1. In **Identity providers**, select **Email signup**. 1. In **User attributes and token claims**, select the attributes and claims needed for the API Management developer portal (not needed for the legacy developer portal).
- ![Application claims](./media/api-management-howto-aad-b2c/api-management-application-claims.png)
* **Attributes**: Given Name, Surname
- * **Claims**: Email Addresses, Given Name, Surname, UserΓÇÖs ObjectID
+ * **Claims**: Given Name, Surname, Email Addresses, UserΓÇÖs ObjectID
+
+ ![Application claims](./media/api-management-howto-aad-b2c/api-management-application-claims.png)
1. Select **Create**. ## Configure identity provider for developer portal
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
An API developer writes an API definition by providing a specification, settings
There are several tools to assist producing the API definition: * The [Azure API Management DevOps Resource Toolkit][4] includes two tools that provide an Azure Resource Manager (ARM) template. The _extractor_ creates an ARM template by extracting an API definition from an API Management service. The _creator_ produces the ARM template from a YAML specification. The DevOps Resource Toolkit supports SOAP, REST, and GraphQL APIs.
-* The [Azure API Ops Toolkit][5] provides a workflow built on top of a [git][21] source code control system (such as [GitHub][22] or [Azure Repos][23]). It uses an _extractor_ similar to the DevOps Resource Toolkit to produce an API definition that is then applied to a target API Management service. API Ops supports REST only at this time.
+* The [Azure APIOps Toolkit][5] provides a workflow built on top of a [git][21] source code control system (such as [GitHub][22] or [Azure Repos][23]). It uses an _extractor_ similar to the DevOps Resource Toolkit to produce an API definition that is then applied to a target API Management service. APIOps supports REST only at this time.
* The [dotnet-apim][6] tool converts a well-formed YAML definition into an ARM template for later deployment. The tool is focused on REST APIs. * [Terraform][7] is an alternative to Azure Resource Manager to configure resources in Azure. You can create a Terraform configuration (together with policies) to implement the API in the same way that an ARM template is created.
Once the automated tools have been run, the API definition is reviewed by the hu
The API definition will be published to an API Management service through a release pipeline. The tools used to publish the API definition depend on the tool used to produce the API definition: * If using the [Azure API Management DevOps Resource Toolkit][4] or [dotnet-apim][6], the API definition is represented as an ARM template. Tasks are available for [Azure Pipelines][17] and [GitHub Actions][18] to deploy an ARM template.
-* If using the [Azure API Ops Toolkit][5], the toolkit includes a publisher that writes the API definition to the service.
+* If using the [Azure APIOps Toolkit][5], the toolkit includes a publisher that writes the API definition to the service.
* If using [Terraform][7], CLI tools will deploy the API definition on your service. There are tasks available for [Azure Pipelines][19] and [GitHub Actions][20] > **Can I use other source code control and CI/CD systems?** >
-> Yes. The process described works with any source code control system (although API Ops does require that the source code control system is [git][21] based). Similarly, you can use any CI/CD platform as long as it can be triggered by a check-in and run command line tools that communicate with Azure.
+> Yes. The process described works with any source code control system (although APIOps does require that the source code control system is [git][21] based). Similarly, you can use any CI/CD platform as long as it can be triggered by a check-in and run command line tools that communicate with Azure.
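+
+For example, because the toolkits above can produce ARM templates, a pipeline step can deploy one with a plain Azure CLI call; the resource group and file names below are placeholders:
+
+```azurecli
+# Deploy an extracted or generated ARM template to the resource group that hosts the API Management service.
+az deployment group create --resource-group my-apim-rg --template-file apim-api.template.json --parameters apim-api.parameters.json
+```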
## Best practices
There's no industry standard for setting up a DevOps pipeline for publishing API
* [Azure Repos][23] stores the API definitions in a [git][21] repository. * [Azure Pipelines][17] runs the automated API approval and API publication processes.
-* [Azure API Ops Toolkit][5] provides tools and workflows for publishing APIs.
+* [Azure APIOps Toolkit][5] provides tools and workflows for publishing APIs.
We've seen the greatest success in customer deployments, and recommend the following practices: * Set up either [GitHub][22] or [Azure Repos][23] for your source code control system. This choice will determine your choice of pipeline runner as well. GitHub can use [Azure Pipelines][17] or [GitHub Actions][18], whereas Azure Repos must use Azure Pipelines. * Set up an Azure API Management service for each API developer so that they can develop API definitions along with the API service. Use the consumption or developer SKU when creating the service. * Use [policy fragments][24] to reduce the new policy that developers need to write for each API.
-* Use the [Azure API Ops Toolkit][5] to extract a working API definition from the developer service.
+* Use the [Azure APIOps Toolkit][5] to extract a working API definition from the developer service.
* Set up an API approval process that runs on each pull request. The API approval process should include breaking change detection, linting, and automated API testing.
-* Use the [Azure API Ops Toolkit][5] publisher to publish the API to your production API Management service.
+* Use the [Azure APIOps Toolkit][5] publisher to publish the API to your production API Management service.
-Review [Automated API deployments with API Ops][28] in the Azure Architecture Center for more details on how to configure and run a CI/CD deployment pipeline with API Ops.
+Review [Automated API deployments with APIOps][28] in the Azure Architecture Center for more details on how to configure and run a CI/CD deployment pipeline with APIOps.
## References * [Azure DevOps Services][25] includes [Azure Repos][23] and [Azure Pipelines][17].
-* [Azure API Ops Toolkit][5] provides a workflow for API Management DevOps.
+* [Azure APIOps Toolkit][5] provides a workflow for API Management DevOps.
* [Spectral][12] provides a linter for OpenAPI specifications. * [openapi-diff][13] provides a breaking change detector for OpenAPI v3 definitions. * [Newman][15] provides an automated test runner for Postman collections.
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/language-support.md
This article lists supported human languages for Immersive Reader features.
| Language | Tag | |-|--|
+| Afrikaans | af |
+| Afrikaans (South Africa) | af-ZA |
+| Albanian | sq |
+| Albanian (Albania) | sq-AL |
+| Amharic | am |
+| Amharic (Ethiopia) | am-ET |
| Arabic (Egyptian) | ar-EG |
+| Arabic (Lebanon) | ar-LB |
+| Arabic (Oman) | ar-OM |
| Arabic (Saudi Arabia) | ar-SA |
+| Azerbaijani | az |
+| Azerbaijani (Azerbaijan) | az-AZ |
+| Bangla | bn |
+| Bangla (Bangladesh) | bn-BD |
+| Bangla (India) | bn-IN |
+| Bosnian | bs |
+| Bosnian (Bosnia & Herzegovina) | bs-BA |
| Bulgarian | bg | | Bulgarian (Bulgaria) | bg-BG |
+| Burmese | my |
+| Burmese (Myanmar) | my-MM |
| Catalan | ca | | Catalan (Catalan) | ca-ES | | Chinese | zh |
This article lists supported human languages for Immersive Reader features.
| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK | | Chinese Traditional (Macao SAR) | zh-Hant-MO | | Chinese Traditional (Taiwan) | zh-Hant-TW |
+| Chinese (Literary) | lzh |
+| Chinese (Literary, China) | lzh-CN |
| Croatian | hr | | Croatian (Croatia) | hr-HR | | Czech | cs |
This article lists supported human languages for Immersive Reader features.
| English (Hong Kong SAR) | en-HK | | English (India) | en-IN | | English (Ireland) | en-IE |
+| English (Kenya) | en-KE |
| English (New Zealand) | en-NZ |
+| English (Nigeria) | en-NG |
| English (Philippines) | en-PH |
+| English (Singapore) | en-SG |
+| English (South Africa) | en-ZA |
+| English (Tanzania) | en-TZ |
| English (United Kingdom) | en-GB | | English (United States) | en-US | | Estonian | et-EE |
+| Filipino | fil |
+| Filipino (Philippines) | fil-PH |
| Finnish | fi | | Finnish (Finland) | fi-FI | | French | fr |
This article lists supported human languages for Immersive Reader features.
| French (Canada) | fr-CA | | French (France) | fr-FR | | French (Switzerland) | fr-CH |
+| Galician | gl |
+| Galician (Spain) | gl-ES |
+| Georgian | ka |
+| Georgian (Georgia) | ka-GE |
| German | de | | German (Austria) | de-AT | | German (Germany) | de-DE | | German (Switzerland)| de-CH | | Greek | el | | Greek (Greece) | el-GR |
+| Gujarati | gu |
+| Gujarati (India) | gu-IN |
| Hebrew | he | | Hebrew (Israel) | he-IL | | Hindi | hi | | Hindi (India) | hi-IN | | Hungarian | hu | | Hungarian (Hungary) | hu-HU |
+| Icelandic | is |
+| Icelandic (Iceland) | is-IS |
| Indonesian | id | | Indonesian (Indonesia) | id-ID |
-| Irish | ga-IE |
+| Irish | ga |
+| Irish (Ireland) | ga-IE |
| Italian | it | | Italian (Italy) | it-IT | | Japanese | ja | | Japanese (Japan) | ja-JP |
+| Javanese | jv |
+| Javanese (Indonesia) | jv-ID |
+| Kannada | kn |
+| Kannada (India) | kn-IN |
+| Kazakh | kk |
+| Kazakh (Kazakhstan) | kk-KZ |
+| Khmer | km |
+| Khmer (Cambodia) | km-KH |
| Korean | ko | | Korean (Korea) | ko-KR |
+| Lao | lo |
+| Lao (Laos) | lo-LA |
| Latvian | Lv-LV |
+| Latvian (Latvia) | lv-LV |
+| Lithuanian | lt |
| Lithuanian | lt-LT |
+| Macedonian | mk |
+| Macedonian (North Macedonia) | mk-MK |
| Malay | ms | | Malay (Malaysia) | ms-MY |
-| Maltese | Mt-MT |
+| Malayalam | ml |
+| Malayalam (India) | ml-IN |
+| Maltese | mt |
+| Maltese (Malta) | mt-MT |
+| Marathi | mr |
+| Marathi (India) | mr-IN |
+| Mongolian | mn |
+| Mongolian (Mongolia) | mn-MN |
+| Nepali | ne |
+| Nepali (Nepal) | ne-NP |
| Norwegian Bokmal| nb | | Norwegian Bokmal (Norway) | nb-NO |
+| Pashto | ps |
+| Pashto (Afghanistan) | ps-AF |
+| Persian | fa |
+| Persian (Iran) | fa-IR |
| Polish | pl | | Polish (Poland) | pl-PL | | Portuguese | pt | | Portuguese (Brazil) | pt-BR |
-| Portuguese (Portugal) | pt-PT |
+| Portuguese (Portugal) | pt-PT |
| Romanian | ro | | Romanian (Romania) | ro-RO | | Russian | ru | | Russian (Russia) | ru-RU |
+| Serbian (Cyrillic) | sr-Cyrl |
+| Serbian (Cyrillic, Serbia) | sr-Cyrl-RS |
+| Sinhala | si |
+| Sinhala (Sri Lanka) | si-LK |
| Slovak | sk | | Slovak (Slovakia) | sk-SK | | Slovenian | sl | | Slovenian (Slovenia) | sl-SI |
+| Somali | so |
+| Somali (Somalia) | so-SO |
| Spanish | es |
+| Spanish (Argentina) | es-AR |
+| Spanish (Colombia) | es-CO |
| Spanish (Latin America) | es-419 | | Spanish (Mexico) | es-MX | | Spanish (Spain) | es-ES |
+| Spanish (United States) | es-US |
+| Sundanese | su |
+| Sundanese (Indonesia) | su-ID |
+| Swahili | sw |
+| Swahili (Kenya) | sw-KE |
| Swedish | sv | | Swedish (Sweden) | sv-SE | | Tamil | ta | | Tamil (India) | ta-IN |
+| Tamil (Malaysia) | ta-MY |
| Telugu | te | | Telugu (India) | te-IN | | Thai | th | | Thai (Thailand) | th-TH | | Turkish | tr | | Turkish (Turkey) | tr-TR |
-| Ukrainian | ur-PK |
+| Ukrainian | uk |
+| Ukrainian (Ukraine) | uk-UA |
+| Urdu | ur |
+| Urdu (India) | ur-IN |
+| Uzbek | uz |
+| Uzbek (Uzbekistan) | uz-UZ |
| Vietnamese | vi | | Vietnamese (Vietnam) | vi-VN |
-| Welsh | Cy-GB |
+| Welsh | cy |
+| Welsh (United Kingdom) | cy-GB |
+| Zulu | zu |
+| Zulu (South Africa) | zu-ZA |
## Translation
This article lists supported human languages for Immersive Reader features.
| Arabic (Egyptian) | ar-EG | | Arabic (Saudi Arabia) | ar-SA | | Armenian | hy |
+| Assamese | as |
| Azerbaijani | az |
-| Afrikaans | af |
| Bangla | bn |
+| Bashkir | ba |
| Bosnian | bs | | Bulgarian | bg | | Bulgarian (Bulgaria) | bg-BG |
This article lists supported human languages for Immersive Reader features.
| Chinese Traditional (Hong Kong SAR) | zh-Hant-HK | | Chinese Traditional (Macao SAR) | zh-Hant-MO | | Chinese Traditional (Taiwan) | zh-Hant-TW |
+| Chinese (Literary) | lzh |
| Croatian | hr | | Croatian (Croatia) | hr-HR | | Czech | cs |
This article lists supported human languages for Immersive Reader features.
| Danish | da | | Danish (Denmark) | da-DK | | Dari (Afghanistan) | prs |
+| Divehi | dv |
| Dutch | nl | | Dutch (Netherlands) | nl-NL | | English | en |
This article lists supported human languages for Immersive Reader features.
| English (United Kingdom) | en-GB | | English (United States) | en-US | | Estonian | et |
+| Faroese | fo |
| Fijian | fj | | Filipino | fil | | Finnish | fi |
This article lists supported human languages for Immersive Reader features.
| French (Canada) | fr-CA | | French (France) | fr-FR | | French (Switzerland) | fr-CH |
+| Georgian | ka |
| German | de | | German (Austria) | de-AT | | German (Germany) | de-DE | | German (Switzerland)| de-CH |
-| Gujarati | gu |
| Greek | el | | Greek (Greece) | el-GR |
+| Gujarati | gu |
| Haitian (Creole) | ht | | Hebrew | he | | Hebrew (Israel) | he-IL |
This article lists supported human languages for Immersive Reader features.
| Icelandic | is | | Indonesian | id | | Indonesian (Indonesia) | id-ID |
+| Inuinnaqtun | ikt |
+| Inuktitut | iu |
+| Inuktitut (Latin) | iu-Latn |
| Irish | ga | | Italian | it | | Italian (Italy) | it-IT |
This article lists supported human languages for Immersive Reader features.
| Korean (Korea) | ko-KR | | Kurdish (Central) | ku | | Kurdish (Northern) | kmr |
+| Kurdish (Central) | ckb |
+| Kyrgyz | ky |
| Lao | lo | | Latvian | lv | | Lithuanian | lt |
+| Macedonian | mk |
| Malagasy | mg | | Malay | ms | | Malay (Malaysia) | ms-MY |
This article lists supported human languages for Immersive Reader features.
| Maltese | mt | | Maori | mi | | Marathi | mr |
+| Mongolian (Cyrillic) | mn-Cyrl |
+| Mongolian (Traditional) | mn-Mong |
| Nepali | ne | | Norwegian Bokmal| nb | | Norwegian Bokmal (Norway) | nb-NO |
This article lists supported human languages for Immersive Reader features.
| Polish (Poland) | pl-PL | | Portuguese | pt | | Portuguese (Brazil) | pt-BR |
-| Portuguese (Portugal) | pt-PT |
+| Portuguese (Portugal) | pt-PT |
| Punjabi | pa | | Querétaro Otomi | otq | | Romanian | ro |
This article lists supported human languages for Immersive Reader features.
| Russian (Russia) | ru-RU | | Samoan | sm | | Serbian | sr |
-| Serbian(Cyrillic) | sr-Cyrl |
+| Serbian (Cyrillic) | sr-Cyrl |
| Serbian (Latin) | sr-Latn | | Slovak | sk | | Slovak (Slovakia) | sk-SK | | Slovenian | sl | | Slovenian (Slovenia) | sl-SI |
+| Somali | so |
| Spanish | es | | Spanish (Latin America) | es-419 | | Spanish (Mexico) | es-MX |
This article lists supported human languages for Immersive Reader features.
| Tahitian | ty | | Tamil | ta | | Tamil (India) | ta-IN |
+| Tatar | tt |
| Telugu | te | | Telugu (India) | te-IN | | Thai | th | | Thai (Thailand) | th-TH |
+| Tibetan | bo |
| Tigrinya | ti | | Tongan | to | | Turkish | tr | | Turkish (Turkey) | tr-TR |
+| Turkmen | tk |
| Ukrainian | uk |
+| Upper Sorbian | hsb |
| Urdu | ur |
+| Uyghur | ug |
| Vietnamese | vi | | Vietnamese (Vietnam) | vi-VN | | Welsh | cy | | Yucatec Maya | yua | | Yue Chinese | yue |-
+| Zulu | zu |
## Language detection
applied-ai-services Tutorial Ios Picture Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md
In the main project folder, which contains the ViewController.swift file, create
Open AppDelegate.swift and replace the file with the following code.
+```swift
+import UIKit
+
+@UIApplicationMain
+class AppDelegate: UIResponder, UIApplicationDelegate {
+
+ var window: UIWindow?
+
+ var navigationController: UINavigationController?
+
+ func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
+ // Override point for customization after application launch.
+
+ window = UIWindow(frame: UIScreen.main.bounds)
+
+        // Allow the app to run without a storyboard
+ if let window = window {
+ let mainViewController = PictureLaunchViewController()
+ navigationController = UINavigationController(rootViewController: mainViewController)
+ window.rootViewController = navigationController
+ window.makeKeyAndVisible()
+ }
+ return true
+ }
+
+ func applicationWillResignActive(_ application: UIApplication) {
+ // Sent when the application is about to move from active to inactive state. This can occur for certain types of temporary interruptions (such as an incoming phone call or SMS message) or when the user quits the application and it begins the transition to the background state.
+ // Use this method to pause ongoing tasks, disable timers, and invalidate graphics rendering callbacks. Games should use this method to pause the game.
+ }
+
+ func applicationDidEnterBackground(_ application: UIApplication) {
+ // Use this method to release shared resources, save user data, invalidate timers, and store enough application state information to restore your application to its current state in case it is terminated later.
+ // If your application supports background execution, this method is called instead of applicationWillTerminate: when the user quits.
+ }
+
+ func applicationWillEnterForeground(_ application: UIApplication) {
+ // Called as part of the transition from the background to the active state; here you can undo many of the changes made on entering the background.
+ }
+
+ func applicationDidBecomeActive(_ application: UIApplication) {
+ // Restart any tasks that were paused (or not yet started) while the application was inactive. If the application was previously in the background, optionally refresh the user interface.
+ }
+
+ func applicationWillTerminate(_ application: UIApplication) {
+ // Called when the application is about to terminate. Save data if appropriate. See also applicationDidEnterBackground:.
+ }
+
+}
+```
+ ## Add functionality for taking and uploading photos Rename ViewController.swift to PictureLaunchViewController.swift and replace the file with the following code.
+```swift
+import UIKit
+import immersive_reader_sdk
+
+class PictureLaunchViewController: UIViewController, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
+
+ private var photoButton: UIButton!
+ private var cameraButton: UIButton!
+ private var titleText: UILabel!
+ private var bodyText: UILabel!
+ private var sampleContent: Content!
+ private var sampleChunk: Chunk!
+ private var sampleOptions: Options!
+ private var imagePicker: UIImagePickerController!
+ private var spinner: UIActivityIndicatorView!
+ private var activityIndicatorBackground: UIView!
+ private var textURL = "vision/v2.0/read/core/asyncBatchAnalyze";
+
+ override func viewDidLoad() {
+ super.viewDidLoad()
+
+ view.backgroundColor = .white
+
+ titleText = UILabel()
+ titleText.text = "Picture to Immersive Reader with OCR"
+ titleText.font = UIFont.boldSystemFont(ofSize: 32)
+ titleText.textAlignment = .center
+ titleText.lineBreakMode = .byWordWrapping
+ titleText.numberOfLines = 0
+ view.addSubview(titleText)
+
+ bodyText = UILabel()
+ bodyText.text = "Capture or upload a photo of handprinted text on a piece of paper, handwriting, typed text, text on a computer screen, writing on a white board and many more, and watch it be presented to you in the Immersive Reader!"
+ bodyText.font = UIFont.systemFont(ofSize: 18)
+ bodyText.lineBreakMode = .byWordWrapping
+ bodyText.numberOfLines = 0
+ let screenSize = self.view.frame.height
+ if screenSize <= 667 {
+ // Font size for smaller iPhones.
+ bodyText.font = bodyText.font.withSize(16)
+
+ } else if screenSize <= 812.0 {
+ // Font size for medium iPhones.
+ bodyText.font = bodyText.font.withSize(18)
+
+ } else if screenSize <= 896 {
+ // Font size for larger iPhones.
+ bodyText.font = bodyText.font.withSize(20)
+
+ } else {
+ // Font size for iPads.
+ bodyText.font = bodyText.font.withSize(26)
+ }
+ view.addSubview(bodyText)
+
+ photoButton = UIButton()
+ photoButton.backgroundColor = .darkGray
+ photoButton.contentEdgeInsets = UIEdgeInsets(top: 10, left: 5, bottom: 10, right: 5)
+ photoButton.layer.cornerRadius = 5
+ photoButton.setTitleColor(.white, for: .normal)
+ photoButton.setTitle("Choose Photo from Library", for: .normal)
+ photoButton.titleLabel?.font = UIFont.systemFont(ofSize: 18, weight: .bold)
+ photoButton.addTarget(self, action: #selector(selectPhotoButton(sender:)), for: .touchUpInside)
+ view.addSubview(photoButton)
+
+ cameraButton = UIButton()
+ cameraButton.backgroundColor = .darkGray
+ cameraButton.contentEdgeInsets = UIEdgeInsets(top: 10, left: 5, bottom: 10, right: 5)
+ cameraButton.layer.cornerRadius = 5
+ cameraButton.setTitleColor(.white, for: .normal)
+ cameraButton.setTitle("Take Photo", for: .normal)
+ cameraButton.titleLabel?.font = UIFont.systemFont(ofSize: 18, weight: .bold)
+ cameraButton.addTarget(self, action: #selector(takePhotoButton(sender:)), for: .touchUpInside)
+ view.addSubview(cameraButton)
+
+ activityIndicatorBackground = UIView()
+ activityIndicatorBackground.backgroundColor = UIColor.black
+ activityIndicatorBackground.alpha = 0
+ view.addSubview(activityIndicatorBackground)
+ view.bringSubviewToFront(_: activityIndicatorBackground)
+
+ spinner = UIActivityIndicatorView(style: .whiteLarge)
+ view.addSubview(spinner)
+
+ let layoutGuide = view.safeAreaLayoutGuide
+
+ titleText.translatesAutoresizingMaskIntoConstraints = false
+ titleText.topAnchor.constraint(equalTo: layoutGuide.topAnchor, constant: 25).isActive = true
+ titleText.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 20).isActive = true
+ titleText.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -20).isActive = true
+
+ bodyText.translatesAutoresizingMaskIntoConstraints = false
+ bodyText.topAnchor.constraint(equalTo: titleText.bottomAnchor, constant: 35).isActive = true
+ bodyText.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 20).isActive = true
+ bodyText.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -20).isActive = true
+
+ cameraButton.translatesAutoresizingMaskIntoConstraints = false
+ if screenSize > 896 {
+ // Constraints for iPads.
+ cameraButton.heightAnchor.constraint(equalToConstant: 150).isActive = true
+ cameraButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 60).isActive = true
+ cameraButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -60).isActive = true
+ cameraButton.topAnchor.constraint(equalTo: bodyText.bottomAnchor, constant: 150).isActive = true
+ } else {
+ // Constraints for iPhones.
+ cameraButton.heightAnchor.constraint(equalToConstant: 100).isActive = true
+ cameraButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 30).isActive = true
+ cameraButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -30).isActive = true
+ cameraButton.topAnchor.constraint(equalTo: bodyText.bottomAnchor, constant: 100).isActive = true
+ }
+ cameraButton.bottomAnchor.constraint(equalTo: photoButton.topAnchor, constant: -40).isActive = true
+
+ photoButton.translatesAutoresizingMaskIntoConstraints = false
+ if screenSize > 896 {
+ // Constraints for iPads.
+ photoButton.heightAnchor.constraint(equalToConstant: 150).isActive = true
+ photoButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 60).isActive = true
+ photoButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -60).isActive = true
+ } else {
+ // Constraints for iPhones.
+ photoButton.heightAnchor.constraint(equalToConstant: 100).isActive = true
+ photoButton.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor, constant: 30).isActive = true
+ photoButton.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor, constant: -30).isActive = true
+ }
+
+ spinner.translatesAutoresizingMaskIntoConstraints = false
+ spinner.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true
+ spinner.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
+
+ activityIndicatorBackground.translatesAutoresizingMaskIntoConstraints = false
+ activityIndicatorBackground.topAnchor.constraint(equalTo: layoutGuide.topAnchor).isActive = true
+ activityIndicatorBackground.bottomAnchor.constraint(equalTo: layoutGuide.bottomAnchor).isActive = true
+ activityIndicatorBackground.leadingAnchor.constraint(equalTo: layoutGuide.leadingAnchor).isActive = true
+ activityIndicatorBackground.trailingAnchor.constraint(equalTo: layoutGuide.trailingAnchor).isActive = true
+
+ // Create content and options.
+ sampleChunk = Chunk(content: bodyText.text!, lang: nil, mimeType: nil)
+ sampleContent = Content( Title: titleText.text!, chunks: [sampleChunk])
+ sampleOptions = Options(uiLang: nil, timeout: nil, uiZIndex: nil)
+ }
+
+ @IBAction func selectPhotoButton(sender: AnyObject) {
+ // Launch the photo picker.
+ imagePicker = UIImagePickerController()
+ imagePicker.delegate = self
+ self.imagePicker.sourceType = .photoLibrary
+ self.imagePicker.allowsEditing = true
+ self.present(self.imagePicker, animated: true, completion: nil)
+ self.photoButton.isEnabled = true
+ }
+
+ @IBAction func takePhotoButton(sender: AnyObject) {
+ if !UIImagePickerController.isSourceTypeAvailable(.camera) {
+ // If there is no camera on the device, disable the button
+ self.cameraButton.backgroundColor = .gray
+            self.cameraButton.isEnabled = false
+
+ } else {
+ // Launch the camera.
+ imagePicker = UIImagePickerController()
+ imagePicker.delegate = self
+ self.imagePicker.sourceType = .camera
+ self.present(self.imagePicker, animated: true, completion: nil)
+ self.cameraButton.isEnabled = true
+ }
+ }
+
+ func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
+ imagePicker.dismiss(animated: true, completion: nil)
+ photoButton.isEnabled = false
+ cameraButton.isEnabled = false
+ self.spinner.startAnimating()
+ activityIndicatorBackground.alpha = 0.6
+
+ // Retrieve the image.
+ let image = (info[.originalImage] as? UIImage)!
+
+ // Retrieve the byte array from image.
+ let imageByteArray = image.jpegData(compressionQuality: 1.0)
+
+ // Call the getTextFromImage function passing in the image the user takes or chooses.
+ getTextFromImage(subscriptionKey: Constants.computerVisionSubscriptionKey, getTextUrl: Constants.computerVisionEndPoint + textURL, pngImage: imageByteArray!, onSuccess: { cognitiveText in
+ print("cognitive text is: \(cognitiveText)")
+ DispatchQueue.main.async {
+ self.photoButton.isEnabled = true
+ self.cameraButton.isEnabled = true
+ }
+
+ // Create content and options with the text from the image.
+ let sampleImageChunk = Chunk(content: cognitiveText, lang: nil, mimeType: nil)
+ let sampleImageContent = Content( Title: "Text from image", chunks: [sampleImageChunk])
+ let sampleImageOptions = Options(uiLang: nil, timeout: nil, uiZIndex: nil)
+
+ // Callback to get token for Immersive Reader.
+ self.getToken(onSuccess: {cognitiveToken in
+
+ DispatchQueue.main.async {
+
+ launchImmersiveReader(navController: self.navigationController!, token: cognitiveToken, subdomain: Constants.subdomain, content: sampleImageContent, options: sampleImageOptions, onSuccess: {
+ self.spinner.stopAnimating()
+ self.activityIndicatorBackground.alpha = 0
+ self.photoButton.isEnabled = true
+ self.cameraButton.isEnabled = true
+
+ }, onFailure: { error in
+                    print("An error occurred launching the Immersive Reader: \(error)")
+ self.spinner.stopAnimating()
+ self.activityIndicatorBackground.alpha = 0
+ self.photoButton.isEnabled = true
+ self.cameraButton.isEnabled = true
+
+ })
+ }
+
+ }, onFailure: { error in
+ DispatchQueue.main.async {
+ self.photoButton.isEnabled = true
+ self.cameraButton.isEnabled = true
+
+ }
+            print("An error occurred retrieving the token: \(error)")
+ })
+
+ }, onFailure: { error in
+ DispatchQueue.main.async {
+ self.photoButton.isEnabled = true
+ self.cameraButton.isEnabled = true
+ }
+
+ })
+ }
+
+ /// Retrieves the token for the Immersive Reader using Azure Active Directory authentication
+ ///
+ /// - Parameters:
+    /// -onSuccess: A closure that gets called when the token is successfully received using Azure Active Directory authentication.
+    /// -theToken: The token for the Immersive Reader received using Azure Active Directory authentication.
+    /// -onFailure: A closure that gets called when the token fails to be obtained from the Azure Active Directory Authentication.
+    /// -theError: The error that occurred when the token fails to be obtained from the Azure Active Directory Authentication.
+ func getToken(onSuccess: @escaping (_ theToken: String) -> Void, onFailure: @escaping ( _ theError: String) -> Void) {
+
+ let tokenForm = "grant_type=client_credentials&resource=https://cognitiveservices.azure.com/&client_id=" + Constants.clientId + "&client_secret=" + Constants.clientSecret
+ let tokenUrl = "https://login.windows.net/" + Constants.tenantId + "/oauth2/token"
+
+ var responseTokenString: String = "0"
+
+ let url = URL(string: tokenUrl)!
+ var request = URLRequest(url: url)
+ request.httpBody = tokenForm.data(using: .utf8)
+ request.httpMethod = "POST"
+
+ let task = URLSession.shared.dataTask(with: request) { data, response, error in
+ guard let data = data,
+ let response = response as? HTTPURLResponse,
+ // Check for networking errors.
+ error == nil else {
+ print("error", error ?? "Unknown error")
+ onFailure("Error")
+ return
+ }
+
+ // Check for http errors.
+ guard (200 ... 299) ~= response.statusCode else {
+ print("statusCode should be 2xx, but is \(response.statusCode)")
+ print("response = \(response)")
+ onFailure(String(response.statusCode))
+ return
+ }
+
+ let responseString = String(data: data, encoding: .utf8)
+ print("responseString = \(String(describing: responseString!))")
+
+ let jsonResponse = try? JSONSerialization.jsonObject(with: data, options: [])
+            guard let jsonDictionary = jsonResponse as? [String: Any] else {
+                onFailure("Error parsing JSON response.")
+                return
+            }
+            guard let responseToken = jsonDictionary["access_token"] as? String else {
+ onFailure("Error retrieving token from JSON response.")
+ return
+ }
+ responseTokenString = responseToken
+ onSuccess(responseTokenString)
+ }
+
+ task.resume()
+ }
+
+ /// Returns the text string after it has been extracted from an Image input.
+ ///
+ /// - Parameters:
+ /// -subscriptionKey: The Azure subscription key.
+ /// -pngImage: Image data in PNG format.
+    /// - Returns: the text extracted from the image, delivered through the `onSuccess` closure.
+ func getTextFromImage(subscriptionKey: String, getTextUrl: String, pngImage: Data, onSuccess: @escaping (_ theToken: String) -> Void, onFailure: @escaping ( _ theError: String) -> Void) {
+
+ let url = URL(string: getTextUrl)!
+ var request = URLRequest(url: url)
+ request.setValue(subscriptionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
+ request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")
+
+ // Two REST API calls are required to extract text. The first call is to submit the image for processing, and the next call is to retrieve the text found in the image.
+
+ // Set the body to the image in byte array format.
+ request.httpBody = pngImage
+
+ request.httpMethod = "POST"
+
+ let task = URLSession.shared.dataTask(with: request) { data, response, error in
+ guard let data = data,
+ let response = response as? HTTPURLResponse,
+ // Check for networking errors.
+ error == nil else {
+ print("error", error ?? "Unknown error")
+ onFailure("Error")
+ return
+ }
+
+ // Check for http errors.
+ guard (200 ... 299) ~= response.statusCode else {
+ print("statusCode should be 2xx, but is \(response.statusCode)")
+ print("response = \(response)")
+ onFailure(String(response.statusCode))
+ return
+ }
+
+ let responseString = String(data: data, encoding: .utf8)
+ print("responseString = \(String(describing: responseString!))")
+
+ // Send the second call to the API. The first API call returns operationLocation which stores the URI for the second REST API call.
+ let operationLocation = response.allHeaderFields["Operation-Location"] as? String
+
+ if (operationLocation == nil) {
+ print("Error retrieving operation location")
+ return
+ }
+
+ // Wait 10 seconds for text recognition to be available as suggested by the Text API documentation.
+ print("Text submitted. Waiting 10 seconds to retrieve the recognized text.")
+ sleep(10)
+
+ // HTTP GET request with the operationLocation url to retrieve the text.
+ let getTextUrl = URL(string: operationLocation!)!
+ var getTextRequest = URLRequest(url: getTextUrl)
+ getTextRequest.setValue(subscriptionKey, forHTTPHeaderField: "Ocp-Apim-Subscription-Key")
+ getTextRequest.httpMethod = "GET"
+
+ // Send the GET request to retrieve the text.
+ let taskGetText = URLSession.shared.dataTask(with: getTextRequest) { data, response, error in
+ guard let data = data,
+ let response = response as? HTTPURLResponse,
+ // Check for networking errors.
+ error == nil else {
+ print("error", error ?? "Unknown error")
+ onFailure("Error")
+ return
+ }
+
+ // Check for http errors.
+ guard (200 ... 299) ~= response.statusCode else {
+ print("statusCode should be 2xx, but is \(response.statusCode)")
+ print("response = \(response)")
+ onFailure(String(response.statusCode))
+ return
+ }
+
+ // Decode the JSON data into an object.
+ let customDecoding = try! JSONDecoder().decode(TextApiResponse.self, from: data)
+
+ // Loop through the lines to get all lines of text and concatenate them together.
+ var textFromImage = ""
+ for textLine in customDecoding.recognitionResults[0].lines {
+ textFromImage = textFromImage + textLine.text + " "
+ }
+
+ onSuccess(textFromImage)
+ }
+ taskGetText.resume()
+
+ }
+
+ task.resume()
+ }
+
+ // Structs used for decoding the Text API JSON response.
+ struct TextApiResponse: Codable {
+ let status: String
+ let recognitionResults: [RecognitionResult]
+ }
+
+ struct RecognitionResult: Codable {
+ let page: Int
+ let clockwiseOrientation: Double
+ let width, height: Int
+ let unit: String
+ let lines: [Line]
+ }
+
+ struct Line: Codable {
+ let boundingBox: [Int]
+ let text: String
+ let words: [Word]
+ }
+
+ struct Word: Codable {
+ let boundingBox: [Int]
+ let text: String
+ let confidence: String?
+ }
+
+}
+```
+ ## Build and run the app Set the archive scheme in Xcode by selecting a simulator or device target.
azure-app-configuration Howto Import Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-import-export-data.md
description: Learn how to import or export configuration data to or from Azure A
- Previously updated : 04/06/2022+ Last updated : 08/24/2022
You can import or export data using either the [Azure portal](https://portal.azu
## Import data
-Import brings configuration data into an App Configuration store from an existing source. Use the import function to migrate data into an App Configuration store or aggregate data from multiple sources. App Configuration supports importing from another App Configuration store, an App Service resource or a configuration file in JSON, YAML or .properties.
+Import brings configuration data into an App Configuration store from an existing source. Use the import function to migrate data into an App Configuration store or aggregate data from multiple sources.
-### [Portal](#tab/azure-portal)
+This guide shows how to import App Configuration data:
+
+- [from a configuration file in Json, Yaml or Properties](#import-data-from-a-configuration-file)
+- [from an App Configuration store](#import-data-from-an-app-configuration-store)
+- [from Azure App Service](#import-data-from-azure-app-service)
+
+### Import data from a configuration file
+
+Follow the steps below to import key-values from a file.
+
+> [!NOTE]
+> Importing feature flags from a file is not supported. If a configuration file contains feature flags, they will be imported as regular key-values automatically.
+
+#### [Portal](#tab/azure-portal)
From the Azure portal, follow these steps: 1. Browse to your App Configuration store, and select **Import/export** from the **Operations** menu.
- :::image type="content" source="./media/import-file.png" alt-text="Screenshot of the Azure portal, importing a file.":::
+ :::image type="content" source="./media/import-export/import-file.png" alt-text="Screenshot of the Azure portal, importing a file.":::
-1. On the **Import** tab, select **Configuration file** under **Source service**. Other options are **App Configuration** and **App Services**.
+1. On the **Import** tab, select **Configuration file** under **Source service**.
1. Fill out the form with the following parameters:
- | Parameter | Description | Examples |
- |--|||
- | For language | Choose the language of the file you're importing between .NET, Java (Spring) and Other. | .NET |
- | File type | Select the type of file you're importing between YAML, properties or JSON. | JSON |
+ | Parameter | Description | Example |
+ |--|--|-|
+ | For language | Choose the language of the file you're importing between .NET, Java (Spring) and Other. | *.NET* |
+ | File type | Select the type of file you're importing between Yaml, Properties and Json. | *Json* |
1. Select the **Folder** icon, and browse to the file to import.
+ > [!NOTE]
+ > A message is displayed on screen, indicating that the file was fetched successfully.
+ 1. Fill out the next part of the form:
- | Parameter | Description | Example |
- |--|--|-|
- | Separator | The separator is the character parsed in your imported configuration file to separate key-values which will be added to your configuration store. Select one of the following options: `.`, `,`,`:`, `;`, `/`, `-`. | : |
- | Prefix | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. | TestApp:Settings:Backgroundcolor |
- | Label | Optional. Select an existing label or enter a new label that will be assigned to your imported key-values. | prod |
- | Content type | Optional. Indicate if the file you're importing is a Key Vault reference or a JSON file. For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md). | JSON (application/json) |
+ | Parameter | Description | Example |
+ |--|--||
+   | Separator | The separator is the character parsed in your imported configuration file to separate key-values that will be added to your configuration store. Select one of the following options: *.*, *,*, *:*, *;*, */*, *-*, *_*, *—*. | *;* |
+ | Prefix | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. The entered prefix will be appended to the beginning of every key you import from this file. | *TestApp:* |
+ | Label | Optional. Select an existing label or enter a new label that will be assigned to your imported key-values. | *prod* |
+ | Content type | Optional. Indicate if you're importing a JSON file or Key Vault references. For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md). | *JSON (application/json)* |
1. Select **Apply** to proceed with the import.
-### [Azure CLI](#tab/azure-cli)
+You've imported key-values from a JSON file, assigned them the label "prod" and the prefix "TestApp:", and used the separator ";". All the keys you've imported have the content type "JSON".
+
+#### [Azure CLI](#tab/azure-cli)
+
+From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md).
-Use the Azure CLI as explained below to import App Configuration data. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). Specify the source of the data: `appconfig`, `appservice` or `file`. Optionally specify a source label with `--src-label` and a label to apply with `--label`.
+1. Enter the import command `az appconfig kv import` and add the following parameters:
-Import all keys and feature flags from a file and apply test label.
+ | Parameter | Description | Example |
+ ||-|-|
+ | `--name` | Enter the name of the App Configuration store you want to import data to. | `my-app-config-store` |
+ | `--source` | Enter `file` to indicate that you're importing app configuration data from a file. | `file` |
+ | `--path` | Enter the local path to the file containing the data you want to import. | `C:/Users/john/Downloads/data.json` |
+ | `--format` | Enter yaml, properties or json to indicate the format of the file you're importing. | `json` |
-```azurecli
-az appconfig kv import --name <your-app-config-store-name> --label test --source file --path D:/abc.json --format json
-```
+1. Optionally also add the following parameters:
-Import all keys with label test and apply test2 label.
+ | Parameter | Description | Example |
+ ||-|--|
+   | `--separator` | Optional. The separator is the delimiter for flattening the key-values to Json/Yaml. It's required for exporting hierarchical structure and will be ignored for property files and feature flags. Select one of the following options: `.`, `,`, `:`, `;`, `/`, `-`, `_`, `—`. | `;` |
+ | `--prefix` | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. This prefix will be appended to the front of imported keys. | `TestApp:` |
+ | `--label` | Optional. Enter a label that will be assigned to your imported key-values. | `prod` |
+ | `--content-type` | Optional. Enter `appconfig/kvset` or `application/json` to state that the imported content consists of a Key Vault reference or a JSON file. | `application/json` |
-```azurecli
-az appconfig kv import --name <your-app-config-store-name> --source appconfig --src-label test --label test2 --src-name <another-app-config-store-name>
-```
+ Example: import all keys and feature flags from a JSON file, apply the label "prod", and append the prefix "TestApp". Add the "application/json" content type.
-Import all keys and apply null label from an App Service application.
+ ```azurecli
+ az appconfig kv import --name my-app-config-store --source file --path D:/abc.json --format json --separator ; --prefix TestApp: --label prod --content-type application/json
+ ```
-For `--appservice-account` use the ARM ID for AppService or use the name of the AppService, assuming it's in the same subscription and resource group as the App Configuration.
+1. The command line displays a list of the upcoming changes. Confirm the import by entering `y`.
-```python
-az appconfig kv import --name <your-app-config-store-name> --source appservice --appservice-account <your-app-service>
-```
+ :::image type="content" source="./media/import-export/continue-import-file-prompt.png" alt-text="Screenshot of the CLI. Import from file confirmation prompt.":::
-For more details and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true).
+You've imported key-values from a JSON file, assigned them the label "prod" and the prefix "TestApp:", and used the separator ";". All the keys you've imported have the content type "JSON".
+
+For more optional parameters and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true).
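+
+To confirm the result, you can optionally list the imported key-values; a minimal check using the store and label from the example above:
+
+```azurecli
+az appconfig kv list --name my-app-config-store --label prod --fields key label content_type
+```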
+++
+### Import data from an App Configuration store
+
+You can import values from one App Configuration store to another App Configuration store, or you can import values from one App Configuration store to the same App Configuration store in order to duplicate its values and apply different parameters, such as new label or content type.
+
+Follow the steps below to import key-values and feature flags from an Azure App Configuration store.
+
+#### [Portal](#tab/azure-portal)
+
+From the Azure portal, follow these steps:
+
+1. Browse to your App Configuration store, and select **Import/export** from the **Operations** menu.
+
+ :::image type="content" source="./media/import-export/import-app-configuration.png" alt-text="Screenshot of the Azure portal, importing from an App Configuration store.":::
+
+1. On the **Import** tab, select **App Configuration** under **Source service**.
+
+1. Select **Select resource**, fill out the form with the following parameters, and select **Apply**:
+
+ | Parameter | Description | Example |
+ |-|-|--|
+ | Subscription | Your current subscription is selected by default. | *my-subscription* |
+ | Resource group | Select a resource group that contains the App Configuration store with configuration to import. Your current resource group is selected by default. | *my-resource-group* |
+ | Resource | Select the App Configuration store that contains the configuration you want to import. | *my-other-app-config-store* |
+
+ > [!NOTE]
+   > The message "Access keys fetched successfully" indicates that the connection with the App Configuration store was successful.
+
+1. Fill out the next part of the form:
+
+ | Parameter | Description | Example |
+ |--|-||
+ | From label | Select at least one label to import values with the corresponding labels. **Select all** will import keys with any label, and **(No label)** will restrict the import to keys with no label. | *prod* |
+ | At a specific time | Optional. Fill out to import key-values from a specific point in time. This is the point in time of the resource you selected in the previous step. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* |
+ | Override default key-value labels | Optional. By default, imported items use their current label. Check the box and enter a label to override these defaults with a custom label. | *new* |
+   | Override default key-value content type | Optional. By default, imported items use their current content type. Check the box and select **Key Vault Reference** or **JSON (application/json)** under **Content type** to state that the imported content consists of a Key Vault reference or a JSON file. Content type can only be overridden for imported key-values. The default content type for feature flags is "application/vnd.microsoft.appconfig.ff+json;charset=utf-8" and isn't updated by this parameter. | *JSON (application/json)* |
+
+1. Select **Apply** to proceed with the import.
+
+You've imported keys and feature flags with the "prod" label from an App Configuration store as of July 28, 2022 at 12 AM, and assigned them the label "new". All the keys you've imported have the content type "JSON".
+
+#### [Azure CLI](#tab/azure-cli)
+
+From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md).
+
+1. Enter the import command `az appconfig kv import` and enter the following parameters:
+
+ | Parameter | Description | Example |
+ |--|-|-|
+ | `--name` | Enter the name of the App Configuration store you want to import data into | `my-app-config-store` |
+ | `--source` | Enter `appconfig` to indicate that you're importing data from an App Configuration store. | `appconfig` |
+ | `--src-name` | Enter the name of the App Configuration store you want to import data from. | `my-source-app-config` |
+ | `--src-label`| Restrict your import to keys with a specific label. If you don't use this parameter, only keys with a null label will be imported. Supports star sign as filter: enter `*` for all labels; `abc*` for all labels with abc as prefix.| `prod` |
+
+1. Optionally add the following parameters:
+
+ | Parameter | Description | Example |
+ ||--||
+ | `--label` | Optional. Enter a label that will be assigned to your imported key-values. | `new` |
+   | `--content-type` | Optional. Enter `appconfig/kvset` or `application/json` to state that the imported content consists of a Key Vault reference or a JSON file. Content type can only be overridden for imported key-values. The default content type for feature flags is "application/vnd.microsoft.appconfig.ff+json;charset=utf-8" and isn't updated by this parameter. | `application/json` |
+
+   Example: import key-values and feature flags with the label "prod" from another App Configuration store, and assign them the label "new". Add the "application/json" content type.
+
+ ```azurecli
+ az appconfig kv import --name my-app-config-store --source appconfig --src-name my-source-app-config --src-label prod --label new --content-type application/json
+ ```
+
+1. The command line displays a list of the upcoming changes. Confirm the import by entering `y`.
+
+ :::image type="content" source="./media/import-export/continue-import-app-configuration-prompt.png" alt-text="Screenshot of the CLI. Import from App Configuration confirmation prompt.":::
+
+You've imported key-values and feature flags with the label "prod" from another App Configuration store and assigned them the label "new". All the keys you've imported have the content type "JSON".
+
+For more optional parameters and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true).
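+
+As with the file import, you can optionally list the key-values under their new label to confirm the copy:
+
+```azurecli
+az appconfig kv list --name my-app-config-store --label new --fields key label content_type
+```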
+++
+### Import data from Azure App Service
+
+Follow the steps below to import key-values from Azure App Service.
+
+> [!NOTE]
+> App Service doesn't currently support feature flags. All feature flags imported to App Service are converted to key-values automatically. Your App Service resources can only contain key-values.
+
+#### [Portal](#tab/azure-portal)
+
+From the Azure portal:
+
+1. Browse to your App Configuration store, and select **Import/export** from the **Operations** menu.
+
+ :::image type="content" source="./media/import-export/import-app-service.png" alt-text="Screenshot of the Azure portal, importing from App Service.":::
+
+1. On the **Import** tab, select **App Services** under **Source service**.
+
+1. Select **Select resource**, fill out the form with the following parameters, and select **Apply**:
+
+ | Parameter | Description | Example |
+ |-|-|-|
+ | Subscription | Your current subscription is selected by default. | *my-subscription* |
+ | Resource group | Select a resource group that contains the App Service with configuration to import. | *my-resource-group* |
+ | Resource | Select the App Service that contains the configuration you want to import. | *my-app-service* |
+
+ > [!NOTE]
+ > A message is displayed, indicating the number of keys that were successfully fetched from the source App Service resource.
+
+1. Fill out the next part of the form:
+
+ | Parameter | Description | Example |
+ |--|||
+ | Prefix | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. This prefix will be appended to the front of imported keys. | *TestApp:* |
+ | Label | Optional. Select an existing label or enter a new label that will be assigned to your imported key-values. | *prod* |
+ | Content type | Optional. Indicate if the file you're importing is a Key Vault reference or a JSON file. For more information about Key Vault references, go to [Use Key Vault references in an ASP.NET Core app](./use-key-vault-references-dotnet-core.md). | *JSON (application/json)* |
+
+1. Select **Apply** to proceed with the import.
+
+You've imported all application settings from an App Service as key-values, assigned them the label "prod" and the prefix "TestApp:", and set the content type of all imported keys to "JSON".
+
+#### [Azure CLI](#tab/azure-cli)
+
+From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md).
+
+1. Enter the import command `az appconfig kv import` and add the following parameters:
+
+ | Parameter | Description | Example |
+ ||--||
+ | `--name` | Enter the name of the App Configuration store you want to import data to. | `my-app-config-store` |
+ | `--source` | Enter `appservice` to indicate that you're importing app configuration data from Azure App Service. | `appservice` |
+ | `--appservice-account` | Enter the App Service's ARM ID or use the name of the App Service, if it's in the same subscription and resource group as the App Configuration. | `/subscriptions/123/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-app-service` or `my-app-service` |
+
+1. Optionally also add the following parameters:
+
+ | Parameter | Description | Example |
+ ||||
+ | `--prefix` | Optional. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. This prefix will be appended to the front of imported keys. | `TestApp:` |
+ | `--label` | Optional. Enter a label that will be assigned to your imported key-values. If you don't specify a label, the null label will be assigned to your key-values. | `prod` |
+   | `--content-type` | Optional. Enter `appconfig/kvset` or `application/json` to state that the imported content consists of a Key Vault reference or a JSON file. | `application/json` |
+
+   To get the value for `--appservice-account`, use the command `az webapp show --resource-group <resource-group> --name <resource-name>`, as shown in the sketch after these steps.
+
+ Example: import all application settings from your App Service as key-values with the label "prod", to your App Configuration store, and add a "TestApp:" prefix.
+
+ ```azurecli
+ az appconfig kv import --name my-app-config-store --source appservice --appservice-account /subscriptions/123/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-app-service --label prod --prefix TestApp:
+ ```
+
+1. The command line displays a list of the upcoming changes. Confirm the import by entering `y`.
+
+ :::image type="content" source="./media/import-export/continue-import-app-service-prompt.png" alt-text="Screenshot of the CLI. Import from App Service confirmation prompt.":::
+
+You've imported all application settings from your App Service as key-values, assigned them the label "prod", and added a "TestApp:" prefix.
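+
+To verify the result, you can list the imported key-values. The following is a sketch that reuses the placeholder store name and label from the example above; `az appconfig kv list` returns the key-values that match the filters:
+
+```azurecli
+# List the key-values that were imported with the "prod" label and the "TestApp:" prefix.
+az appconfig kv list --name my-app-config-store --label prod --key "TestApp:*"
+```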
+
+For more optional parameters and examples, go to [az appconfig kv import](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-import&preserve-view=true).
## Export data
-Export writes configuration data stored in App Configuration to another destination. Use the export function, for example, to save data from an App Configuration store to a file that can be embedded in your application code during deployment. You can export data from an App Configuration store, an App Service resource or a configuration file in JSON, YAML or .properties.
+Export writes configuration data stored in App Configuration to another destination. Use the export function, for example, to save data from an App Configuration store to a file that can be embedded in your application code during deployment.
+
+This guide shows how to export App Configuration data:
+
+- [to a configuration file in JSON, YAML, or properties format](#export-data-to-a-configuration-file)
+- [to an App Configuration store](#export-data-to-an-app-configuration-store)
+- [to an Azure App Service resource](#export-data-to-azure-app-service)
+
+### Export data to a configuration file
+
+Follow the steps below to export configuration data from an App Configuration store to a JSON, YAML, or properties file.
+
+> [!NOTE]
+> Exporting feature flags from an App Configuration store to a configuration file is currently only supported in the CLI.
### [Portal](#tab/azure-portal)
From the [Azure portal](https://portal.azure.com), follow these steps:
1. Browse to your App Configuration store, and select **Import/export**.
-1. On the **Export** tab, select **Target service** > **Configuration file**.
+ :::image type="content" source="./media/import-export/export-file.png" alt-text="Screenshot of the Azure portal, exporting a file":::
+
+1. On the **Export** tab, select **Configuration file** under **Target service**.
1. Fill out the form with the following parameters:
- | Parameter | Description | Example |
- ||--|-|
- | Prefix | Optional. A key prefix is the beginning part of a key. Enter a prefix to restrict your export to key-values with the specified prefix. | TestApp:Settings:Backgroundcolor |
- | From label | Optional. Select an existing label to restrict your export to key-values with a specific label. If you don't select a label, only key-values without a label will be exported. See note below. | prod |
- | At a specific time | Optional. Fill out to export key-values from a specific point in time. | 01/28/2021 12:00:00 AM |
- | File type | Select the type of file you're importing between YAML, properties or JSON. | JSON |
- | Separator | The separator is the character that will be used in the configuration file to separate the exported key-values from one another. Select one of the following options: `.`, `,`,`:`, `;`, `/`, `-`. | ; |
+ | Parameter | Description | Example |
+ |--|--|--|
+ | Prefix | Optional. This prefix will be trimmed from the keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. | *TestApp:* |
+ | From label | Select an existing label to restrict your export to key-values with a specific label. If you don't select a label, by default only key-values with the "No Label" label will be exported. See note below. | *prod* |
+   | At a specific time | Optional. Fill out to export key-values from a specific point in time. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* |
+   | File type | Select the type of file you're exporting: JSON, YAML, or properties. | *JSON* |
+   | Separator | The separator is the delimiter for flattening the key-values to JSON/YAML. It supports the configuration's hierarchical structure and doesn't apply to property files and feature flags. Select one of the following options: *.*, *,*, *:*, *;*, */*, *-*, *_*, *—*, or *(No separator)*. | *;* |
> [!IMPORTANT]
- > If you don't select a label, only keys without labels will be exported. To export a key-value with a label, you must select its label. Note that you can only select one label per export, so to export keys with multiple labels, you may need to export multiple times, once per label you select.
+   > If you don't select a *From label*, only keys without labels will be exported. To export a key-value with a label, you must select its label. You can only select one label per export in the portal; to export key-values with all labels, use the Azure CLI.
1. Select **Export** to finish the export.
- :::image type="content" source="./media/export-file-complete.png" alt-text="Screenshot of the Azure portal, exporting a file":::
+You've exported key-values that have the "prod" label to a configuration file, at their state from 07/28/2022 12:00:00 AM, and have trimmed the prefix "TestApp". Values are separated by ";" in the file.
### [Azure CLI](#tab/azure-cli)
-Use the Azure CLI as explained below to export configurations from App Configuration to another place. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md). Specify the destination of the data: `appconfig`, `appservice` or `file`. Specify a label for the data you want to export with `--label` or export data with no label by not entering a label.
+From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md).
+
+1. Enter the export command `az appconfig kv export` and add the following parameters:
+
+ | Parameter | Description | Example |
+ |--|-|-|
+ | `--name` | Enter the name of the App Configuration store that contains the key-values you want to export. | `my-app-config-store` |
+ | `--destination` | Enter `file` to indicate that you're exporting data to a file. | `file` |
+ | `--path` | Enter the path where you want to save the file. | `C:/Users/john/Downloads/data.json` |
+ | `--format` | Enter `yaml`, `properties` or `json` to indicate the format of the file you want to export. | `json` |
+   | `--label` | Enter a label to export keys and feature flags with this label. If you don't specify a label, by default, you will only export keys and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels into account. | `prod` |
+
+ > [!IMPORTANT]
+ > If you don't select a label, only keys without labels will be exported. To export a key-value with a label, you must select its label.
+
+1. Optionally also add the following parameters:
+
+ | Parameter | Description | Example |
+ ||||
+   | `--separator` | Optional. The separator is the delimiter for flattening the key-values to JSON/YAML. It's required for exporting hierarchical structure and will be ignored for property files and feature flags. Select one of the following options: `.`, `,`, `:`, `;`, `/`, `-`, `_`, `—`. | `;` |
+ | `--prefix` | Optional. Prefix to be trimmed from keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. Prefix will be ignored for feature flags. | `TestApp:` |
+
+ Example: export all keys and feature flags with label "prod" to a JSON file.
+
+ ```azurecli
+   az appconfig kv export --name my-app-config-store --label prod --destination file --path D:/abc.json --format json --separator ';' --prefix TestApp:
+ ```
+
+1. The command line displays a list of the key-values to be exported to the file. Confirm the export by selecting `y`.
+
+ :::image type="content" source="./media/import-export/continue-export-file-prompt.png" alt-text="Screenshot of the CLI. Export to a file confirmation prompt.":::
+
+You've exported key-values and feature flags that have the "prod" label to a configuration file, and have trimmed the prefix "TestApp". Values are separated by ";" in the file.
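+
+Unlike the portal, the CLI can export key-values with all labels in one pass. As a sketch, reusing the placeholder store name from above, you could run:
+
+```azurecli
+# Export key-values with every label, including the null label, to a single JSON file.
+az appconfig kv export --name my-app-config-store --label '*' --destination file --path ./all-labels.json --format json --separator ':'
+```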
+
+For more optional parameters and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export&preserve-view=true).
+++
+### Export data to an App Configuration store
+
+Follow the steps below to export key-values and feature flags to an Azure App Configuration store.
-> [!IMPORTANT]
-> If the keys you want to export have labels, do select the corresponding labels. If you don't select a label, only keys without labels will be exported.
+You can export values from one App Configuration store to another App Configuration store, or back to the same store to duplicate its values and apply different parameters, such as a new label or content type.
-Export all keys and feature flags with label test to a json file.
+#### [Portal](#tab/azure-portal)
+
+From the Azure portal, follow these steps:
+
+1. Browse to the App Configuration store that contains the data you want to export, and select **Import/export** from the **Operations** menu.
+
+ :::image type="content" source="./media/import-export/export-app-configuration.png" alt-text="Screenshot of the Azure portal, exporting from an App Configuration store.":::
+
+1. On the **Export** tab, select **App Configuration** under **Target service**.
+
+1. Select **Select resource**, fill out the form with the following parameters, and select **Apply**:
+
+ | Parameter | Description | Example |
+ |-|-|--|
+ | Subscription | Your current subscription is selected by default. | *my-subscription* |
+   | Resource group | Select the resource group that contains the App Configuration store you want to export to. | *my-resource-group* |
+   | Resource | Select the App Configuration store you want to export the configuration to. | *my-app-config-store* |
+
+1. The page now displays the selected **Target service** and resource ID. The **Select resource** action lets you switch to another target App Configuration store.
+
+ > [!NOTE]
+ > A message is displayed on screen, indicating that the keys were fetched successfully.
+
+1. Fill out the next part of the form:
+
+ | Parameter | Description | Example |
+ |--|-||
+ | From label | Select at least one label to export values with the corresponding labels. **Select all** will export keys with any label, and **(No label)** will restrict the export to keys with no label. | *prod* |
+   | At a specific time | Optional. Fill out to export key-values from a specific point in time. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* |
+   | Override default key-value labels | Optional. By default, exported items keep their current label. Check the box and enter a label to override these defaults with a custom label. | *new* |
+
+1. Select **Apply** to proceed with the export.
+
+You've exported key-values and feature flags that have the label "prod" from an App Configuration store, at their state from 07/28/2022 12:00:00 AM, and have assigned them the label "new".
+
+#### [Azure CLI](#tab/azure-cli)
+
+From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md).
+
+1. Enter the export command `az appconfig kv export` and enter the following parameters:
+
+ | Parameter | Description | Example |
+ ||--|--|
+ | `--name` | Enter the name of the App Configuration store that contains the key-values you want to export. | `my-app-config-store` |
+ | `--destination` | Enter `appconfig` to indicate that you're exporting data to an App Configuration store. | `appconfig` |
+ | `--dest-name` | Enter the name of the App Configuration store you want to export data to. | `my-other-app-config-store` |
+   | `--label` | Enter a label to export keys and feature flags with this label. If you don't specify a label, by default, you will only export keys and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels into account. | `prod` |
+
+ > [!IMPORTANT]
+   > If the keys you want to export have labels, you must use the `--label` parameter and enter the corresponding labels. If you don't select a label, only keys without labels will be exported. Use a comma (`,`) to select several labels or use `*` to include all labels, including the null label (no label).
-```python
-az appconfig kv export --name <your-app-config-store-name> --label test --destination file --path D:/abc.json --format json
-```
+1. Optionally also add the following parameter:
-Export all keys with null label excluding feature flags to a json file.
+ | Parameter | Description | Example |
+ ||--|--|
+   | `--dest-label` | Optional. Enter a destination label to assign to the exported key-values. | `new` |
-```python
-az appconfig kv export --name <your-app-config-store-name> --destination file --path D:/abc.json --format json --skip-features
-```
+ Example: export keys and feature flags with the label "prod" to another App Configuration store and add the destination label "new".
-Export all keys and feature flags with all labels to another App Configuration.
+ ```azurecli
+ az appconfig kv export --name my-app-config-store --destination appconfig --dest-name my-other-app-config-store --dest-label new --label prod
+ ```
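+
+   As a variation, you can also duplicate key-values within the same store under a different label, as mentioned above. This is a sketch using the same placeholder store name; the "staging" label is only an illustration:
+
+   ```azurecli
+   # Copy the "prod" key-values back into the same store, assigning them the label "staging".
+   az appconfig kv export --name my-app-config-store --destination appconfig --dest-name my-app-config-store --label prod --dest-label staging
+   ```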
-```python
-az appconfig kv export --name <your-app-config-store-name> --destination appconfig --dest-name <another-app-config-store-name> --key * --label * --preserve-labels
-```
+1. The command line displays a list of the key-values to be exported to the destination store. Confirm the export by selecting `y`.
-Export all keys and feature flags with all labels to another App Configuration and overwrite destination labels.
+ :::image type="content" source="./media/import-export/continue-export-app-configuration-prompt.png" alt-text="Screenshot of the CLI. Export to App Configuration confirmation prompt.":::
-```python
-az appconfig kv export --name <your-app-config-store-name> --destination appconfig --dest-name <another-app-config-store-name> --key * --label * --dest-label ExportedKeys
-```
+You've exported key-values and feature flags that have the label "prod" from an App Configuration store and have assigned them the label "new".
-For more details and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export&preserve-view=true).
+For more optional parameters and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export&preserve-view=true).
+### Export data to Azure App Service
+
+Follow the steps below to export key-values to Azure App Service.
+
+> [!NOTE]
+> Exporting feature flags to App Service is currently not supported.
+
+#### [Portal](#tab/azure-portal)
+
+From the Azure portal, follow these steps:
+
+1. Browse to your App Configuration store, and select **Import/export** from the **Operations** menu.
+
+ :::image type="content" source="./media/import-export/export-app-service.png" alt-text="Screenshot of the Azure portal, exporting from App Service.":::
+
+1. On the **Export** tab, select **App Services** under **Target service**.
+
+1. Select **Select resource**, fill out the form with the following parameters, and select **Apply**:
+
+ | Parameter | Description | Example |
+ |-|-|--|
+ | Subscription | Your current subscription is selected by default. | *my-subscription* |
+ | Resource group | Select a resource group that contains the App Service with configuration to export. | *my-resource-group* |
+ | Resource | Select the App Service that contains the configuration you want to export. | *my-app-service* |
+
+1. The page now displays the selected **Target service** and resource ID. The **Select resource** action lets you switch to another source App Service.
+
+1. Optionally fill out the next part of the form:
+
+ | Parameter | Description | Example |
+ |--|-||
+   | Prefix | Optional. This prefix will be trimmed from the exported keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. Prefix will be ignored for feature flags. | *TestApp:* |
+   | From label | Select an existing label to restrict your export to key-values with a specific label. If you don't select a label, only key-values with the "No label" label will be exported. See note below. | *prod* |
+   | At a specific time | Optional. Fill out to export key-values from a specific point in time. Format: "YYYY-MM-DDThh:mm:ssZ". | *07/28/2022 12:00:00 AM* |
+   | Content type | Optional. Check the box **Override default key-value content types** and select **Key Vault Reference** or **JSON** under **Content type** to state that the exported content consists of a Key Vault reference or a JSON value. | *JSON (application/json)* |
+
+ > [!IMPORTANT]
+   > If the keys you want to export have labels, select the corresponding label under **From label**. If you don't select a label, only keys without labels will be exported.
+
+1. Select **Apply** to proceed with the export.
+
+You've exported key-values that have the "prod" label to an App Service resource, at their state from 07/28/2022 12:00:00 AM, and have trimmed the prefix "TestApp". The keys were exported with the JSON content type.
+
+#### [Azure CLI](#tab/azure-cli)
+
+From the Azure CLI, follow the steps below. If you don't have the Azure CLI installed locally, you can optionally use [Azure Cloud Shell](../cloud-shell/overview.md).
+
+1. Enter the export command `az appconfig kv export` and enter the following parameters:
+
+ | Parameter | Description | Example |
+ ||--||
+ | `--name` | Enter the name of the App Configuration store that contains the key-values you want to export. | `my-app-config-store` |
+ | `--destination` | Enter `appservice` to indicate that you're exporting data to App Service. | `appservice` |
+ | `--appservice-account` | Enter the App Service's ARM ID or use the name of the App Service, if it's in the same subscription and resource group as the App Configuration. | `/subscriptions/123/resourceGroups/my-as-resource-group/providers/Microsoft.Web/sites/my-app-service` or `my-app-service` |
+   | `--label` | Enter a label to export keys and feature flags with this label. If you don't specify a label, by default, you will only export keys and feature flags with no label. You can enter one label, enter several labels by separating them with `,`, or use `*` to take all of the labels into account. | `prod` |
+
+ > [!IMPORTANT]
+   > If the keys you want to export have labels, you must use the `--label` parameter and enter the corresponding labels. If you don't select a label, only keys without labels will be exported. Use a comma (`,`) to select several labels or use `*` to include all labels, including the null label (no label).
+
+ To get the value for `--appservice-account`, use the command `az webapp show --resource-group <resource-group> --name <resource-name>`.
+
+1. Optionally also add a prefix:
+
+ | Parameter | Description | Example |
+ ||-|--|
+ | `--prefix` | Optional. Prefix to be trimmed from keys. A key prefix is the beginning part of a key. Prefixes can be used to manage groups of keys in a configuration store. | `TestApp:` |
+
+ Example: export all key-values with the label "prod" to an App Service application and trim the prefix "TestApp".
+
+ ```azurecli
+   az appconfig kv export --name my-app-config-store --destination appservice --appservice-account /subscriptions/123/resourceGroups/my-resource-group/providers/Microsoft.Web/sites/my-app-service --label prod --prefix TestApp:
+ ```
+
+1. The command line displays a list of the key-values to be exported to App Service. Confirm the export by selecting `y`.
+
+ :::image type="content" source="./media/import-export/continue-export-app-service-prompt.png" alt-text="Screenshot of the CLI. Export to App Service confirmation prompt.":::
+
+You've exported all keys with the label "prod" to an Azure App Service resource and have trimmed the prefix "TestApp:".
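+
+To confirm the settings landed in App Service, you can list the web app's application settings. This is a sketch using the placeholder resource names from the example above:
+
+```azurecli
+# List the application settings of the App Service to check the exported key-values.
+az webapp config appsettings list --resource-group my-resource-group --name my-app-service --output table
+```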
+
+For more optional parameters and examples, go to [az appconfig kv export](/cli/azure/appconfig/kv?view=azure-cli-latest#az-appconfig-kv-export&preserve-view=true).
+++
+## Error messages
+
+You may encounter the following error messages when importing or exporting App Configuration keys:
+
+- **Selected file must be between 1 and 2097152 bytes.**: Your file is too large. Select a smaller file.
+- **Public access is disabled for your store or you are accessing from a private endpoint that is not in the store's private endpoint configurations**. To import keys from an App Configuration store, you need to have access to that store. If necessary, enable public access for the source store (see the example below) or access it from an approved private endpoint. If you just enabled public access, wait up to 5 minutes for the cache to refresh.
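+
+If you choose to enable public access on the source store, the following is a sketch of the CLI command (assuming the `--enable-public-network` parameter, which is available in recent Azure CLI versions, and placeholder resource names):
+
+```azurecli
+# Allow public network access to the source App Configuration store.
+az appconfig update --name my-app-config-store --resource-group my-resource-group --enable-public-network true
+```
+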
+ ## Next steps > [!div class="nextstepaction"]
-> [Create an ASP.NET Core web app](./quickstart-aspnet-core-app.md)
+> [Back up App Configuration stores automatically](./howto-backup-config-store.md)
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
A conceptual overview of this feature is available in [Cluster connect - Azure A
``` ```console
- TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed $'s/$/\\\n/g')
+ TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed 's/$/\\\n/g')
``` 1. Get the token to output to console
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
This topic describes the basic requirements for installing the Connected Machine
Azure Arc-enabled servers support the installation of the Connected Machine agent on physical servers and virtual machines hosted outside of Azure. This includes support for virtual machines running on platforms like:
-* VMware
+* VMware (including Azure VMware Solution)
* Azure Stack HCI * Other cloud environments
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
Azure Cache for Redis is available in these tiers:
| Basic | An OSS Redis cache running on a single VM. This tier has no service-level agreement (SLA) and is ideal for development/test and non-critical workloads. | | Standard | An OSS Redis cache running on two VMs in a replicated configuration. | | Premium | High-performance OSS Redis caches. This tier offers higher throughput, lower latency, better availability, and more features. Premium caches are deployed on more powerful VMs compared to the VMs for Basic or Standard caches. |
-| Enterprise | High-performance caches powered by Redis Inc.'s Redis Enterprise software. This tier supports Redis modules including RediSearch, RedisBloom, and RedisTimeSeries. Also, it offers even higher availability than the Premium tier. |
+| Enterprise | High-performance caches powered by Redis Inc.'s Redis Enterprise software. This tier supports Redis modules including RediSearch, RedisBloom, RedisJSON, and RedisTimeSeries. Also, it offers even higher availability than the Premium tier. |
| Enterprise Flash | Cost-effective large caches powered by Redis Inc.'s Redis Enterprise software. This tier extends Redis data storage to non-volatile memory, which is cheaper than DRAM, on a VM. It reduces the overall per-GB memory cost. | ### Feature comparison
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| [Data persistence](cache-how-to-premium-persistence.md) |-|-|Γ£ö|Preview|Preview| | [Zone redundancy](cache-how-to-zone-redundancy.md) |-|-|Γ£ö|Γ£ö|Γ£ö| | [Geo-replication](cache-how-to-geo-replication.md) |-|-|Γ£ö|Γ£ö|Γ£ö|
-| [Redis Modules](#choosing-the-right-tier) |-|-|-|Γ£ö|-|
+| [Redis Modules](cache-redis-modules.md) |-|-|-|Γ£ö|Preview|
| [Import/Export](cache-how-to-import-export-data.md) |-|-|Γ£ö|Γ£ö|Γ£ö| | [Reboot](cache-administration.md#reboot) |Γ£ö|Γ£ö|Γ£ö|-|-| | [Scheduled updates](cache-administration.md#schedule-updates) |Γ£ö|Γ£ö|Γ£ö|-|-|
+> [!NOTE]
+> The Enterprise Flash tier currently supports only the RedisJSON and RediSearch modules in preview.
+ ### Choosing the right tier Consider the following options when choosing an Azure Cache for Redis tier:
Consider the following options when choosing an Azure Cache for Redis tier:
- **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss. - **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Data persistence can be enabled through Azure portal and CLI. For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md). - **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).-- **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/) and [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/). These modules add new data types and functionality to Redis.
+- **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/), [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/), and [RedisJSON](https://docs.redis.com/latest/modules/redisjson/) (preview). These modules add new data types and functionality to Redis.
You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation).
azure-fluid-relay Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-fluid-relay/resources/support.md
# Help and support options for Azure Fluid Relay
-If you have an issue or question involving Azure Fluid Relay, the following options are available.
+If you have an issue or question involving Azure Fluid Relay, the following options are available:
-## Check out frequently asked questions
-
-You can see if your question is already answered on our Frequently Asked Questions [page](faq.md).
+> [!IMPORTANT]
+> For ongoing service issues that are time sensitive, creating an Azure support request is the preferred option.
## Create an Azure support request
-With Azure, there are many [support options and plans](https://azure.microsoft.com/support/plans/) available, which you can explore and review. You can create a support ticket in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+If you are running into an ongoing service issue that is impacting your end users, creating a support request is the best way to obtain live-site support. Depending on the degree of impact, setting the right severity level for the support case will get you to the technical support needed. With Azure, there are many [support options and plans](https://azure.microsoft.com/support/plans/) available, which you can explore and review. You can create a support ticket in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview).
+
+## Check out frequently asked questions
+
+You can see if your question is already answered on our Frequently Asked Questions [page](faq.md).
## Post a question to Microsoft Q&A
azure-fluid-relay Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-glossary-cloud-terminology.md
editor: '' ms.assetid: d7ac12f7-24b5-4bcd-9e4d-3d76fbd8d297-++ na
azure-government Documentation Government Overview Itar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-itar.md
recommendations: false Previously updated : 04/06/2022 Last updated : 08/30/2022 # Azure support for export controls
The EAR is applicable to dual-use items that have both commercial and military a
Both Azure and Azure Government can help you meet your EAR compliance requirements. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation.
-Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable ΓÇô there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.**
+Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable ΓÇô there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys)
You're responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you're responsible for designing your applications to apply end-to-end data encryption that meets EAR requirements. Microsoft doesn't inspect, approve, or monitor your applications deployed on Azure or Azure Government.
The US Department of State has export control authority over defense articles, s
DDTC [revised the ITAR rules](https://www.federalregister.gov/documents/2019/12/26/2019-27438/international-traffic-in-arms-regulations-creation-of-definition-of-activities-that-are-not-exports) effective 25 March 2020 to align them more closely with the EAR. These ITAR revisions introduced an end-to-end data encryption carve-out that incorporated many of the same terms that the US Department of Commerce adopted in 2016 for the EAR. Specifically, the revised ITAR rules state that activities that don't constitute exports, re-exports, re-transfers, or temporary imports include (among other activities) the sending, taking, or storing of technical data that is 1) unclassified, 2) secured using end-to-end encryption, 3) secured using FIPS 140 compliant cryptographic modules as prescribed in the regulations, 4) not intentionally sent to a person in or stored in a [country proscribed in § 126.1](https://www.ecfr.gov/current/title-22/chapter-I/subchapter-M/part-126?toc=1) or the Russian Federation, and 5) not sent from a country proscribed in § 126.1 or the Russian Federation. Moreover, DDTC clarified that data in-transit via the Internet isn't deemed to be stored. End-to-end encryption implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party.
-There's no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation. Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable ΓÇô there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.**
+There's no ITAR compliance certification; however, both Azure and Azure Government can help you meet your ITAR compliance obligations. Except for the Azure region in Hong Kong SAR, Azure and Azure Government datacenters aren't located in proscribed countries or in the Russian Federation. Azure services rely on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provide you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Keys generated inside the Azure Key Vault HSMs aren't exportable ΓÇô there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys)
You're responsible for choosing Azure or Azure Government regions for deploying your applications and data. Moreover, you're responsible for designing your applications to apply end-to-end data encryption that meets ITAR requirements. Microsoft doesn't inspect, approve, or monitor your applications deployed on Azure or Azure Government.
The [Office of Foreign Assets Control](https://home.treasury.gov/policy-issues/o
The OFAC defines prohibited transactions as trade or financial transactions and other dealings in which US persons may not engage unless authorized by OFAC or expressly exempt by statute. For web-based interactions, see [FAQ No. 73](https://home.treasury.gov/policy-issues/financial-sanctions/faqs/73) for general guidance released by OFAC, which specifies, for example, that &#8220;Firms that facilitate or engage in e-commerce should do their best to know their customers directly.&#8221;
-As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), &#8220;Microsoft doesn't control or limit the regions from which customer or customerΓÇÖs end users may access or move customer data.&#8221; For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries. For example, a sanctions target isn't allowed to provision Azure services. OFAC hasn't issued guidance, like the guidance provided by BIS for the EAR, that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications, including web sites, deployed on Azure. Microsoft doesn't block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based in IP table ranges, they also acknowledge that this approach doesn't fully address an internetΓÇÖs firm compliance risks. Therefore, OFAC recommends that e-commerce firms should know their customers directly. Microsoft isn't responsible for and doesn't have the means to know directly the end users that interact with your applications deployed on Azure.
+As stated in the Microsoft Online Services Terms [Data Protection Addendum](https://aka.ms/dpa) (DPA), &#8220;Microsoft doesn't control or limit the regions from which customer or customerΓÇÖs end users may access or move customer data.&#8221; For Microsoft online services, Microsoft conducts due diligence to prevent transactions with entities from OFAC embargoed countries. For example, a sanctions target isn't allowed to provision Azure services. OFAC hasn't issued guidance, like the guidance provided by BIS for the EAR that draws a distinction between cloud service providers and customers when it comes to deemed export. Therefore, it would be **your responsibility to exclude sanctions targets from online transactions** involving your applications, including web sites, deployed on Azure. Microsoft doesn't block network traffic to your web sites deployed on Azure. Even though OFAC mentions that customers can restrict access based in IP table ranges, they also acknowledge that this approach doesn't fully address an internetΓÇÖs firm compliance risks. Therefore, OFAC recommends that e-commerce firms should know their customers directly. Microsoft isn't responsible for and doesn't have the means to know directly the end users that interact with your applications deployed on Azure.
OFAC sanctions are in place to prevent &#8220;conducting business with a sanctions target&#8221;, that is, preventing transactions involving trade, payments, financial instruments, and so on. OFAC sanctions aren't intended to prevent a resident of a proscribed country from viewing a public web site.
OFAC sanctions are in place to prevent &#8220;conducting business with a sanctio
You should assess carefully how your use of Azure may implicate US export controls, and determine whether any of the data you want to store or process in the cloud may be subject to export controls. Microsoft provides you with contractual commitments, operational processes, and technical features to help you meet your export control obligations when using Azure. The following Azure features are available to help you manage potential export control risks: - **Ability to control data location** ΓÇô You have visibility as to where your [data is stored](https://azure.microsoft.com/global-infrastructure/data-residency/), and robust tools to restrict data storage to a single geography, region, or country. For example, you may therefore ensure that data is stored in the United States or your country of choice and minimize transfer of controlled technology/technical data outside the target country. Your data isn't *intentionally stored* in a non-conforming location, consistent with the EAR and ITAR rules.-- **End-to-end encryption** ΓÇô Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party. Azure relies on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provides you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.
+- **End-to-end encryption** ΓÇô Implies the data is always kept encrypted between the originator and intended recipient, and the means of decryption isn't provided to any third party. Azure relies on [FIPS 140](/azure/compliance/offerings/offering-fips-140-2) validated cryptographic modules in the underlying operating system, and provides you with [many options for encrypting data](../security/fundamentals/encryption-overview.md) in transit and at rest, including encryption key management using [Azure Key Vault](../key-vault/general/overview.md). The Key Vault service can store encryption keys in FIPS 140 validated hardware security modules (HSMs) under your control, also known as [customer-managed keys (CMK)](../security/fundamentals/encryption-models.md). Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents [don't see or extract your cryptographic keys](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys).
- **Control over access to data** ΓÇô You can know and control who can access your data and on what terms. Microsoft technical support personnel don't need and don't have default access to your data. For those rare instances where resolving your support requests requires elevated access to your data, [Customer Lockbox for Azure](../security/fundamentals/customer-lockbox-overview.md) puts you in charge of approving or denying data access requests.-- **Tools and protocols to prevent unauthorized deemed export/re-export** ΓÇô Apart from the EAR and ITAR *end-to-end encryption* safe harbor for physical storage locations, the use of encryption also helps protect against a potential deemed export, or deemed re-export, because even if a non-US person has access to the encrypted data, nothing is revealed to non-US person who can't read or understand the data while it's encrypted and thus there is no release of any controlled data. However, ITAR requires some authorization before granting foreign persons with access information that would enable them to decrypt ITAR technical data. Azure offers a wide range of encryption capabilities and solutions, flexibility to choose among encryption options, and robust tools for managing encryption.
+- **Tools and protocols to prevent unauthorized deemed export/re-export** ΓÇô Apart from the EAR and ITAR *end-to-end encryption* safe harbor for physical storage locations, the use of encryption also helps protect against a potential deemed export, or deemed re-export, because even if a non-US person has access to the encrypted data, nothing is revealed to non-US person who can't read or understand the data while it's encrypted and thus there's no release of any controlled data. However, ITAR requires some authorization before granting foreign persons with access information that would enable them to decrypt ITAR technical data. Azure offers a wide range of encryption capabilities and solutions, flexibility to choose among encryption options, and robust tools for managing encryption.
## Location of customer data
azure-government Documentation Government Overview Jps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-jps.md
recommendations: false Previously updated : 03/02/2022 Last updated : 08/30/2022 # Public safety and justice in Azure Government
While the current CMVP FIPS 140 implementation guidance precludes a FIPS 140 val
Proper protection and management of encryption keys is essential for data security. [Azure Key Vault](../key-vault/index.yml) is a cloud service for securely storing and managing secrets. Key Vault enables you to store your encryption keys in hardware security modules (HSMs) that are FIPS 140 validated. For more information, see [Data encryption key management](./azure-secure-isolation-guidance.md#data-encryption-key-management).
-With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. Keys generated inside the Key Vault HSMs aren't exportable ΓÇô there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents do not see or extract your cryptographic keys.**
+With Key Vault, you can import or generate encryption keys in HSMs, ensuring that keys never leave the HSM protection boundary to support *bring your own key* (BYOK) scenarios. Keys generated inside the Key Vault HSMs aren't exportable ΓÇô there can be no clear-text version of the key outside the HSMs. This binding is enforced by the underlying HSM. **Azure Key Vault is designed, deployed, and operated such that Microsoft and its agents don't see or extract your cryptographic keys.** For extra assurances, see [How does Azure Key Vault protect your keys?](../key-vault/managed-hsm/mhsm-control-data.md#how-does-azure-key-vault-managed-hsm-protect-your-keys) Therefore, if you use CMK stored in Azure Key Vault HSMs, you effectively maintain sole ownership of encryption keys.
### Data encryption in transit
azure-monitor Alerts Create New Alert Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md
description: Learn how to create a new alert rule.
Previously updated : 08/03/2022 Last updated : 08/23/2022 # Create a new alert rule
And then defining these elements for the resulting alert actions using:
1. In the **Select a resource** pane, set the scope for your alert rule. You can filter by **subscription**, **resource type**, **resource location**, or do a search.
- You can see the **Available signal types** for your selected resource(s) at the bottom right of the pane. The available signal types change based on the selected resource.
+ The **Available signal types** for your selected resource(s) are at the bottom right of the pane.
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-select-resource.png" alt-text="Screenshot showing the select resource pane for creating new alert rule."::: 1. Select **Include all future resources** to include any future resources added to the selected scope. 1. Select **Done**. 1. Select **Next: Condition>** at the bottom of the page.
-1. In the **Select a signal** pane, the **Signal type**, **Monitor service**, and **Signal name** fields are pre-populated with the available values for your selected scope. You can narrow the signal list using these fields. The **Signal type** determines which [type of alert](alerts-overview.md#types-of-alerts) rule you're creating.
-1. Select the **Signal name**, and follow the steps below depending on the type of alert you're creating.
+1. In the **Select a signal** pane, filter the list of signals using the **Signal type** and **Monitor service**.
+ - **Signal Type**: The [type of alert rule](alerts-overview.md#types-of-alerts) you're creating.
+ - **Monitor service**: The service sending the signal. This list is pre-populated based on the type of alert rule you selected.
+
+ This table describes the services available for each type of alert rule:
+
+ |Signal type |Monitor service |Description |
+ ||||
+   |Metrics|Platform |For metric signals, the monitor service is the metric namespace. 'Platform' means the metrics are provided by the resource provider, namely 'Azure'.|
+ | |Azure.ApplicationInsights|Customer-reported metrics, sent by the Application Insights SDK. |
+ | |Azure.VM.Windows.GuestMetrics |VM guest metrics, collected by an extension running on the VM. Can include built-in operating system perf counters, and custom perf counters. |
+ | |\<your custom namespace\>|A custom metric namespace, containing custom metrics sent with the Azure Monitor Metrics API. |
+   |Log |Log Analytics|The service that provides the 'Custom log search' and 'Log (saved query)' signals. |
+   |Activity log|Activity Log – Administrative|The service that provides the 'Administrative' activity log events. |
+   | |Activity Log – Policy|The service that provides the 'Policy' activity log events. |
+   | |Activity Log – Autoscale|The service that provides the 'Autoscale' activity log events. |
+   | |Activity Log – Security|The service that provides the 'Security' activity log events. |
+ |Resource health|Resource Health|The service that provides the resource-level health status. |
+ |Service health|Service health|The service that provides the subscription-level health status. |
+
+
+1. Select the **Signal name**, and follow the steps in the tab below that corresponds to the type of alert you're creating.
### [Metric alert](#tab/metric) 1. In the **Configure signal logic** pane, you can preview the results of the selected metric signal. Select values for the following fields.
And then defining these elements for the resulting alert actions using:
From this point on, you can select the **Review + create** button at any time. 1. In the **Actions** tab, select or create the required [action groups](./action-groups.md).
+1. (Optional) If you want to make sure that the data processing for the action group takes place within a specific region, you can select one of these regions in which to process the action group:
+ - Sweden Central
+ - Germany West Central
+
+ > [!NOTE]
+ > We are continually adding more regions for regional data processing.
:::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot of the actions tab when creating a new alert rule.":::
-1. In the **Details** tab, define the **Project details** by selecting the **Subscription** and **Resource group**.
+1. In the **Details** tab, define the **Project details**.
+ - Select the **Subscription**.
+ - Select the **Resource group**.
+ - (Optional) If you want to make sure that the data processing for the alert rule takes place within a specific region, and you're creating a metric alert rule that monitors a custom metric, you can select to process the alert rule in one of these regions.
+ - North Europe
+ - West Europe
+ - Sweden Central
+ - Germany West Central
+
1. Define the **Alert rule details**. ### [Metric alert](#tab/metric)
The *sampleActivityLogAlert.parameters.json* file contains the values provided f
## Changes to log alert rule creation experience
-If you're creating a new log alert rule, note that current alert rule wizard is a little different from the earlier experience:
+If you're creating a new log alert rule, please note that the current alert rule wizard is a little different from the earlier experience:
- Previously, search results were included in the payload of the triggered alert and its associated notifications. The email included only 10 rows from the unfiltered results while the webhook payload contained 1000 unfiltered results. To get detailed context information about the alert so that you can decide on the appropriate action: - We recommend using [Dimensions](alerts-types.md#narrow-the-target-using-dimensions). Dimensions provide the column value that fired the alert, giving you context for why the alert fired and how to fix the issue.
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
description: Common issues with Azure Monitor metric alerts and possible solutio
Previously updated : 6/23/2022 Last updated : 8/31/2022 ms:reviwer: harelbr # Troubleshooting problems in Azure Monitor metric alerts
The table below lists the metrics that aren't supported by dynamic thresholds.
| Microsoft.Network/expressRouteGateways | ExpressRouteGatewayPacketsPerSecond | | Microsoft.Network/expressRouteGateways | ExpressRouteGatewayNumberOfVmInVnet | | Microsoft.Network/expressRouteGateways | ExpressRouteGatewayFrequencyOfRoutesChanged |
+| Microsoft.Network/virtualNetworkGateways | ExpressRouteGatewayBitsPerSecond |
| Microsoft.Network/virtualNetworkGateways | ExpressRouteGatewayPacketsPerSecond | | Microsoft.Network/virtualNetworkGateways | ExpressRouteGatewayNumberOfVmInVnet | | Microsoft.Network/virtualNetworkGateways | ExpressRouteGatewayFrequencyOfRoutesChanged |
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Add `-javaagent:"path/to/applicationinsights-agent-3.3.1.jar"` to your applicati
- You can set an environment variable: ```console
- APPLICATIONINSIGHTS_CONNECTION_STRING = <Copy connection string from Application Insights Resource Overview>
+ APPLICATIONINSIGHTS_CONNECTION_STRING=<Copy connection string from Application Insights Resource Overview>
``` - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.3.1.jar` with the following content:
In the `applicationinsights.json` file, you can also configure these settings:
For more information, see [Configuration options](./java-standalone-config.md).
-## Instrumentation libraries
+## Auto-Instrumentation
-Java 3.x includes the following instrumentation libraries.
+Java 3.x includes the following auto-instrumentation.
### Autocollected requests
This section explains how to modify telemetry.
### Add spans
-You can use `opentelemetry-api` to create [tracers](https://opentelemetry.io/docs/instrumentation/java/manual/#tracing) and spans. Spans populate the dependencies table in Application Insights. The string passed in for the span's name is saved to the _target_ field within the dependency.
+You can use `opentelemetry-api` to create [tracers](https://opentelemetry.io/docs/instrumentation/java/manual/#tracing) and spans.
+Spans populate the `requests` and `dependencies` tables in Application Insights.
> [!NOTE] > This feature is only in 3.2.0 and later.
You can use `opentelemetry-api` to create span events, which populate the traces
You can use `opentelemetry-api` to add attributes to spans. These attributes can include adding a custom business dimension to your telemetry. You can also use attributes to set optional fields in the Application Insights schema, such as User ID or Client IP.
-#### Add a custom dimension
-
-Adding one or more custom dimensions populates the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
+Adding one or more span attributes populates the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
> [!NOTE]
> This feature is only in 3.2.0 and later.
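A minimal sketch of adding span attributes with `opentelemetry-api` (the attribute names and values are placeholders, not part of this article):

```java
import io.opentelemetry.api.trace.Span;

public class SpanAttributeExample {

    public static void tagCurrentOperation(String customerId) {
        // Attribute names and values are placeholders; each one surfaces as a custom dimension.
        Span.current().setAttribute("customer.id", customerId);
        Span.current().setAttribute("tenant.region", "westus2");
    }
}
```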
The following table represents currently supported custom telemetry types that y
- Custom metrics are supported through micrometer.
- Custom exceptions and traces are supported through logging frameworks.
-- Custom requests, dependencies, and exceptions are supported through `opentelemetry-api`.
-- Any type of the custom telemetry is supported through the [Application Insights Java 2.x SDK](#send-custom-telemetry-by-using-the-2x-sdk).
+- Custom requests, dependencies, metrics, and exceptions are supported through `opentelemetry-api`.
+- All types of custom telemetry are supported through the [Application Insights Java 2.x SDK](#send-custom-telemetry-by-using-the-2x-sdk).
| Custom telemetry type | Micrometer | Log4j, logback, JUL | 2.x SDK | opentelemetry-api |
|--|--|--|--|--|
The following table represents currently supported custom telemetry types that y
| Exceptions | | Yes | Yes | Yes |
| Page views | | | Yes | |
| Requests | | | Yes | Yes |
-| Traces | | Yes | Yes | Yes |
+| Traces | | Yes | Yes | |
Currently, we're not planning to release an SDK with Application Insights 3.x.
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Or you can set the cloud role instance using the Java system property `applicati
## Sampling
-Sampling is helpful if you need to reduce cost.
-Sampling is performed as a function on the operation ID (also known as trace ID), so that the same operation ID will always result in the same sampling decision. This ensures that you won't get parts of a distributed transaction sampled in while other parts of it are sampled out.
+> [!NOTE]
+> Sampling can be a great way to reduce the cost of Application Insights. Make sure to set up your sampling
+> configuration appropriately for your use case.
+
+Sampling is request-based, meaning if a request is captured (sampled), then so are its dependencies, logs and
+exceptions.
+
+Furthermore, sampling is trace ID based, to help ensure consistent sampling decisions across different services.
+
+### Rate-Limited Sampling
+
+Starting from 3.4.0-BETA, rate-limited sampling is available, and is now the default.
+
+If no sampling has been configured, the default is now rate-limited sampling configured to capture at most
+(approximately) 5 requests per second. This replaces the prior default which was to capture all requests.
+If you still wish to capture all requests, use [fixed-percentage sampling](#fixed-percentage-sampling) and set the
+sampling percentage to 100.
+
+> [!NOTE]
+> The rate-limited sampling is approximate, because internally it must adapt a "fixed" sampling percentage over
+> time in order to emit accurate item counts on each telemetry record. Internally, the rate-limited sampling is
+> tuned to adapt quickly (0.1 seconds) to new application loads, so you should not see it exceed the configured rate by
+> much, or for very long.
+
+Here is an example of how to set the sampling to capture at most (approximately) 1 request per second:
+
+```json
+{
+ "sampling": {
+ "limitPerSecond": 1.0
+ }
+}
+```
+
+Note that `limitPerSecond` can be a decimal, so you can configure it to capture less than one request per second if you
+wish.
+
+You can also set the rate limit using the environment variable `APPLICATIONINSIGHTS_SAMPLING_LIMIT_PER_SECOND`
+(which will then take precedence over the rate limit specified in the JSON configuration).
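For instance, in a bash-style shell you could set it before launching the agent-attached JVM (the jar path and application name below are placeholders):

```console
export APPLICATIONINSIGHTS_SAMPLING_LIMIT_PER_SECOND=1.0
java -javaagent:"path/to/applicationinsights-agent.jar" -jar myapp.jar
```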
-For example, if you set sampling to 10%, you will only see 10% of your transactions, but each one of those 10% will have full end-to-end transaction details.
+### Fixed-Percentage Sampling
-Here is an example how to set the sampling to capture approximately **1/3 of all transactions** - make sure you set the sampling rate that is correct for your use case:
+Here is an example of how to set the sampling to capture approximately a third of all requests:
```json {
You can also set the sampling percentage using the environment variable `APPLICA
(which will then take precedence over the sampling percentage specified in the JSON configuration).

> [!NOTE]
-> For the sampling percentage, choose a percentage that is close to 100/N where N is an integer. Currently sampling doesn't support other values.
+> For the sampling percentage, choose a percentage that is close to 100/N where N is an integer.
+> Currently sampling doesn't support other values.
## Sampling overrides (preview)
Starting from version 3.2.0, if you want to set a custom dimension programmatica
} ```
+## Connection string overrides (preview)
+
+This feature is in preview, starting from 3.4.0-BETA.
+
+Connection string overrides allow you to override the [default connection string](#connection-string), for example:
+* Set one connection string for one http path prefix `/myapp1`.
+* Set another connection string for another http path prefix `/myapp2/`.
+
+```json
+{
+ "preview": {
+ "connectionStringOverrides": [
+ {
+ "httpPathPrefix": "/myapp1",
+ "connectionString": "12345678-0000-0000-0000-0FEEDDADBEEF"
+ },
+ {
+ "httpPathPrefix": "/myapp2",
+ "connectionString": "87654321-0000-0000-0000-0FEEDDADBEEF"
+ }
+ ]
+ }
+}
+```
+
## Instrumentation key overrides (preview)

This feature is in preview, starting from 3.2.3.
These are the valid `level` values that you can specify in the `applicationinsig
### LoggingLevel
-Starting from version 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is aleady captured in the `SeverityLevel` field.
+Starting from version 3.3.0, `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field.
If needed, you can re-enable the previous behavior:
To disable auto-collection of Micrometer metrics (including Spring Boot Actuator
} ```
+## JDBC query masking
+
+Literal values in JDBC queries are masked by default in order to avoid accidentally capturing sensitive data.
+
+Starting from 3.4.0-BETA, this behavior can be disabled if desired, for example:
+
+```json
+{
+ "instrumentation": {
+ "jdbc": {
+ "masking": {
+ "enabled": false
+ }
+ }
+ }
+}
+```
+
## HTTP headers

Starting from version 3.3.0, you can capture request and response headers on your server (request) telemetry:
azure-monitor Java Standalone Sampling Overrides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-sampling-overrides.md
To begin, create a configuration file named *applicationinsights.json*. Save it
```json {
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "connectionString": "...",
"sampling": { "percentage": 10 },
To begin, create a configuration file named *applicationinsights.json*. Save it
} ```
+> [!NOTE]
+> Starting from 3.4.0-BETA, `telemetryKind` of `request`, `dependency`, `trace` (log), or `exception` is supported
+> (and should be set) on all sampling overrides, e.g.
+> ```json
+> {
+> "connectionString": "...",
+> "sampling": {
+> "percentage": 10
+> },
+> "preview": {
+> "sampling": {
+> "overrides": [
+> {
+> "telemetryKind": "request",
+> "attributes": [
+> ...
+> ],
+> "percentage": 0
+> },
+> {
+> "telemetryKind": "request",
+> "attributes": [
+> ...
+> ],
+> "percentage": 100
+> }
+> ]
+> }
+> }
+> }
+> ```
+
## How it works

When a span is started, the attributes present on the span at that time are used to check if any of the sampling
overrides match.
Matches can be either `strict` or `regexp`. Regular expression matches are performed against the entire attribute value, so if you want to match a value that contains `abc` anywhere in it, then you need to use `.*abc.*`.
+A sampling override can specify multiple attribute criteria, in which case all of them must match for the sampling
+override to match.
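For illustration, an override that only matches when two attribute criteria are both satisfied might look like the following sketch; the attribute keys and the `key`/`value`/`matchType` field names are assumptions based on the surrounding examples, so verify them against the complete samples in this article:

```json
{
  "connectionString": "...",
  "preview": {
    "sampling": {
      "overrides": [
        {
          "telemetryKind": "request",
          "attributes": [
            {
              "key": "http.method",
              "value": "GET",
              "matchType": "strict"
            },
            {
              "key": "http.url",
              "value": ".*healthcheck.*",
              "matchType": "regexp"
            }
          ],
          "percentage": 0
        }
      ]
    }
  }
}
```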
-If one of the sampling overrides match, then its sampling percentage is used to decide whether to sample the span or
+If one of the sampling overrides matches, then its sampling percentage is used to decide whether to sample the span or
not. Only the first sampling override that matches is used. If no sampling overrides match:
-* If this is the first span in the trace, then the [default sampling percentage](./java-standalone-config.md#sampling)
- is used.
+* If this is the first span in the trace, then the
+ [top-level sampling configuration](./java-standalone-config.md#sampling) is used.
* If this is not the first span in the trace, then the parent sampling decision is used.
-> [!WARNING]
-> When a decision has been made to not collect a span, then all downstream spans will also not be collected,
-> even if there are sampling overrides that match the downstream span.
-> This behavior is necessary because otherwise broken traces would result, with downstream spans being collected
-> but being parented to spans that were not collected.
- > [!NOTE]
-> The sampling decision is based on hashing the traceId (also known as the operationId) to a number between 0 and 100,
-> and that hash is then compared to the sampling percentage.
-> Since all spans in a given trace will have the same traceId, they will have the same hash,
-> and so the sampling decision will be consistent across the whole trace.
+> Starting from 3.4.0-BETA, sampling overrides do not apply to "standalone" telemetry by default. Standalone telemetry
+> is any telemetry that is not associated with a request, e.g. startup logs.
+> You can make a sampling override apply to standalone telemetry by including the attribute
+> `includingStandaloneTelemetry` in the sampling override, e.g.
+> ```json
+> {
+> "connectionString": "...",
+> "preview": {
+> "sampling": {
+> "overrides": [
+> {
+> "telemetryKind": "dependency",
+> "includingStandaloneTelemetry": true,
+> "attributes": [
+> ...
+> ],
+> "percentage": 0
+> }
+> ]
+> }
+> }
+> }
+> ```
## Example: Suppress collecting telemetry for health checks
This will also suppress collecting any downstream spans (dependencies) that woul
```json {
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "connectionString": "...",
"preview": { "sampling": { "overrides": [
This will suppress collecting telemetry for all `GET my-noisy-key` redis calls.
```json {
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "connectionString": "...",
"preview": { "sampling": { "overrides": [
This will suppress collecting telemetry for all `GET my-noisy-key` redis calls.
} ```
+> [!NOTE]
+> Starting from 3.4.0-BETA, `telemetryKind` is supported (and recommended) on all sampling overrides, e.g.
+
## Example: Collect 100% of telemetry for an important request type

This will collect 100% of telemetry for `/login`.
those will also be collected for all '/login' requests.
```json {
- "connectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000",
+ "connectionString": "...",
"sampling": { "percentage": 10 },
azure-resource-manager Linter Rule No Loc Expr Outside Params https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-no-loc-expr-outside-params.md
Title: Linter rule - no location expressions outside of parameter default values description: Linter rule - no location expressions outside of parameter default values Previously updated : 1/6/2022 Last updated : 8/30/2022 # Linter rule - no location expressions outside of parameter default values
You can fix the failure by turning the variable into a parameter:
param location string = resourceGroup().location
```
+If you're using Azure PowerShell to deploy to a subscription, management group, or tenant, you should use a parameter name other than `location`. The [New-AzDeployment](/powershell/module/az.resources/new-azdeployment), [New-AzManagementGroupDeployment](/powershell/module/az.resources/new-azmanagementgroupdeployment), and [New-AzTenantDeployment](/powershell/module/az.resources/new-aztenantdeployment) commands have a parameter named `location`. This command parameter conflicts with the parameter in your Bicep file. You can avoid this conflict by using a name such as `rgLocation`.
+
+You can use `location` for a parameter name when deploying to a resource group, because [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) doesn't have a parameter named `location`.
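As an illustrative sketch (the resource group name and API version below are placeholders, not prescribed by this article), a subscription-scope Bicep file could declare the parameter like this:

```bicep
targetScope = 'subscription'

// Named rgLocation to avoid clashing with the -Location parameter of New-AzDeployment
param rgLocation string = deployment().location

resource exampleRg 'Microsoft.Resources/resourceGroups@2021-04-01' = {
  name: 'example-rg'
  location: rgLocation
}
```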
+
## Next steps

For more information about the linter, see [Use Bicep linter](./linter.md).
azure-resource-manager Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md
Title: Extension resource types description: Lists the Azure resource types are used to extend the capabilities of other resource types. Previously updated : 08/24/2022 Last updated : 08/31/2022 # Resource types that extend capabilities of other resources
An extension resource is a resource that adds to another resource's capabilities
* cloudServiceSlots * networkManagerConnections
+## Microsoft.OperationalInsights
+
+* storageInsightConfigs
+ ## Microsoft.PolicyInsights * attestations
azure-resource-manager Resources Without Resource Group Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resources-without-resource-group-limit.md
Title: Resources without 800 count limit description: Lists the Azure resource types that can have more than 800 instances in a resource group. Previously updated : 08/10/2022 Last updated : 08/31/2022 # Resources not limited to 800 instances per resource group
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.BotService
-* botServices - By default, limited to 800 instances. That limit can be increased by contacting support.
+* botServices - By default, limited to 800 instances. That limit can be increased by [registering the following feature](preview-features.md): `Microsoft.Resources/ARMDisableResourcesPerRGLimit`.
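These preview features are registered through the standard feature registration workflow. As a sketch with Azure PowerShell (Az.Resources module), using the flag named above:

```powershell
# Register the feature flag that removes the 800-instance limit for this resource type
Register-AzProviderFeature -ProviderNamespace Microsoft.Resources -FeatureName ARMDisableResourcesPerRGLimit

# Check the registration state (it can take a while to reach "Registered")
Get-AzProviderFeature -ProviderNamespace Microsoft.Resources -FeatureName ARMDisableResourcesPerRGLimit
```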
## Microsoft.Compute
Some resources have a limit on the number instances per region. This limit is di
* snapshots * virtualMachines * virtualMachines/extensions
-* virtualMachineScaleSets - By default, limited to 800 instances. That limit can be increased by contacting support.
+* virtualMachineScaleSets - By default, limited to 800 instances. That limit can be increased by [registering the following feature](preview-features.md): `Microsoft.Resources/ARMDisableResourcesPerRGLimit`.
## Microsoft.ContainerInstance
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.DevTestLab
-* labs/virtualMachines - By default, limited to 800 instances. That limit can be increased by contacting support.
+* labs/virtualMachines - By default, limited to 800 instances. That limit can be increased by [registering the following feature](preview-features.md): `Microsoft.DevTestLab/DisableLabVirtualMachineQuota`.
* schedules ## Microsoft.EdgeOrder
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.NotificationHubs
-* namespaces - By default, limited to 800 instances. That limit can be increased by contacting support.
-* namespaces/notificationHubs - By default, limited to 800 instances. That limit can be increased by contacting support.
+* namespaces - By default, limited to 800 instances. That limit can be increased by [registering the following feature](preview-features.md): `Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit`.
+* namespaces/notificationHubs - By default, limited to 800 instances. That limit can be increased by [registering the following feature](preview-features.md): `Microsoft.NotificationHubs/ARMDisableResourcesPerRGLimit`.
## Microsoft.PowerBI
-* workspaceCollections - By default, limited to 800 instances. That limit can be increased by contacting support.
+* workspaceCollections - By default, limited to 800 instances. That limit can be increased by [registering the following feature](preview-features.md): `Microsoft.PowerBI/UnlimitedQuota`.
## Microsoft.PowerBIDedicated
-* autoScaleVCores - By default, limited to 800 instances. That limit can be increased by contacting support.
-* capacities - By default, limited to 800 instances. That limit can be increased by contacting support.
+* autoScaleVCores - By default, limited to 800 instances. That limit can be increased by [registering the following feature](preview-features.md): `Microsoft.PowerBIDedicated/UnlimitedResourceGroupQuota`.
+* capacities - By default, limited to 800 instances. That limit can be increased by [registering the following feature](preview-features.md): `Microsoft.PowerBIDedicated/UnlimitedResourceGroupQuota`.
## Microsoft.Relay
Some resources have a limit on the number instances per region. This limit is di
## Microsoft.StreamAnalytics
-* streamingjobs - By default, limited to 800 instances. That limit can be increased by contacting support.
+* streamingjobs - By default, limited to 800 instances. That limit can be increased by [registering the following feature](preview-features.md): `Microsoft.StreamAnalytics/ASADisableARMResourcesPerRGLimit`.
## Microsoft.Web
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources description: Shows which Azure resource types support tags. Provides details for all Azure services. Previously updated : 08/10/2022 Last updated : 08/31/2022 # Tag support for Azure resources
To get the same data as a file of comma-separated values, download [tag-support.
> | configurationProfiles / versions | Yes | Yes | > | patchJobConfigurations | Yes | Yes | > | patchJobConfigurations / patchJobs | No | No |
+> | patchSchedules | Yes | Yes |
+> | patchSchedules / associations | Yes | Yes |
> | patchTiers | Yes | Yes | > | servicePrincipals | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | accounts | Yes | Yes | > | accounts / datapools | No | No | > | workspaces | Yes | Yes |
+> | workspaces / eventgridfilters | No | No |
## Microsoft.AutonomousSystems
To get the same data as a file of comma-separated values, download [tag-support.
> | catalogs / deviceRegistrations | Yes | Yes | > | catalogs / provisioningPackages | Yes | Yes |
+## Microsoft.AzureSphereV2
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | catalogs | Yes | Yes |
+> | catalogs / certificates | No | No |
+> | catalogs / deviceRegistrations | Yes | Yes |
+> | catalogs / provisioningPackages | Yes | Yes |
+ ## Microsoft.AzureStack > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | clusters / offers | No | No | > | clusters / publishers | No | No | > | clusters / publishers / offers | No | No |
-> | galleryimages | Yes | Yes |
-> | marketplacegalleryimages | Yes | Yes |
+> | clusters / updates | No | No |
+> | clusters / updates / updateRuns | No | No |
+> | clusters / updateSummaries | No | No |
+> | galleryImages | Yes | Yes |
+> | marketplaceGalleryImages | Yes | Yes |
> | networkinterfaces | Yes | Yes |
-> | storagecontainers | Yes | Yes |
+> | storageContainers | Yes | Yes |
> | virtualharddisks | Yes | Yes | > | virtualmachines | Yes | Yes |
-> | virtualmachines / extensions | Yes | Yes |
+> | virtualMachines / extensions | Yes | Yes |
> | virtualmachines / hybrididentitymetadata | No | No | > | virtualnetworks | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | billingAccounts / createBillingRoleAssignment | No | No | > | billingAccounts / customers | No | No | > | billingAccounts / customers / billingPermissions | No | No |
+> | billingAccounts / customers / billingRoleAssignments | No | No |
+> | billingAccounts / customers / billingRoleDefinitions | No | No |
> | billingAccounts / customers / billingSubscriptions | No | No |
+> | billingAccounts / customers / createBillingRoleAssignment | No | No |
> | billingAccounts / customers / initiateTransfer | No | No | > | billingAccounts / customers / policies | No | No | > | billingAccounts / customers / products | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | snapshots | Yes | Yes | > | sshPublicKeys | Yes | Yes | > | virtualMachines | Yes | Yes |
+> | virtualMachines / applications | Yes | Yes |
> | virtualMachines / extensions | Yes | Yes | > | virtualMachines / metricDefinitions | No | No | > | virtualMachines / runCommands | Yes | Yes | > | virtualMachineScaleSets | Yes | Yes |
+> | virtualMachineScaleSets / applications | No | No |
> | virtualMachineScaleSets / extensions | No | No | > | virtualMachineScaleSets / networkInterfaces | No | No | > | virtualMachineScaleSets / publicIPAddresses | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | VirtualMachines / GuestAgents | No | No | > | VirtualMachines / HybridIdentityMetadata | No | No | > | VirtualMachines / InstallPatches | No | No |
+> | VirtualMachines / UpgradeExtensions | No | No |
> | VirtualMachineTemplates | Yes | Yes | > | VirtualNetworks | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | devcenters / images | No | No | > | networkconnections | Yes | Yes | > | projects | Yes | Yes |
+> | projects / allowedEnvironmentTypes | No | No |
> | projects / attachednetworks | No | No | > | projects / devboxdefinitions | No | No | > | projects / environmentTypes | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | cassandraClusters | Yes | Yes | > | databaseAccountNames | No | No | > | databaseAccounts | Yes | Yes |
+> | databaseAccounts / encryptionScopes | No | No |
> | restorableDatabaseAccounts | No | No | ## Microsoft.DomainRegistration
To get the same data as a file of comma-separated values, download [tag-support.
> | networkFunctionPublishers / networkFunctionDefinitionGroups | No | No | > | networkFunctionPublishers / networkFunctionDefinitionGroups / publisherNetworkFunctionDefinitionVersions | No | No | > | networkfunctions | Yes | Yes |
+> | networkfunctions / components | No | No |
> | networkFunctionVendors | No | No | > | publishers | Yes | Yes | > | publishers / artifactStores | Yes | Yes |
To get the same data as a file of comma-separated values, download [tag-support.
> | mobileNetworks / simPolicies | Yes | Yes | > | mobileNetworks / sites | Yes | Yes | > | mobileNetworks / slices | Yes | Yes |
-> | networks | Yes | Yes |
-> | networks / sites | Yes | Yes |
> | packetCoreControlPlanes | Yes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes | Yes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | Yes | Yes | > | packetCoreControlPlaneVersions | No | No |
-> | packetCores | Yes | Yes |
> | simGroups | Yes | Yes | > | simGroups / sims | No | No | > | sims | Yes | Yes |
-> | sims / simProfiles | Yes | Yes |
## Microsoft.Monitor
To get the same data as a file of comma-separated values, download [tag-support.
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | energyServices | Yes | Yes |
+> | energyServices / privateEndpointConnectionProxies | No | No |
+> | energyServices / privateEndpointConnections | No | No |
+> | energyServices / privateLinkResources | No | No |
## Microsoft.OpenLogisticsPlatform
To get the same data as a file of comma-separated values, download [tag-support.
> | workspaces / shares | No | No | > | workspaces / shareSubscriptions | No | No |
+## Microsoft.OperationalInsights
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | clusters | Yes | Yes |
+> | deletedWorkspaces | No | No |
+> | linkTargets | No | No |
+> | querypacks | Yes | Yes |
+> | storageInsightConfigs | No | No |
+> | workspaces | Yes | Yes |
+> | workspaces / dataExports | No | No |
+> | workspaces / dataSources | No | No |
+> | workspaces / linkedServices | No | No |
+> | workspaces / linkedStorageAccounts | No | No |
+> | workspaces / metadata | No | No |
+> | workspaces / networkSecurityPerimeterAssociationProxies | No | No |
+> | workspaces / networkSecurityPerimeterConfigurations | No | No |
+> | workspaces / query | No | No |
+> | workspaces / scopedPrivateLinkProxies | No | No |
+> | workspaces / storageInsightConfigs | No | No |
+> | workspaces / tables | No | No |
+ ## Microsoft.Orbital > [!div class="mx-tableFixed"]
To get the same data as a file of comma-separated values, download [tag-support.
> | - | -- | -- | > | playerAccountPools | Yes | Yes | > | titles | Yes | Yes |
+> | titles / automationRules | No | No |
> | titles / segments | No | No | > | titles / titleDataSets | No | No | > | titles / titleInternalDataKeyValues | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | testBaseAccounts / emailEvents | No | No | > | testBaseAccounts / externalTestTools | No | No | > | testBaseAccounts / externalTestTools / testCases | No | No |
+> | testBaseAccounts / featureUpdateSupportedOses | No | No |
> | testBaseAccounts / flightingRings | No | No | > | testBaseAccounts / packages | Yes | Yes | > | testBaseAccounts / packages / favoriteProcesses | No | No |
To get the same data as a file of comma-separated values, download [tag-support.
> | environments / privateLinkResources | No | No | > | environments / referenceDataSets | Yes | No |
+## Microsoft.UsageBilling
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | accounts | Yes | Yes |
+ ## Microsoft.VideoIndexer > [!div class="mx-tableFixed"]
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 08/10/2022 Last updated : 08/31/2022 # Deletion of Azure resources for complete mode deployments
The resources are listed by resource provider namespace. To match a resource pro
> | configurationProfiles / versions | Yes | > | patchJobConfigurations | Yes | > | patchJobConfigurations / patchJobs | No |
+> | patchSchedules | Yes |
+> | patchSchedules / associations | Yes |
> | patchTiers | Yes | > | servicePrincipals | No |
The resources are listed by resource provider namespace. To match a resource pro
> | accounts | Yes | > | accounts / datapools | No | > | workspaces | Yes |
+> | workspaces / eventgridfilters | No |
## Microsoft.AutonomousSystems
The resources are listed by resource provider namespace. To match a resource pro
> | catalogs / deviceRegistrations | Yes | > | catalogs / provisioningPackages | Yes |
+## Microsoft.AzureSphereV2
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | catalogs | Yes |
+> | catalogs / certificates | No |
+> | catalogs / deviceRegistrations | Yes |
+> | catalogs / provisioningPackages | Yes |
+ ## Microsoft.AzureStack > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | clusters / offers | No | > | clusters / publishers | No | > | clusters / publishers / offers | No |
-> | galleryimages | Yes |
-> | marketplacegalleryimages | Yes |
+> | clusters / updates | No |
+> | clusters / updates / updateRuns | No |
+> | clusters / updateSummaries | No |
+> | galleryImages | Yes |
+> | marketplaceGalleryImages | Yes |
> | networkinterfaces | Yes |
-> | storagecontainers | Yes |
+> | storageContainers | Yes |
> | virtualharddisks | Yes | > | virtualmachines | Yes |
-> | virtualmachines / extensions | Yes |
+> | virtualMachines / extensions | Yes |
> | virtualmachines / hybrididentitymetadata | No | > | virtualnetworks | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | billingAccounts / createBillingRoleAssignment | No | > | billingAccounts / customers | No | > | billingAccounts / customers / billingPermissions | No |
+> | billingAccounts / customers / billingRoleAssignments | No |
+> | billingAccounts / customers / billingRoleDefinitions | No |
> | billingAccounts / customers / billingSubscriptions | No |
+> | billingAccounts / customers / createBillingRoleAssignment | No |
> | billingAccounts / customers / initiateTransfer | No | > | billingAccounts / customers / policies | No | > | billingAccounts / customers / products | No |
The resources are listed by resource provider namespace. To match a resource pro
> | snapshots | Yes | > | sshPublicKeys | Yes | > | virtualMachines | Yes |
+> | virtualMachines / applications | Yes |
> | virtualMachines / extensions | Yes | > | virtualMachines / metricDefinitions | No | > | virtualMachines / runCommands | Yes | > | virtualMachineScaleSets | Yes |
+> | virtualMachineScaleSets / applications | No |
> | virtualMachineScaleSets / extensions | No | > | virtualMachineScaleSets / networkInterfaces | No | > | virtualMachineScaleSets / publicIPAddresses | No |
The resources are listed by resource provider namespace. To match a resource pro
> | VirtualMachines / GuestAgents | No | > | VirtualMachines / HybridIdentityMetadata | No | > | VirtualMachines / InstallPatches | No |
+> | VirtualMachines / UpgradeExtensions | No |
> | VirtualMachineTemplates | Yes | > | VirtualNetworks | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | devcenters / images | No | > | networkconnections | Yes | > | projects | Yes |
+> | projects / allowedEnvironmentTypes | No |
> | projects / attachednetworks | No | > | projects / devboxdefinitions | No | > | projects / environmentTypes | No |
The resources are listed by resource provider namespace. To match a resource pro
> | cassandraClusters | Yes | > | databaseAccountNames | No | > | databaseAccounts | Yes |
+> | databaseAccounts / encryptionScopes | No |
> | restorableDatabaseAccounts | No | ## Microsoft.DomainRegistration
The resources are listed by resource provider namespace. To match a resource pro
> | networkFunctionPublishers / networkFunctionDefinitionGroups | No | > | networkFunctionPublishers / networkFunctionDefinitionGroups / publisherNetworkFunctionDefinitionVersions | No | > | networkfunctions | Yes |
+> | networkfunctions / components | No |
> | networkFunctionVendors | No | > | publishers | Yes | > | publishers / artifactStores | Yes |
The resources are listed by resource provider namespace. To match a resource pro
> | mobileNetworks / simPolicies | Yes | > | mobileNetworks / sites | Yes | > | mobileNetworks / slices | Yes |
-> | networks | Yes |
-> | networks / sites | Yes |
> | packetCoreControlPlanes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes | Yes | > | packetCoreControlPlanes / packetCoreDataPlanes / attachedDataNetworks | Yes | > | packetCoreControlPlaneVersions | No |
-> | packetCores | Yes |
> | simGroups | Yes | > | simGroups / sims | No | > | sims | Yes |
-> | sims / simProfiles | Yes |
## Microsoft.Monitor
The resources are listed by resource provider namespace. To match a resource pro
> | Resource type | Complete mode deletion | > | - | -- | > | energyServices | Yes |
+> | energyServices / privateEndpointConnectionProxies | No |
+> | energyServices / privateEndpointConnections | No |
+> | energyServices / privateLinkResources | No |
## Microsoft.OpenLogisticsPlatform
The resources are listed by resource provider namespace. To match a resource pro
> | workspaces / shares | No | > | workspaces / shareSubscriptions | No |
+## Microsoft.OperationalInsights
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | clusters | Yes |
+> | deletedWorkspaces | No |
+> | linkTargets | No |
+> | querypacks | Yes |
+> | storageInsightConfigs | No |
+> | workspaces | Yes |
+> | workspaces / dataExports | No |
+> | workspaces / dataSources | No |
+> | workspaces / linkedServices | No |
+> | workspaces / linkedStorageAccounts | No |
+> | workspaces / metadata | No |
+> | workspaces / networkSecurityPerimeterAssociationProxies | No |
+> | workspaces / networkSecurityPerimeterConfigurations | No |
+> | workspaces / query | No |
+> | workspaces / scopedPrivateLinkProxies | No |
+> | workspaces / storageInsightConfigs | No |
+> | workspaces / tables | No |
+ ## Microsoft.Orbital > [!div class="mx-tableFixed"]
The resources are listed by resource provider namespace. To match a resource pro
> | - | -- | > | playerAccountPools | Yes | > | titles | Yes |
+> | titles / automationRules | No |
> | titles / segments | No | > | titles / titleDataSets | No | > | titles / titleInternalDataKeyValues | No |
The resources are listed by resource provider namespace. To match a resource pro
> | testBaseAccounts / emailEvents | No | > | testBaseAccounts / externalTestTools | No | > | testBaseAccounts / externalTestTools / testCases | No |
+> | testBaseAccounts / featureUpdateSupportedOses | No |
> | testBaseAccounts / flightingRings | No | > | testBaseAccounts / packages | Yes | > | testBaseAccounts / packages / favoriteProcesses | No |
The resources are listed by resource provider namespace. To match a resource pro
> | environments / privateLinkResources | No | > | environments / referenceDataSets | Yes |
+## Microsoft.UsageBilling
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | accounts | Yes |
+ ## Microsoft.VideoIndexer > [!div class="mx-tableFixed"]
azure-video-indexer Accounts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/accounts-overview.md
This article gives an overview of Azure Video Indexer account types and provides links to other articles for more details.
-## Overview
+## A trial account
-The first time you visit the [www.videoindexer.ai/](https://www.videoindexer.ai/) website, a trial account is automatically created. A trial Azure Video Indexer account has limitation on number of indexing minutes, support, and SLA.
+The first time you visit the [Azure Video Indexer](https://www.videoindexer.ai/) website, a trial account is automatically created. The trial Azure Video Indexer account has limitations on the number of indexing minutes, support, and SLA.
-With a trial, account Azure Video Indexer provides up to 600 minutes of free indexing to users and up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal).
+With a trial account, Azure Video Indexer provides:
-The trial account is not available on the Azure Government cloud. For other Azure Government limitations, see [Limitations of Azure Video Indexer on Azure Government](connect-to-azure.md#limitations-of-azure-video-indexer-on-azure-government).
+* up to 600 minutes of free indexing to users of the [Azure Video Indexer](https://www.videoindexer.ai/) website, and
+* up to 2400 minutes of free indexing to users that subscribe to the Azure Video Indexer API on the [developer portal](https://aka.ms/avam-dev-portal).
+
+When using the trial account, you don't have to set up an Azure subscription.
+
+The trial account option is not available on the Azure Government cloud. For other Azure Government limitations, see [Limitations of Azure Video Indexer on Azure Government](connect-to-azure.md#limitations-of-azure-video-indexer-on-azure-government).
+
+## A paid (unlimited) account
You can later create a paid account where you're not limited by the quota. Two types of paid accounts are available to you: Azure Resource Manager (ARM) (currently in preview) and classic (generally available). The main difference between the two is the account management platform. While classic accounts are built on API Management, ARM-based account management is built on Azure, which enables you to apply access control to all services with role-based access control (Azure RBAC) natively.
-Make sure to review [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
+With the paid option, you pay for indexed minutes. For more information, see [Azure Video Indexer pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
-## Connecting to Azure subscription
+When creating a new paid account, you need to connect the Azure Video Indexer account to your Azure subscription and an Azure Media Services account.
-With a trial account, you don't have to set up an Azure subscription. When creating a paid account, you need to connect Azure Video Indexer [to your Azure subscription and an Azure Media Services account](connect-to-azure.md).
+**The recommended paid account type is the ARM-based account**.
## To get access to your account
With a trial account, you don't have to set up an Azure subscription. When creat
## Create accounts
-* ARM accounts: **The recommended paid account type is the ARM-based account**.
+* Creating ARM accounts. Make sure you are signed in with the correct domain to the [Azure Video Indexer website](https://www.videoindexer.ai/). For details, see [Switch tenants](switch-tenants-portal.md).
- * You can create an Azure Video Indexer **ARM-based** account through one of the following:
+ * You can create an Azure Video Indexer ARM-based account through one of the following:
- 1. [Azure Video Indexer portal](https://aka.ms/vi-portal-link)
- 2. [Azure portal](https://portal.azure.com/#home)
+ 1. The [Azure Video Indexer website](https://aka.ms/vi-portal-link)
+ 2. The [Azure portal](https://portal.azure.com/#home)
For the detailed description, see [Get started with Azure Video Indexer in Azure portal](create-account-portal.md).
* Upgrade a trial account to an ARM-based account and [import your content for free](import-content-from-trial.md).
-* Classic accounts: [Create classic accounts using API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account).
-* Connect a classic account to ARM: [Connect an existing classic paid Azure Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md).
+* [Create classic accounts using API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Create-Paid-Account).
+* [Connect an existing classic paid Azure Video Indexer account to an ARM-based account](connect-classic-account-to-arm.md).
## Limited access features
For more information, see [Azure Video Indexer limited access features](limited-
## Next steps
-[Pricing](https://azure.microsoft.com/pricing/details/video-indexer/)
+Make sure to review [Pricing](https://azure.microsoft.com/pricing/details/video-indexer/).
azure-video-indexer Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md
Also, add a new "Shared Access Protocol" parameter. Choose HttpsOnly for the val
![SAS uri by path](./media/logic-apps-connector-tutorial/sas-uri-by-path.jpg)
-Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#account-id) to get the Azure Video Indexer account token.
+Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure Video Indexer account token.
![Get account access token](./media/logic-apps-connector-tutorial/account-access-token.png)
Create the second flow separate from the first one.
To set up this flow, you will need to provide your Azure Video Indexer API Key and Azure Storage credentials again. You will need to update the same parameters as you did for the first flow.
-For your trigger, you will see a HTTP POST URL field. The URL won't be generated until after you save your flow; however, you will need the URL eventually. We will come back to this.
+For your trigger, you will see an HTTP POST URL field. The URL won't be generated until after you save your flow; however, you will need the URL eventually. We will come back to this.
-Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#account-id) to get the Azure Video Indexer account token.
+Fill out [your account location](regions.md) and [account ID](./video-indexer-use-apis.md#operational-api-calls) to get the Azure Video Indexer account token.
Go to the "Get Video Index" action and fill out the required parameters. For Video ID, put in the following expression: triggerOutputs()['queries']['id']
azure-video-indexer Video Indexer Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-get-started.md
[!INCLUDE [Gate notice](./includes/face-limited-access.md)]
-This quickstart shows you how to sign in to the Azure Video Indexer [website](https://www.videoindexer.ai/) and how to upload your first video. When visiting the Azure Video Indexer website for the first time, the free trial account is automatically created for you. With the free trial account, you get a certain number of free indexing minutes. When creating an unlimited/paid account, you aren't limited by the quota.
+This quickstart shows you how to sign in to the Azure Video Indexer [website](https://www.videoindexer.ai/) and how to upload your first video. When visiting the Azure Video Indexer website for the first time, a trial account is automatically created for you. With the trial account, you get a certain number of free indexing minutes. You can later add a paid (ARM-based or classic) account. With the paid option, you pay for indexed minutes.
-With free trial, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With paid option, you create an Azure Video Indexer account that is [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed, for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
-
-For details about available accounts, see [Azure Video Indexer account types](accounts-overview.md).
+For details about available accounts (trial and paid options), see [Azure Video Indexer account types](accounts-overview.md).
## Sign up for Azure Video Indexer
Once you start using Azure Video Indexer, all your stored data and uploaded cont
## Upload a video using the Azure Video Indexer website
+### Supported browsers
+
+For more information, see [supported browsers](video-indexer-overview.md#supported-browsers).
+
### Supported file formats for Azure Video Indexer

See the [input container/file formats](/azure/media-services/latest/encode-media-encoder-standard-formats-reference) article for a list of file formats that you can use with Azure Video Indexer.
See the [input container/file formats](/azure/media-services/latest/encode-media
> [!div class="mx-imgBorder"] > :::image type="content" source="./media/video-indexer-get-started/progress.png" alt-text="Progress of the upload":::
- The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
+ The produced JSON output contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
+
1. Once Azure Video Indexer is done analyzing, you'll get an email with a link to your video and a short description of what was found in your video. For example: people, spoken and written words, topics, and named entities.
1. You can later find your video in the library list and perform different operations. For example: search, reindex, edit.
For more details, see [Upload and index videos](upload-index-videos.md).
To start using the APIs, see [use APIs](video-indexer-use-apis.md)
-## Supported browsers
-
-For more information, see [supported browsers](video-indexer-overview.md#supported-browsers).
-
## Next steps

For a detailed introduction, please visit our [introduction lab](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/IntroToVideoIndexer.md).
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Azure Video Indexer consolidates various audio and video artificial intelligence (AI) technologies offered by Microsoft into one integrated service, making development simpler. The APIs are designed to enable developers to focus on consuming Media AI technologies without worrying about scale, global reach, availability, and reliability of cloud platforms. You can use the API to upload your files, get detailed video insights, get URLs of embeddable insight and player widgets, and more.
-When creating an Azure Video Indexer account, you can choose a trial account (where you get a certain number of free indexing minutes) or a paid option (where you're not limited by the quota). With a trial, account, Azure Video Indexer provides up to 600 minutes of free indexing to website users and up to 2400 minutes of free indexing to API users. With a paid option, you create an Azure Video Indexer account that's [connected to your Azure subscription and an Azure Media Services account](connect-to-azure.md). You pay for minutes indexed, for more information, see [Media Services pricing](https://azure.microsoft.com/pricing/details/media-services/).
+When visiting the [Azure Video Indexer](https://www.videoindexer.ai/) website for the first time, a trial account is automatically created for you. With the trial account, you get a certain number of free indexing minutes. You can later add a paid (ARM-based or classic) account. With the paid option, you pay for indexed minutes.
+
+For details about available accounts (trial and paid options), see [Azure Video Indexer account types](accounts-overview.md).
This article shows how the developers can take advantage of the [Azure Video Indexer API](https://api-portal.videoindexer.ai/).
This article shows how the developers can take advantage of the [Azure Video Ind
Select the [Products](https://api-portal.videoindexer.ai/products) tab. Then, select Authorization and subscribe.
- ![Products tab in Video Indexer Developer Portal](./media/video-indexer-use-apis/authorization.png)
+ ![Products tab in the Video Indexer developer portal](./media/video-indexer-use-apis/authorization.png)
> [!NOTE] > New users are automatically subscribed to Authorization.
Access tokens expire after 1 hour. Make sure your access token is valid before u
You're ready to start integrating with the API. Find [the detailed description of each Azure Video Indexer REST API](https://api-portal.videoindexer.ai/).
-## Account ID
+## Recommendations
+
+This section lists some recommendations when using Azure Video Indexer API.
+
+- If you're planning to upload a video, it's recommended to place the file in some public network location (for example, an Azure Blob Storage account). Get the link to the video and provide the URL as the upload file param.
+
+ The URL provided to Azure Video Indexer must point to a media (audio or video) file. An easy verification for the URL (or SAS URL) is to paste it into a browser; if the file starts playing or downloading, it's likely a good URL. If the browser is rendering some visualization, it's likely not a link to a file but to an HTML page.
+- When you call the API that gets video insights for the specified video, you get a detailed JSON output as the response content. [See details about the returned JSON in this topic](video-indexer-output-json-v2.md).
+- The JSON output produced by the API contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
+- We do not recommend that you use data directly from the artifacts folder for production purposes. Artifacts are intermediate outputs of the indexing process. They are essentially raw outputs of the various AI engines that analyze the videos; the artifacts schema may change over time.
+
+ It is recommended that you use the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, as described in [Get insights and artifacts produced by the API](video-indexer-output-json-v2.md#get-insights-produced-by-the-api) and **not** [Get-Video-Artifact-Download-Url](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url).
+
+## Operational API calls
The Account ID parameter is required in all operational API calls. Account ID is a GUID that can be obtained in one of the following ways:
The Account ID parameter is required in all operational API calls. Account ID is
https://www.videoindexer.ai/accounts/00000000-f324-4385-b142-f77dacb0a368/videos/d45bf160b5/ ```
-## Recommendations
-
-This section lists some recommendations when using Azure Video Indexer API.
--- If you're planning to upload a video, it's recommended to place the file in some public network location (for example, an Azure Blob Storage account). Get the link to the video and provide the URL as the upload file param.-
- The URL provided to Azure Video Indexer must point to a media (audio or video) file. An easy verification for the URL (or SAS URL) is to paste it into a browser, if the file starts playing/downloading, it's likely a good URL. If the browser is rendering some visualization, it's likely not a link to a file but to an HTML page.
--- When you call the API that gets video insights for the specified video, you get a detailed JSON output as the response content. [See details about the returned JSON in this topic](video-indexer-output-json-v2.md).-

## Code sample

The following C# code snippet demonstrates the usage of all the Azure Video Indexer APIs together.
Debug.WriteLine(playerWidgetLink);
After you are done with this tutorial, delete resources that you are not planning to use.
-## Considerations
-
-* The JSON output produced by the API contains `Insights` and `SummarizedInsights` elements. We highly recommend using `Insights` and not using `SummarizedInsights` (which is present for backward compatibility).
-* We do not recommend that you use data directly from the artifacts folder for production purposes. Artifacts are intermediate outputs of the indexing process. They are essentially raw outputs of the various AI engines that analyze the videos; the artifacts schema may change over time.
-
- It is recommended that you use the [Get Video Index](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Index) API, as described in [Get insights and artifacts produced by the API](video-indexer-output-json-v2.md#get-insights-produced-by-the-api) and **not** [Get-Video-Artifact-Download-Url](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Get-Video-Artifact-Download-Url).
- ## See also - [Azure Video Indexer overview](video-indexer-overview.md)
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md
The CloudAdmin role in Azure VMware Solution has the following privileges on vCe
| Privilege | Description |
| -- | -- |
| **Alarms** | Acknowledge alarm<br />Create alarm<br />Disable alarm action<br />Modify alarm<br />Remove alarm<br />Set alarm status |
-| **Content Library** | Add library item<br />Create a subscription for a published library<br />Create local library<br />Create subscribed library<br />Delete library item<br />Delete local library<br />Delete subscribed library<br />Delete subscription of a published library<br />Download files<br />Evict library items<br />Evict subscribed library<br />Import storage<br />Probe subscription information<br />Publish a library item to its subscribers<br />Publish a library to its subscribers<br />Read storage<br />Sync library item<br />Sync subscribed library<br />Type introspection<br />Update configuration settings<br />Update files<br />Update library<br />Update library item<br />Update local library<br />Update subscribed library<br />Update subscription of a published library<br />View configuration settings |
+| **Content Library** | Add library item<br />Add root certificate to trust store<br />Check in a template<br />Check out a template<br />Create a subscription for a published library<br />Create local library<br />Create or delete a Harbor registry<br />Create subscribed library<br />Create, delete or purge a Harbor registry project<br />Delete library item<br />Delete local library<br />Delete root certificate from trust store<br />Delete subscribed library<br />Delete subscription of a published library<br />Download files<br />Evict library items<br />Evict subscribed library<br />Import storage<br />Manage Harbor registry resources on specified compute resource<br />Probe subscription information<br />Publish a library item to its subscribers<br />Publish a library to its subscribers<br />Read storage<br />Sync library item<br />Sync subscribed library<br />Type introspection<br />Update configuration settings<br />Update files<br />Update library<br />Update library item<br />Update local library<br />Update subscribed library<br />Update subscription of a published library<br />View configuration settings |
| **Cryptographic operations** | Direct access |
| **Datastore** | Allocate space<br />Browse datastore<br />Configure datastore<br />Low-level file operations<br />Remove files<br />Update virtual machine metadata |
| **Folder** | Create folder<br />Delete folder<br />Move folder<br />Rename folder |
The CloudAdmin role in Azure VMware Solution has the following privileges on vCe
### Create custom roles on vCenter Server
-Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role.
+Azure VMware Solution supports the use of custom roles with equal or lesser privileges than the CloudAdmin role. You'll use the CloudAdmin role to create, modify, or delete custom roles with privileges lesser than or equal to their current role.
-You'll use the CloudAdmin role to create, modify, or delete custom roles with privileges lesser than or equal to their current role. You can create roles with privileges greater than CloudAdmin. You can't assign the role to any users or groups or delete the role.
+ >[!NOTE]
+ >You can create roles with privileges greater than CloudAdmin. However, you can't assign the role to any users or groups, or delete the role. Roles that have privileges greater than CloudAdmin are unsupported.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmin role as the basis for creating new custom roles.
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
1. Provide the name you want for the cloned role.
-1. Add or remove privileges for the role and select **OK**. The cloned role is visible in the **Roles** list.
+1. Remove privileges for the role and select **OK**. The cloned role is visible in the **Roles** list.
#### Apply a custom role
To prevent creating roles that can't be assigned or deleted, clone the CloudAdmi
1. Search for the user or group after selecting the Identity Source under the **User** section. 1. Select the role that you want to apply to the user or group.
+ >[!NOTE]
+ >Attempting to apply a user or group to a role that has privileges greater than that of CloudAdmin will result in errors.
1. Check the **Propagate to children** if needed, and select **OK**. The added permission displays in the **Permissions** section.
+
## NSX-T Manager access and identity

When a private cloud is provisioned using Azure portal, software-defined data center (SDDC) management components like vCenter Server and NSX-T Manager are provisioned for customers.
cognitive-services Rest Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-text-to-speech.md
Title: Text-to-speech API reference (REST) - Speech service
-description: Learn how to use the REST API to convert text into synthesized speech.
+description: Learn how to use the REST API to convert text into synthesized speech.
This response has been truncated to illustrate the structure of a response.
], "Status": "Preview" },
-
+ ...
-
+ { "Name": "Microsoft Server Speech Text to Speech Voice (ga-IE, OrlaNeural)", "DisplayName": "Orla",
If the HTTP status is `200 OK`, the body of the response contains an audio file
The following table lists the supported audio formats. You send one of these values in each request as the `X-Microsoft-OutputFormat` header. Each format incorporates a bit rate and encoding type. The Speech service supports 48-kHz, 24-kHz, 16-kHz, and 8-kHz audio outputs. Prebuilt neural voices are created from samples that use a 24-kHz sample rate. All voices can upsample or downsample to other sample rates when synthesizing. A minimal request sketch follows the table and note below.
-|Streaming |Non-Streaming |
-|-|-|
-|audio-16khz-16bit-32kbps-mono-opus|riff-8khz-8bit-mono-alaw |
-|audio-16khz-32kbitrate-mono-mp3 |riff-8khz-8bit-mono-mulaw|
-|audio-16khz-64kbitrate-mono-mp3 |riff-8khz-16bit-mono-pcm |
-|audio-16khz-128kbitrate-mono-mp3 |riff-24khz-16bit-mono-pcm|
-|audio-24khz-16bit-24kbps-mono-opus|riff-48khz-16bit-mono-pcm|
-|audio-24khz-16bit-48kbps-mono-opus| |
-|audio-24khz-48kbitrate-mono-mp3 | |
-|audio-24khz-96kbitrate-mono-mp3 | |
-|audio-24khz-160kbitrate-mono-mp3 | |
-|audio-48khz-96kbitrate-mono-mp3 | |
-|audio-48khz-192kbitrate-mono-mp3 | |
-|ogg-16khz-16bit-mono-opus | |
-|ogg-24khz-16bit-mono-opus | |
-|ogg-48khz-16bit-mono-opus | |
-|raw-8khz-8bit-mono-alaw | |
-|raw-8khz-8bit-mono-mulaw | |
-|raw-8khz-16bit-mono-pcm | |
-|raw-16khz-16bit-mono-pcm | |
-|raw-16khz-16bit-mono-truesilk | |
-|raw-24khz-16bit-mono-pcm | |
-|raw-24khz-16bit-mono-truesilk | |
-|raw-48khz-16bit-mono-pcm | |
-|webm-16khz-16bit-mono-opus | |
-|webm-24khz-16bit-24kbps-mono-opus | |
-|webm-24khz-16bit-mono-opus | |
+| Streaming | Non-Streaming |
+| - | |
+| audio-16khz-16bit-32kbps-mono-opus | riff-8khz-8bit-mono-alaw |
+| audio-16khz-32kbitrate-mono-mp3 | riff-8khz-8bit-mono-mulaw |
+| audio-16khz-64kbitrate-mono-mp3 | riff-8khz-16bit-mono-pcm |
+| audio-16khz-128kbitrate-mono-mp3 | riff-22050hz-16bit-mono-pcm |
+| audio-24khz-16bit-24kbps-mono-opus | riff-24khz-16bit-mono-pcm |
+| audio-24khz-16bit-48kbps-mono-opus | riff-44100hz-16bit-mono-pcm |
+| audio-24khz-48kbitrate-mono-mp3 | riff-48khz-16bit-mono-pcm |
+| audio-24khz-96kbitrate-mono-mp3 | |
+| audio-24khz-160kbitrate-mono-mp3 | |
+| audio-48khz-96kbitrate-mono-mp3 | |
+| audio-48khz-192kbitrate-mono-mp3 | |
+| ogg-16khz-16bit-mono-opus | |
+| ogg-24khz-16bit-mono-opus | |
+| ogg-48khz-16bit-mono-opus | |
+| raw-8khz-8bit-mono-alaw | |
+| raw-8khz-8bit-mono-mulaw | |
+| raw-8khz-16bit-mono-pcm | |
+| raw-16khz-16bit-mono-pcm | |
+| raw-16khz-16bit-mono-truesilk | |
+| raw-22050hz-16bit-mono-pcm | |
+| raw-24khz-16bit-mono-pcm | |
+| raw-24khz-16bit-mono-truesilk | |
+| raw-44100hz-16bit-mono-pcm | |
+| raw-48khz-16bit-mono-pcm | |
+| webm-16khz-16bit-mono-opus | |
+| webm-24khz-16bit-24kbps-mono-opus | |
+| webm-24khz-16bit-mono-opus | |
> [!NOTE]
> en-US-AriaNeural, en-US-JennyNeural, and zh-CN-XiaoxiaoNeural are available in public preview with 48-kHz output. Other voices support 24-kHz output upsampled to 48 kHz.
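
The following minimal Python sketch illustrates such a request, assuming a Speech resource key and region stored in environment variables (`SPEECH_KEY` and `SPEECH_REGION` are placeholder names) and the `requests` package. It asks for one of the streaming MP3 formats from the table above and writes the returned audio to a file.

```python
import os
import requests

# Placeholder environment variables holding your Speech resource key and region
key = os.environ["SPEECH_KEY"]
region = os.environ["SPEECH_REGION"]  # for example, "westus"

url = f"https://{region}.tts.speech.microsoft.com/cognitiveservices/v1"
headers = {
    "Ocp-Apim-Subscription-Key": key,
    "Content-Type": "application/ssml+xml",
    # Any supported value from the table above
    "X-Microsoft-OutputFormat": "audio-24khz-48kbitrate-mono-mp3",
    "User-Agent": "tts-rest-sample",
}
ssml = (
    "<speak version='1.0' xml:lang='en-US'>"
    "<voice name='en-US-JennyNeural'>Hello, world.</voice>"
    "</speak>"
)

response = requests.post(url, headers=headers, data=ssml.encode("utf-8"))
response.raise_for_status()

# The response body is the audio in the requested format
with open("output.mp3", "wb") as audio_file:
    audio_file.write(response.content)
```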
cognitive-services Cognitive Services Environment Variables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-environment-variables.md
+
+ Title: Use environment variables with Cognitive Services
+
+description: "This guide shows you how to set and retrieve environment variables to handle your Cognitive Services subscription credentials in a more secure way when you test out applications."
+++++ Last updated : 08/15/2022+++
+# Use environment variables with Cognitive Services
+
+This guide shows you how to set and retrieve environment variables to handle your Cognitive Services subscription credentials in a more secure way when you test out applications.
+
+## Set an environment variable
+
+To set environment variables, use one of the following commands, where `ENVIRONMENT_VARIABLE_KEY` is the named key and `value` is the value stored in the environment variable.
+
+# [Command Line](#tab/command-line)
+
+Use the following command to create and assign a persisted environment variable, given the input value.
+
+```CMD
+:: Assigns the env var to the value
+setx ENVIRONMENT_VARIABLE_KEY "value"
+```
+
+In a new instance of the Command Prompt, use the following command to read the environment variable.
+
+```CMD
+:: Prints the env var value
+echo %ENVIRONMENT_VARIABLE_KEY%
+```
+
+# [PowerShell](#tab/powershell)
+
+Use the following command to create and assign a persisted environment variable, given the input value.
+
+```powershell
+# Assigns the env var to the value
+[System.Environment]::SetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY', 'value', 'User')
+```
+
+In a new instance of Windows PowerShell, use the following command to read the environment variable.
+
+```powershell
+# Prints the env var value
+[System.Environment]::GetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY')
+```
+
+# [Bash](#tab/bash)
+
+Use the following command to create and assign a persisted environment variable, given the input value.
+
+```Bash
+# Assigns the env var to the value
+echo export ENVIRONMENT_VARIABLE_KEY="value" >> /etc/environment && source /etc/environment
+```
+
+In a new instance of Bash, use the following command to read the environment variable.
+
+```Bash
+# Prints the env var value
+echo "${ENVIRONMENT_VARIABLE_KEY}"
+
+# Or use printenv:
+# printenv ENVIRONMENT_VARIABLE_KEY
+```
+++
+> [!TIP]
+> After you set an environment variable, restart your integrated development environment (IDE) to ensure that the newly added environment variables are available.
+
+## Retrieve an environment variable
+
+To use an environment variable in your code, it must be read into memory. Use one of the following code snippets, depending on which language you're using. These code snippets demonstrate how to get an environment variable given the `ENVIRONMENT_VARIABLE_KEY` and assign the value to a program variable named `value`.
+
+# [C#](#tab/csharp)
+
+For more information, see <a href="/dotnet/api/system.environment.getenvironmentvariable" target="_blank">`Environment.GetEnvironmentVariable` </a>.
+
+```csharp
+using static System.Environment;
+
+class Program
+{
+ static void Main()
+ {
+ // Get the named env var, and assign it to the value variable
+ var value =
+ GetEnvironmentVariable(
+ "ENVIRONMENT_VARIABLE_KEY");
+ }
+}
+```
+
+# [C++](#tab/cpp)
+
+For more information, see <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv` </a>.
+
+```cpp
+#include <stdlib.h>
+
+int main()
+{
+ // Get the named env var, and assign it to the value variable
+ auto value =
+ getenv("ENVIRONMENT_VARIABLE_KEY");
+}
+```
+
+# [Java](#tab/java)
+
+For more information, see <a href="https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#getenv(java.lang.String)" target="_blank">`System.getenv` </a>.
+
+```java
+import java.lang.*;
+
+public class Program {
+ public static void main(String[] args) throws Exception {
+ // Get the named env var, and assign it to the value variable
+ String value =
+ System.getenv(
+                "ENVIRONMENT_VARIABLE_KEY");
+ }
+}
+```
+
+# [Node.js](#tab/node-js)
+
+For more information, see <a href="https://nodejs.org/api/process.html#process_process_env" target="_blank">`process.env` </a>.
+
+```javascript
+// Get the named env var, and assign it to the value variable
+const value =
+ process.env.ENVIRONMENT_VARIABLE_KEY;
+```
+
+# [Python](#tab/python)
+
+For more information, see <a href="https://docs.python.org/2/library/os.html#os.environ" target="_blank">`os.environ` </a>.
+
+```python
+import os
+
+# Get the named env var, and assign it to the value variable
+value = os.environ['ENVIRONMENT_VARIABLE_KEY']
+```
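+
+If the variable might not be set, `os.environ['ENVIRONMENT_VARIABLE_KEY']` raises a `KeyError`. As a minimal alternative sketch, `os.getenv` returns `None`, or a default that you supply, when the key is missing (the default string below is only an illustration):
+
+```python
+import os
+
+# Returns None, or the supplied default, when the key isn't present
+value = os.getenv('ENVIRONMENT_VARIABLE_KEY', 'default-value')
+```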
+
+# [Objective-C](#tab/objective-c)
+
+For more information, see <a href="https://developer.apple.com/documentation/foundation/nsprocessinfo/1417911-environment?language=objc" target="_blank">`environment` </a>.
+
+```objectivec
+// Get the named env var, and assign it to the value variable
+NSString* value =
+ [[[NSProcessInfo processInfo]environment]objectForKey:@"ENVIRONMENT_VARIABLE_KEY"];
+```
+++
+## Next steps
+
+* Explore [Cognitive Services](./what-are-cognitive-services.md) and choose a service to get started.
cognitive-services Cognitive Services Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/cognitive-services-security.md
- Title: Azure Cognitive Services security-
-description: Learn about the various security considerations for Cognitive Services usage.
----- Previously updated : 08/28/2020----
-# Azure Cognitive Services security
-
-Security should be considered a top priority when developing any and all applications. With the onset of artificial intelligence enabled applications, security is even more important. In this article various aspects of Azure Cognitive Services security are outlined, such as the use of transport layer security, authentication, securely configuring sensitive data, and Customer Lockbox for customer data access.
-
-## Transport Layer Security (TLS)
-
-All of the Cognitive Services endpoints exposed over HTTP enforce TLS 1.2. With an enforced security protocol, consumers attempting to call a Cognitive Services endpoint should adhere to these guidelines:
-
-* The client Operating System (OS) needs to support TLS 1.2
-* The language (and platform) used to make the HTTP call need to specify TLS 1.2 as part of the request
- * Depending on the language and platform, specifying TLS is done either implicitly or explicitly
-
-For .NET users, consider the <a href="/dotnet/framework/network-programming/tls" target="_blank">Transport Layer Security best practices </a>.
-
-## Authentication
-
-When discussing authentication, there are several common misconceptions. Authentication and authorization are often confused for one another. Identity is also a major component in security. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal </a>. Identity providers (IdP) provide identities to authentication services. Authentication is the act of verifying a user's identity. Authorization is the specification of access rights and privileges to resources for a given identity. Several of the Cognitive Services offerings, include Azure role-based access control (Azure RBAC). Azure RBAC could be used to simplify some of the ceremony involved with manually managing principals. For more details, see [Azure role-based access control for Azure resources](../role-based-access-control/overview.md).
-
-For more information on authentication with subscription keys, access tokens and Azure Active Directory (AAD), see <a href="/azure/cognitive-services/authentication" target="_blank">authenticate requests to Azure Cognitive Services</a>.
-
-## Environment variables and application configuration
-
-Environment variables are name-value pairs, stored within a specific environment. A more secure alternative to using hardcoded values for sensitive data, is to use environment variables. Hardcoded values are insecure and should be avoided.
-
-> [!CAUTION]
-> Do **not** use hardcoded values for sensitive data, doing so is a major security vulnerability.
-
-> [!NOTE]
-> While environment variables are stored in plain text, they are isolated to an environment. If an environment is compromised, so too are the variables with the environment.
-
-### Set environment variable
-
-To set environment variables, use one the following commands - where the `ENVIRONMENT_VARIABLE_KEY` is the named key and `value` is the value stored in the environment variable.
-
-# [Command Line](#tab/command-line)
-
-Create and assign persisted environment variable, given the value.
-
-```CMD
-:: Assigns the env var to the value
-setx ENVIRONMENT_VARIABLE_KEY="value"
-```
-
-In a new instance of the **Command Prompt**, read the environment variable.
-
-```CMD
-:: Prints the env var value
-echo %ENVIRONMENT_VARIABLE_KEY%
-```
-
-# [PowerShell](#tab/powershell)
-
-Create and assign persisted environment variable, given the value.
-
-```powershell
-# Assigns the env var to the value
-[System.Environment]::SetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY', 'value', 'User')
-```
-
-In a new instance of the **Windows PowerShell**, read the environment variable.
-
-```powershell
-# Prints the env var value
-[System.Environment]::GetEnvironmentVariable('ENVIRONMENT_VARIABLE_KEY')
-```
-
-# [Bash](#tab/bash)
-
-Create and assign persisted environment variable, given the value.
-
-```Bash
-# Assigns the env var to the value
-echo export ENVIRONMENT_VARIABLE_KEY="value" >> /etc/environment && source /etc/environment
-```
-
-In a new instance of the **Bash**, read the environment variable.
-
-```Bash
-# Prints the env var value
-echo "${ENVIRONMENT_VARIABLE_KEY}"
-
-# Or use printenv:
-# printenv ENVIRONMENT_VARIABLE_KEY
-```
---
-> [!TIP]
-> After setting an environment variable, restart your integrated development environment (IDE) to ensure that newly added environment variables are available.
-
-### Get environment variable
-
-To get an environment variable, it must be read into memory. Depending on the language you're using, consider the following code snippets. These code snippets demonstrate how to get environment variable given the `ENVIRONMENT_VARIABLE_KEY` and assign to a variable named `value`.
-
-# [C#](#tab/csharp)
-
-For more information, see <a href="/dotnet/api/system.environment.getenvironmentvariable" target="_blank">`Environment.GetEnvironmentVariable` </a>.
-
-```csharp
-using static System.Environment;
-
-class Program
-{
- static void Main()
- {
- // Get the named env var, and assign it to the value variable
- var value =
- GetEnvironmentVariable(
- "ENVIRONMENT_VARIABLE_KEY");
- }
-}
-```
-
-# [C++](#tab/cpp)
-
-For more information, see <a href="/cpp/c-runtime-library/reference/getenv-wgetenv" target="_blank">`getenv` </a>.
-
-```cpp
-#include <stdlib.h>
-
-int main()
-{
- // Get the named env var, and assign it to the value variable
- auto value =
- getenv("ENVIRONMENT_VARIABLE_KEY");
-}
-```
-
-# [Java](#tab/java)
-
-For more information, see <a href="https://docs.oracle.com/javase/7/docs/api/java/lang/System.html#getenv(java.lang.String)" target="_blank">`System.getenv` </a>.
-
-```java
-import java.lang.*;
-
-public class Program {
- public static void main(String[] args) throws Exception {
- // Get the named env var, and assign it to the value variable
- String value =
- System.getenv(
- "ENVIRONMENT_VARIABLE_KEY")
- }
-}
-```
-
-# [Node.js](#tab/node-js)
-
-For more information, see <a href="https://nodejs.org/api/process.html#process_process_env" target="_blank">`process.env` </a>.
-
-```javascript
-// Get the named env var, and assign it to the value variable
-const value =
- process.env.ENVIRONMENT_VARIABLE_KEY;
-```
-
-# [Python](#tab/python)
-
-For more information, see <a href="https://docs.python.org/2/library/os.html#os.environ" target="_blank">`os.environ` </a>.
-
-```python
-import os
-
-# Get the named env var, and assign it to the value variable
-value = os.environ['ENVIRONMENT_VARIABLE_KEY']
-```
-
-# [Objective-C](#tab/objective-c)
-
-For more information, see <a href="https://developer.apple.com/documentation/foundation/nsprocessinfo/1417911-environment?language=objc" target="_blank">`environment` </a>.
-
-```objectivec
-// Get the named env var, and assign it to the value variable
-NSString* value =
- [[[NSProcessInfo processInfo]environment]objectForKey:@"ENVIRONMENT_VARIABLE_KEY"];
-```
---
-## Customer Lockbox
-
-[Customer Lockbox for Microsoft Azure](../security/fundamentals/customer-lockbox-overview.md) provides an interface for customers to review, and approve or reject customer data access requests. It is used in cases where a Microsoft engineer needs to access customer data during a support request. For information on how Customer Lockbox requests are initiated, tracked, and stored for later reviews and audits, see [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md).
-
-Customer Lockbox is available for this service:
-
-* Translator
-* Conversational language understanding
-* Custom text classification
-* Custom named entity recognition
-* Orchestration workflow
-
-For the following services, Microsoft engineers will not access any customer data in the E0 tier:
-
-* Language Understanding
-* Face
-* Content Moderator
-* Personalizer
-
-To request the ability to use the E0 SKU, fill out and submit thisΓÇ»[request Form](https://aka.ms/cogsvc-cmk). It will take approximately 3-5 business days to hear back on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once approved for using the E0 SKU with LUIS, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. Users won't be able to upgrade from the F0 to the new E0 SKU.
-
-The Speech service doesn't currently support Customer Lockbox. However, customer data can be stored using bring your own storage (BYOS), allowing you to achieve similar data controls to Customer Lockbox. Keep in mind that Speech service data stays and is processed in the region where the Speech resource was created. This applies to any data at rest and data in transit. When using customization features, like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where your BYOS (if used) and Speech service resource reside.
-
-> [!IMPORTANT]
-> Microsoft **does not** use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored.
-
-## Next steps
-
-* Explore the various [Cognitive Services](./what-are-cognitive-services.md)
-* Learn more about [Cognitive Services Virtual Networks](cognitive-services-virtual-networks.md)
cognitive-services Security Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/security-features.md
+
+ Title: Azure Cognitive Services security
+
+description: Learn about the security considerations for Cognitive Services usage.
+++++ Last updated : 08/09/2022++++
+# Azure Cognitive Services security
+
+Security should be considered a top priority in the development of all applications, and with the growth of artificial intelligence enabled applications, security is even more important. This article outlines various security features available for Azure Cognitive Services. Each feature addresses a specific liability, so multiple features can be used in the same workflow.
+
+For a comprehensive list of Azure service security recommendations, see the [Cognitive Services security baseline](/security/benchmark/azure/baselines/cognitive-services-security-baseline?toc=%2Fazure%2Fcognitive-services%2FTOC.json) article.
+
+## Security features
+
+|Feature | Description |
+|:|:|
+| [Transport Layer Security (TLS)](/dotnet/framework/network-programming/tls) | All of the Cognitive Services endpoints exposed over HTTP enforce the TLS 1.2 protocol. With an enforced security protocol, consumers attempting to call a Cognitive Services endpoint should follow these guidelines: </br>- The client operating system (OS) needs to support TLS 1.2.</br>- The language (and platform) used to make the HTTP call need to specify TLS 1.2 as part of the request. Depending on the language and platform, specifying TLS is done either implicitly or explicitly.</br>- For .NET users, consider the [Transport Layer Security best practices](/dotnet/framework/network-programming/tls). |
+| [Authentication options](./authentication.md)| Authentication is the act of verifying a user's identity. Authorization, by contrast, is the specification of access rights and privileges to resources for a given identity. An identity is a collection of information about a <a href="https://en.wikipedia.org/wiki/Principal_(computer_security)" target="_blank">principal</a>, and a principal can be either an individual user or a service.</br></br>By default, you authenticate your own calls to Cognitive Services using the subscription keys provided; this is the simplest method but not the most secure. The most secure authentication method is to use managed roles in Azure Active Directory. To learn about this and other authentication options, see [Authenticate requests to Cognitive Services](/azure/cognitive-services/authentication). A brief usage sketch follows this table. |
+| [Environment variables](cognitive-services-environment-variables.md) | Environment variables are name-value pairs that are stored within a specific development environment. You can store your credentials in this way as a more secure alternative to using hardcoded values in your code. However, if your environment is compromised, the environment variables are compromised as well, so this is not the most secure approach.</br></br> For instructions on how to use environment variables in your code, see the [Environment variables guide](cognitive-services-environment-variables.md). |
+| [Customer-managed keys (CMK)](./encryption/cognitive-services-encryption-keys-portal.md) | This feature is for services that store customer data at rest (longer than 48 hours). While this data is already double-encrypted on Azure servers, users can get extra security by adding another layer of encryption, with keys they manage themselves. You can link your service to Azure Key Vault and manage your data encryption keys there. </br></br>You need special approval to get the E0 SKU for your service, which enables CMK. Within 3-5 business days after you submit the [request form](https://aka.ms/cogsvc-cmk), you'll get an update on the status of your request. Depending on demand, you may be placed in a queue and approved as space becomes available. Once you're approved for using the E0 SKU, you'll need to create a new resource from the Azure portal and select E0 as the Pricing Tier. You won't be able to upgrade from F0 to the new E0 SKU. </br></br>Only some services can use CMK; look for your service on the [Customer-managed keys](./encryption/cognitive-services-encryption-keys-portal.md) page.|
+| [Virtual networks](./cognitive-services-virtual-networks.md) | Virtual networks allow you to specify which endpoints can make API calls to your resource. The Azure service will reject API calls from devices outside of your network. You can set a formula-based definition of the allowed network, or you can define an exhaustive list of endpoints to allow. This is another layer of security that can be used in combination with others. |
+| [Data loss prevention](./cognitive-services-data-loss-prevention.md) | The data loss prevention feature lets an administrator decide what types of URIs their Azure resource can take as inputs (for those API calls that take URIs as input). This can be done to prevent the possible exfiltration of sensitive company data: If a company stores sensitive information (such as a customer's private data) in URL parameters, a bad actor inside that company could submit the sensitive URLs to an Azure service, which surfaces that data outside the company. Data loss prevention lets you configure the service to reject certain URI forms on arrival.|
+| [Customer Lockbox](../security/fundamentals/customer-lockbox-overview.md) |The Customer Lockbox feature provides an interface for customers to review and approve or reject data access requests. It's used in cases where a Microsoft engineer needs to access customer data during a support request. For information on how Customer Lockbox requests are initiated, tracked, and stored for later reviews and audits, see the [Customer Lockbox guide](../security/fundamentals/customer-lockbox-overview.md).</br></br>Customer Lockbox is available for the following services: Translator, Conversational language understanding, Custom text classification, Custom named entity recognition, and Orchestration workflow. |
+| [Bring your own storage (BYOS)](/azure/cognitive-services/speech-service/speech-encryption-of-data-at-rest)| The Speech service doesn't currently support Customer Lockbox. However, you can arrange for your service-specific data to be stored in your own storage resource using bring-your-own-storage (BYOS). BYOS allows you to achieve similar data controls to Customer Lockbox. Keep in mind that Speech service data stays and is processed in the Azure region where the Speech resource was created. This applies to any data at rest and data in transit. For customization features like Custom Speech and Custom Voice, all customer data is transferred, stored, and processed in the same region where the Speech service resource and BYOS resource (if used) reside. </br></br>To use BYOS with Speech, follow the [Speech encryption of data at rest](/azure/cognitive-services/speech-service/speech-encryption-of-data-at-rest) guide.</br></br> Microsoft does not use customer data to improve its Speech models. Additionally, if endpoint logging is disabled and no customizations are used, then no customer data is stored by Speech. |
+
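+Several of these features typically work together in client code: credentials come from environment variables rather than hardcoded values, and requests are authenticated with a key or an Azure Active Directory token over TLS 1.2. The following minimal Python sketch illustrates the pattern, assuming a Language resource, the `azure-ai-textanalytics` package, and placeholder environment variable names (`LANGUAGE_ENDPOINT` and `LANGUAGE_KEY`).
+
+```python
+import os
+
+from azure.core.credentials import AzureKeyCredential
+from azure.ai.textanalytics import TextAnalyticsClient
+
+# Placeholder environment variables that hold the resource endpoint and key
+endpoint = os.environ['LANGUAGE_ENDPOINT']
+key = os.environ['LANGUAGE_KEY']
+
+# Key-based authentication; for Azure AD, pass a token credential such as
+# azure.identity.DefaultAzureCredential instead of AzureKeyCredential.
+client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
+
+# The request is sent over HTTPS; the service enforces TLS 1.2 as described above.
+result = client.detect_language(documents=['Security should be considered a top priority.'])
+print(result[0].primary_language.name)
+```
+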
+## Next steps
+
+* Explore [Cognitive Services](./what-are-cognitive-services.md) and choose a service to get started.
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
-# Capabilities for Teams external users
+# Teams meeting capabilities for Teams external users
-In this article, you will learn which capabilities are supported for Teams external users using Azure Communication Services SDKs.
+In this article, you will learn which capabilities are supported for Teams external users using Azure Communication Services SDKs in Teams meetings. You can find per platform availability in [voice and video calling capabilities](../../voice-video-calling/calling-sdk-features.md).
-## Client capabilities
-The following table shows supported client-side capabilities available in Azure Communication Services SDKs. You can find per platform availability in [voice and video calling capabilities](../../voice-video-calling/calling-sdk-features.md).
-| Category | Capability | Supported |
-| | | |
-|Chat | Send and receive chat messages | ✔️ |
-| | Send and receive Giphy | ❌ |
-| | Send messages with high priority | ❌ |
-| | Recieve messages with high priority | ✔️ |
-| | Send and receive Loop components | ❌ |
-| | Send and receive Emojis | ❌ |
-| | Send and receive Stickers | ❌ |
-| | Send and receive Stickers | ❌ |
-| | Send and receive Teams messaging extensions | ❌ |
-| | Use typing indicators | ✔️ |
-| | Read receipt | ❌ |
-| | File sharing | ❌ |
-| | Reply to chat message | ❌ |
-| | React to chat message | ❌ |
-|Calling - core | Audio send and receive | ✔️ |
-| | Send and receive video | ✔️ |
-| | Share screen and see shared screen | ✔️ |
-| | Manage Teams convenient recording | ❌ |
-| | Manage Teams transcription | ❌ |
-| | Manage breakout rooms | ❌ |
-| | Participation in breakout rooms | ❌ |
-| | Leave meeting | ✔️ |
-| | End meeting | ❌ |
-| | Change meeting options | ❌ |
-| | Lock meeting | ❌ |
-| Calling - participants| See roster | ✔️ |
-| | Add and remove meeting participants | ❌ |
-| | Dial out to phone number | ❌ |
-| | Disable mic or camera of others | ❌ |
-| | Make a participant and attendee or presenter | ❌ |
-| | Admit or reject participants in the lobby | ❌ |
-| Calling - engagement | Raise and lower hand | ❌ |
-| | See raised and lowered hand | ❌ |
-| | See and set reactions | ❌ |
-| Calling - video streams | Send and receive video | ✔️ |
-| | See together mode video stream | ❌ |
-| | See Large gallery view | ❌ |
-| | See Video stream from Teams media bot | ❌ |
-| | See adjusted content from Camera | ❌ |
-| | Set and unset spotlight | ❌ |
-| | Apply background effects | ❌ |
-| Calling - integrations | Control Teams third-party applications | ❌ |
-| | See PowerPoint Live stream | ❌ |
-| | See Whiteboard stream | ❌ |
-| | Interact with a poll | ❌ |
-| | Interact with a Q&A | ❌ |
-| | Interact with a OneNote | ❌ |
-| | Manage SpeakerCoach | ❌ |
-| Accessibility | Receive closed captions | ❌ |
-| | Communication access real-time translation (CART) | ❌ |
-| | Language interpretation | ❌ |
+| Group of features | Capability | JavaScript |
+| -- | - | - |
+| Core Capabilities | Join Teams meeting | ✔️ |
+| | Leave meeting | ✔️ |
+| | End meeting for everyone | ✔️ |
+| | Change meeting options | ❌ |
+| | Lock & unlock meeting | ❌ |
+| | Prevent joining locked meeting | ✔️ |
+| | Honor assigned Teams meeting role | ✔️ |
+| Chat | Send and receive chat messages | ✔️ |
+| | Send and receive Giphy | ❌ |
+| | Send messages with high priority | ❌ |
+| | Receive messages with high priority | ✔️ |
+| | Send and receive Loop components | ❌ |
+| | Send and receive Emojis | ❌ |
+| | Send and receive Stickers | ❌ |
+| | Send and receive Teams messaging extensions | ❌ |
+| | Use typing indicators | ✔️ |
+| | Read receipt | ❌ |
+| | File sharing | ❌ |
+| | Reply to chat message | ❌ |
+| | React to chat message | ❌ |
+| Mid call control | Turn your video on/off | ✔️ |
+| | Mute/Unmute mic | ✔️ |
+| | Switch between cameras | ✔️ |
+| | Local hold/un-hold | ✔️ |
+| | Indicator of dominant speakers in the call | ✔️ |
+| | Choose speaker device for calls | ✔️ |
+| | Choose microphone for calls | ✔️ |
+| | Indicator of participant's state<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ |
+| | Indicator of call's state <br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ |
+| | Indicate participants being muted | ✔️ |
+| | Indicate participants' reasons for terminating the call | ✔️ |
+| Screen sharing | Share the entire screen from within the application | ✔️ |
+| | Share a specific application (from the list of running applications) | ✔️ |
+| | Share a web browser tab from the list of open tabs | ✔️ |
+| | Share content in "content-only" mode | ✔️ |
+| | Receive video stream with content for "content-only" screen sharing experience | ✔️ |
+| | Share content in "standout" mode | ❌ |
+| | Receive video stream with content for a "standout" screen sharing experience | ❌ |
+| | Share content in "side-by-side" mode | ❌ |
+| | Receive video stream with content for "side-by-side" screen sharing experience | ❌ |
+| | Share content in "reporter" mode | ❌ |
+| | Receive video stream with content for "reporter" screen sharing experience | ❌ |
+| Roster | List participants | ✔️ |
+| | Add an Azure Communication Services user | ❌ |
+| | Add a Teams user | ✔️ |
+| | Adding Teams user honors Teams external access configuration | ✔️ |
+| | Adding Teams user honors Teams guest access configuration | ✔️ |
+| | Add a phone number | ✔️ |
+| | Remove a participant | ✔️ |
+| | Manage breakout rooms | ❌ |
+| | Participation in breakout rooms | ❌ |
+| | Admit participants in the lobby into the Teams meeting | ❌ |
+| | Be admitted from the lobby into the Teams meeting | ✔️ |
+| | Promote participant to a presenter or attendee | ❌ |
+| | Be promoted to presenter or attendee | ✔️ |
+| | Disable or enable mic for attendees | ❌ |
+| | Honor disabling or enabling a mic as an attendee | ✔️ |
+| | Disable or enable camera for attendees | ❌ |
+| | Honor disabling or enabling a camera as an attendee | ✔️ |
+| | Adding Teams user honors information barriers | ✔️ |
+| Device Management | Ask for permission to use audio and/or video | ✔️ |
+| | Get camera list | ✔️ |
+| | Set camera | ✔️ |
+| | Get selected camera | ✔️ |
+| | Get microphone list | ✔️ |
+| | Set microphone | ✔️ |
+| | Get selected microphone | ✔️ |
+| | Get speakers list | ✔️ |
+| | Set speaker | ✔️ |
+| | Get selected speaker | ✔️ |
+| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ |
+| | Set / update scaling mode | ✔️ |
+| | Render remote video stream | ✔️ |
+| | See together mode video stream | ❌ |
+| | See Large gallery view | ❌ |
+| | Receive video stream from Teams media bot | ❌ |
+| | Receive adjusted stream for "content from Camera" | ❌ |
+| | Add and remove video stream from spotlight | ❌ |
+| | Allow video stream to be selected for spotlight | ❌ |
+| | Apply Teams background effects | ❌ |
+| Recording & transcription | Manage Teams convenient recording | ❌ |
+| | Receive information of call being recorded | ✔️ |
+| | Manage Teams transcription | ❌ |
+| | Receive information of call being transcribed | ✔️ |
+| | Manage Teams closed captions | ❌ |
+| | Support for compliance recording | ✔️ |
+| | [Azure Communication Services recording](../../voice-video-calling/call-recording.md) | ❌ |
+| Engagement | Raise and lower hand | ❌ |
+| | Indicate other participants' raised and lowered hands | ❌ |
+| | Trigger reactions | ❌ |
+| | Indicate other participants' reactions | ❌ |
+| Integrations | Control Teams third-party applications | ❌ |
+| | Receive PowerPoint Live stream | ❌ |
+| | Receive Whiteboard stream | ❌ |
+| | Interact with a poll | ❌ |
+| | Interact with a Q&A | ❌ |
+| | Interact with a OneNote | ❌ |
+| | Manage SpeakerCoach | ❌ |
+| | [Include participant in Teams meeting attendance report](https://support.microsoft.com/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ✔️ |
+| Accessibility | Receive closed captions | ❌ |
+| | Communication access real-time translation (CART) | ❌ |
+| | Language interpretation | ❌ |
+| Advanced call routing | Does meeting dial-out honor forwarding rules | ✔️ |
+| | Read and configure call forwarding rules | ❌ |
+| | Does meeting dial-out honor simultaneous ringing | ✔️ |
+| | Read and configure simultaneous ringing | ❌ |
+| | Does meeting dial-out honor shared line configuration | ✔️ |
+| | Dial-out from meeting on behalf of the Teams user | ❌ |
+| | Read and configure shared line configuration | ❌ |
+| Teams meeting policy | Honor setting "Let anonymous people join a meeting" | ✔️ |
+| | Honor setting "Mode for IP audio" | ❌ |
+| | Honor setting "Mode for IP video" | ❌ |
+| | Honor setting "IP video" | ❌ |
+| | Honor setting "Local broadcasting" | ❌ |
+| | Honor setting "Media bit rate (Kbs)" | ❌ |
+| | Honor setting "Network configuration lookup" | ❌ |
+| | Honor setting "Transcription" | No API available |
+| | Honor setting "Cloud recording" | No API available |
+| | Honor setting "Meetings automatically expire" | ✔️ |
+| | Honor setting "Default expiration time" | ✔️ |
+| | Honor setting "Store recordings outside of your country or region" | ✔️ |
+| | Honor setting "Screen sharing mode" | No API available |
+| | Honor setting "Participants can give or request control" | No API available |
+| | Honor setting "External participants can give or request control" | No API available |
+| | Honor setting "PowerPoint Live" | No API available |
+| | Honor setting "Whiteboard" | No API available |
+| | Honor setting "Shared notes" | No API available |
+| | Honor setting "Select video filters" | ❌ |
+| | Honor setting "Let anonymous people start a meeting" | ✔️ |
+| | Honor setting "Who can present in meetings" | ❌ |
+| | Honor setting "Automatically admit people" | ✔️ |
+| | Honor setting "Dial-in users can bypass the lobby" | ✔️ |
+| | Honor setting "Meet now in private meetings" | ✔️ |
+| | Honor setting "Live captions" | No API available |
+| | Honor setting "Chat in meetings" | ✔️ |
+| | Honor setting "Teams Q&A" | No API available |
+| | Honor setting "Meeting reactions" | No API available |
+| DevOps | [Azure Metrics](../../metrics.md) | ✔️ |
+| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
+| | [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ |
+| | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ |
+| | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
+| | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
When Teams external users leave the meeting, or the meeting ends, they can no longer send or receive new chat messages and no longer have access to messages sent and received during the meeting.
-## Server capabilities
-
-The following table shows supported server-side capabilities available in Azure Communication
-
-|Capability | Supported |
-| | |
-| [Manage ACS call recording](../../voice-video-calling/call-recording.md) | ❌ |
-| [Azure Metrics](../../metrics.md) | ✔️ |
-| [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
-| [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ |
-| [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ |
--
-## Teams capabilities
-
-The following table shows supported Teams capabilities:
-
-|Capability | Supported |
-| | |
-| [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
-| [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
-| [Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ✔️ |
## Next steps
communication-services Teams User Calling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user-calling.md
Title: Azure Communication Services Teams identity overview
+ Title: Calling capabilities for Teams users
-description: Provides an overview of the support for Teams identity in Azure Communication Services Calling SDK.
+description: Provides an overview of supported calling capabilities for Teams users in Azure Communication Services Calling SDK.
-# Support for Teams identity in Calling SDK
+# Calling capabilities supported for Teams users in Calling SDK
The Azure Communication Services Calling SDK for JavaScript enables Teams user devices to drive voice and video communication experiences. This page provides detailed descriptions of Calling features, including platform and browser support information. To get started right away, check out [Calling quickstarts](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
Key features of the Calling SDK:
- **Teams Meetings** - The Calling SDK can [join Teams meetings](../../quickstarts/voice-video-calling/get-started-teams-interop.md) and interact with the Teams voice and video data plane.
- **Notifications** - The Calling SDK provides APIs that allow clients to be notified of an incoming call. In situations where your app is not running in the foreground, patterns are available to [fire pop-up notifications](../notifications.md) ("toasts") to inform users of an incoming call.
-## Detailed Azure Communication Services capabilities
+## Calling capabilities
-The following list presents the set of features that are currently available in the Azure Communication Services Calling SDK for JavaScript.
+The following list presents the set of features that are currently available in the Azure Communication Services Calling SDK for JavaScript when participating in 1:1 voice-over-IP (VoIP) or group VoIP calls.
| Group of features | Capability | JavaScript |
| -- | - | - |
-| Core Capabilities | Place a one-to-one call between two users | ✔️ |
-| | Place a group call with more than two users (up to 350 users) | ✔️ |
-| | Promote a one-to-one call with two users into a group call with more than two users | ✔️ |
-| | Join a group call after it has started | ✔️ |
+| Core Capabilities | Place a one-to-one call to Teams user | ✔️ |
+| | Place a one-to-one call to Azure Communication Services user | ❌ |
+| | Place a group call with more than two Teams users (up to 350 users) | ✔️ |
+| | Promote a one-to-one call with two Teams users into a group call with more than two Teams users | ✔️ |
+| | Join a group call after it has started | ❌ |
| | Invite another VoIP participant to join an ongoing group call | ✔️ |
-| | Join Teams meeting | ✔️ |
+| | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ |
+| | Placing a call honors Teams external access configuration | ✔️ |
+| | Placing a call honors Teams guest access configuration | ✔️ |
| Mid call control | Turn your video on/off | ✔️ |
| | Mute/Unmute mic | ✔️ |
| | Switch between cameras | ✔️ |
| | Local hold/un-hold | ✔️ |
-| | Active speaker | ✔️ |
-| | Choose speaker for calls | ✔️ |
+| | Indicator of dominant speakers in the call | ✔️ |
+| | Choose speaker device for calls | ✔️ |
| | Choose microphone for calls | ✔️ |
-| | Show state of a participant<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ |
-| | Show state of a call<br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ |
-| | Show if a participant is muted | ✔️ |
-| | Show the reason why a participant left a call | ✔️ |
-| | Admit participant in the lobby into the Teams meeting | ❌ |
+| | Indicator of participant's state<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ |
+| | Indicator of call's state <br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ |
+| | Indicate participants being muted | ✔️ |
+| | Indicate participants' reasons for terminating the call | ✔️ |
| Screen sharing | Share the entire screen from within the application | ✔️ |
| | Share a specific application (from the list of running applications) | ✔️ |
| | Share a web browser tab from the list of open tabs | ✔️ |
-| | Participant can view remote screen share | ✔️ |
+| | Share content in "content-only" mode | ✔️ |
+| | Receive video stream with content for "content-only" screen sharing experience | ✔️ |
+| | Share content in "standout" mode | ❌ |
+| | Receive video stream with content for a "standout" screen sharing experience | ❌ |
+| | Share content in "side-by-side" mode | ❌ |
+| | Receive video stream with content for "side-by-side" screen sharing experience | ❌ |
+| | Share content in "reporter" mode | ❌ |
+| | Receive video stream with content for "reporter" screen sharing experience | ❌ |
| Roster | List participants | ✔️ |
+| | Add an Azure Communication Services user | ❌ |
+| | Add a Teams user | ✔️ |
+| | Adding Teams users honors Teams external access configuration | ✔️ |
+| | Adding Teams user honors Teams guest access configuration | ✔️ |
+| | Add a phone number | ✔️ |
| | Remove a participant | ✔️ |
-| PSTN | Place a one-to-one call with a PSTN participant | ✔️ |
-| | Place a group call with PSTN participants | ✔️ |
-| | Promote a one-to-one call with a PSTN participant into a group call | ✔️ |
-| | Dial-out from a group call as a PSTN participant | ✔️ |
-| | Support for early media | ❌ |
-| General | Test your mic, speaker, and camera with an audio testing service (available by calling 8:echo123) | ✔️ |
-| Device Management | Ask for permission to use audio and/or video | ✔️ |
+| | Adding Teams users honors information barriers | ✔️ |
+| Device Management | Ask for permission to use audio and/or video | ✔️ |
| | Get camera list | ✔️ |
| | Set camera | ✔️ |
| | Get selected camera | ✔️ |
| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ |
| | Set / update scaling mode | ✔️ |
| | Render remote video stream | ✔️ |
-
-Support for streaming, timeouts, platforms, and browsers is shared with [Communication Services calling SDK overview](./../voice-video-calling/calling-sdk-features.md).
-
-## Detailed Teams capabilities
-
-The following list presents the set of Teams capabilities, which are currently available in the Azure Communication Services Calling SDK for JavaScript.
-
-|Group of features | Teams capability | JS |
-|-|--||
-| Core Capabilities | Placing a call honors Teams external access configuration | ✔️ |
-| | Placing a call honors Teams guest access configuration | ✔️ |
-| | Joining Teams meeting honors configuration for automatic people admit in the Lobby | ✔️ |
-| | Actions available in the Teams meeting are defined by assigned role | ✔️ |
-| Mid call control | Receive forwarded call | ✔️ |
-| | Receive simultaneous ringing | ✔️ |
-| | Play music on hold | ❌ |
-| | Park a call | ❌ |
-| | Transfer a call to a person | ✔️ |
-| | Transfer a call to a call | ✔️ |
-| | Transfer a call to Voicemail | ❌ |
-| | Merge ongoing calls | ❌ |
-| | Place a call on behalf of the user | ❌ |
-| | Start call recording | ❌ |
-| | Start call transcription | ❌ |
-| | Start live captions | ❌ |
-| | Receive information of call being recorded | ✔️ |
-| PSTN | Make an Emergency call | ✔️ |
-| | Place a call honors location-based routing | ❌ |
-| | Support for survivable branch appliance | ❌ |
-| Phone system | Receive a call from Teams auto attendant | ✔️ |
-| | Transfer a call to Teams auto attendant | ✔️ |
-| | Receive a call from Teams call queue (only conference mode) | ✔️ |
-| | Transfer a call from Teams call queue (only conference mode) | ✔️ |
-| Compliance | Place a call honors information barriers | ✔️ |
-| | Support for compliance recording | ✔️ |
-| Meeting | [Include participant in Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ❌ |
--
-## Teams meeting options
-
-Teams meeting organizers can configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for Teams users:
-
-|Option name|Description| Supported |
-| | | |
-| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | Teams user can bypass the lobby, if Teams meeting organizer set value to include "people in my organization" for single tenant meetings and "people in trusted organizations" for cross-tenant meetings. Otherwise, Teams users have to wait in the lobby until an authenticated user admits them.| ✔️ |
-| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable |
-| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ |
-| [Choose co-organizers](/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| Teams user can be selected as co-organizer. It affects the availability of actions in Teams meetings. | ✔️ |
-| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ |
-|[Manage what attendees see](/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizer, co-organizer and presenter can spotlight videos for everyone. Azure Communication Services does not receive the spotlight signals. |❌|
-|[Allow mic for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local audio |✔️|
-|[Allow camera for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If Teams user is attendee, then this option controls whether Teams user can send local video |✔️|
-|[Record automatically](/graph/api/resources/onlinemeeting)|Records meeting when anyone starts the meeting. The user in the lobby does not start a recording.|✔️|
-|Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️|
-|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Sevices don't support reactions. |❌|
-|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable|
-|[Provide CART Captions](/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
+| | See together mode video stream | ❌ |
+| | See Large gallery view | ❌ |
+| | Receive video stream from Teams media bot | ❌ |
+| | Receive adjusted stream for "content from Camera" | ❌ |
+| | Add and remove video stream from spotlight | ❌ |
+| | Allow video stream to be selected for spotlight | ❌ |
+| | Apply Teams background effects | ❌ |
+| Recording & transcription | Manage Teams convenient recording | ❌ |
+| | Receive information of call being recorded | ✔️ |
+| | Manage Teams transcription | ❌ |
+| | Receive information of call being transcribed | ✔️ |
+| | Manage Teams closed captions | ❌ |
+| | Support for compliance recording | ✔️ |
+| Engagement | Raise and lower hand | ❌ |
+| | Indicate other participants' raised and lowered hands | ❌ |
+| | Trigger reactions | ❌ |
+| | Indicate other participants' reactions | ❌ |
+| Integrations | Control Teams third-party applications | ❌ |
+| | Receive PowerPoint Live stream | ❌ |
+| | Receive Whiteboard stream | ❌ |
+| | Interact with a poll | ❌ |
+| | Interact with a Q&A | ❌ |
+| Accessibility | Receive closed captions | ❌ |
+| Advanced call routing | Does start a call and add user operations honor forwarding rules | ✔️ |
+| | Read and configure call forwarding rules | ❌ |
+| | Does start a call and add user operations honor simultaneous ringing | ✔️ |
+| | Read and configure simultaneous ringing | ❌ |
+| | Placing participant on hold plays music on hold | ❌ |
+| | Being placed by Teams user on Teams client on hold plays music on hold | ✔️ |
+| | Park a call | ❌ |
+| | Be parked | ✔️ |
+| | Transfer a call to a user | ✔️ |
+| | Be transferred to a user or call | ✔️ |
+| | Transfer a call to a call | ✔️ |
+| | Transfer a call to Voicemail | ❌ |
+| | Be transferred to voicemail | ✔️ |
+| | Merge ongoing calls | ❌ |
+| | Does start a call and add user operations honor shared line configuration | ✔️ |
+| | Start a call on behalf of the Teams user | ❌ |
+| | Read and configure shared line configuration | ❌ |
+| | Receive a call from Teams auto attendant | ✔️ |
+| | Transfer a call to Teams auto attendant | ✔️ |
+| | Receive a call from Teams call queue | ✔️ |
+| | Transfer a call from Teams call queue | ✔️ |
+| Teams calling policy | Honor "Make private calls" | ✔️ |
+| | Honor setting "Cloud recording for calling" | No API available |
+| | Honor setting "Transcription" | No API available |
+| | Honor setting "Call forwarding and simultaneous ringing to people in your organization" | ✔️ |
+| | Honor setting "Call forwarding and simultaneous ringing to external phone numbers" | ✔️ |
+| | Honor setting "Voicemail is available for routing inbound calls" | ✔️ |
+| | Honor setting "Inbound calls can be routed to call groups" | ✔️ |
+| | Honor setting "Delegation for inbound and outbound calls" | ✔️ |
+| | Honor setting "Prevent toll bypass and send calls through the PSTN" | ❌ |
+| | Honor setting "Music on hold" | ❌ |
+| | Honor setting "Busy on busy when in a call" | ❌ |
+| | Honor setting "Web PSTN calling" | ❌ |
+| | Honor setting "Real-time captions in Teams calls" | No API available |
+| | Honor setting "Automatically answer incoming meeting invites" | ❌ |
+| | Honor setting "Spam filtering" | ✔️ |
+| | Honor setting "SIP devices can be used for calls" | ✔️ |
+| DevOps | [Azure Metrics](../metrics.md) | ✔️ |
+| | [Azure Monitor](../logging-and-diagnostics.md) | ✔️ |
+| | [Azure Communication Services Insights](../analytics/insights.md) | ✔️ |
+| | [Azure Communication Services Voice and video calling events](../../../event-grid/communication-services-voice-video-events.md) | ❌ |
+| | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
+| | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
+
+Support for streaming, timeouts, platforms, and browsers is shared with [Communication Services calling SDK overview](../voice-video-calling/calling-sdk-features.md).
## Next steps
communication-services Meeting Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/meeting-capabilities.md
+
+ Title: Teams meeting capabilities for Teams users
+
+description: Provides an overview of supported Teams meeting capabilities for Teams users in Azure Communication Services Calling SDK.
+++++ Last updated : 12/01/2021++++
+# Teams meeting support for Teams user in Calling SDK
+
+The Azure Communication Services Calling SDK for JavaScript enables Teams user devices to drive voice and video communication experiences. This page provides detailed descriptions of Teams meeting features. To get started right away, check out [Calling quickstarts](../../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
++
+The following capabilities are supported when a Teams user participates in a Teams meeting:
+
+| Group of features | Capability | JavaScript |
+| -- | - | - |
+| Core Capabilities | Join Teams meeting | ✔️ |
+| | Leave meeting | ✔️ |
+| | End meeting for everyone | ✔️ |
+| | Change meeting options | ❌ |
+| | Lock & unlock meeting | ❌ |
+| | Prevent joining locked meeting | ✔️ |
+| | Honor assigned Teams meeting role | ✔️ |
+| Mid call control | Turn your video on/off | ✔️ |
+| | Mute/Unmute mic | ✔️ |
+| | Switch between cameras | ✔️ |
+| | Local hold/un-hold | ✔️ |
+| | Indicator of dominant speakers in the call | ✔️ |
+| | Choose speaker device for calls | ✔️ |
+| | Choose microphone for calls | ✔️ |
+| | Indicator of participant's state<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ |
+| | Indicator of call's state <br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ |
+| | Indicate participants being muted | ✔️ |
+| | Indicate participants' reasons for terminating the call | ✔️ |
+| Screen sharing | Share the entire screen from within the application | ✔️ |
+| | Share a specific application (from the list of running applications) | ✔️ |
+| | Share a web browser tab from the list of open tabs | ✔️ |
+| | Share content in "content-only" mode | ✔️ |
+| | Receive video stream with content for "content-only" screen sharing experience | ✔️ |
+| | Share content in "standout" mode | ❌ |
+| | Receive video stream with content for a "standout" screen sharing experience | ❌ |
+| | Share content in "side-by-side" mode | ❌ |
+| | Receive video stream with content for "side-by-side" screen sharing experience | ❌ |
+| | Share content in "reporter" mode | ❌ |
+| | Receive video stream with content for "reporter" screen sharing experience | ❌ |
+| Roster | List participants | ✔️ |
+| | Add an Azure Communication Services user | ❌ |
+| | Add a Teams user | ✔️ |
+| | Adding Teams user honors Teams external access configuration | ✔️ |
+| | Adding Teams user honors Teams guest access configuration | ✔️ |
+| | Add a phone number | ✔️ |
+| | Remove a participant | ✔️ |
+| | Manage breakout rooms | ❌ |
+| | Participation in breakout rooms | ❌ |
+| | Admit participants in the lobby into the Teams meeting | ❌ |
+| | Be admitted from the lobby into the Teams meeting | ✔️ |
+| | Promote participant to a presenter or attendee | ❌ |
+| | Be promoted to presenter or attendee | ✔️ |
+| | Disable or enable mic for attendees | ❌ |
+| | Honor disabling or enabling a mic as an attendee | ✔️ |
+| | Disable or enable camera for attendees | ❌ |
+| | Honor disabling or enabling a camera as an attendee | ✔️ |
+| | Adding Teams user honors information barriers | ✔️ |
+| Device Management | Ask for permission to use audio and/or video | ✔️ |
+| | Get camera list | ✔️ |
+| | Set camera | ✔️ |
+| | Get selected camera | ✔️ |
+| | Get microphone list | ✔️ |
+| | Set microphone | ✔️ |
+| | Get selected microphone | ✔️ |
+| | Get speakers list | ✔️ |
+| | Set speaker | ✔️ |
+| | Get selected speaker | ✔️ |
+| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️ |
+| | Set / update scaling mode | ✔️ |
+| | Render remote video stream | ✔️ |
+| | See together mode video stream | ❌ |
+| | See Large gallery view | ❌ |
+| | Receive video stream from Teams media bot | ❌ |
+| | Receive adjusted stream for "content from Camera" | ❌ |
+| | Add and remove video stream from spotlight | ❌ |
+| | Allow video stream to be selected for spotlight | ❌ |
+| | Apply Teams background effects | ❌ |
+| Recording & transcription | Manage Teams convenient recording | ❌ |
+| | Receive information of call being recorded | ✔️ |
+| | Manage Teams transcription | ❌ |
+| | Receive information of call being transcribed | ✔️ |
+| | Manage Teams closed captions | ❌ |
+| | Support for compliance recording | ✔️ |
+| | [Azure Communication Services recording](../../voice-video-calling/call-recording.md) | ❌ |
+| Engagement | Raise and lower hand | ❌ |
+| | Indicate other participants' raised and lowered hands | ❌ |
+| | Trigger reactions | ❌ |
+| | Indicate other participants' reactions | ❌ |
+| Integrations | Control Teams third-party applications | ❌ |
+| | Receive PowerPoint Live stream | ❌ |
+| | Receive Whiteboard stream | ❌ |
+| | Interact with a poll | ❌ |
+| | Interact with a Q&A | ❌ |
+| | Interact with a OneNote | ❌ |
+| | Manage SpeakerCoach | ❌ |
+| | [Include participant in Teams meeting attendance report](/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ❌ |
+| Accessibility | Receive closed captions | ❌ |
+| | Communication access real-time translation (CART) | ❌ |
+| | Language interpretation | ❌ |
+| Advanced call routing | Does meeting dial-out honor forwarding rules | ✔️ |
+| | Read and configure call forwarding rules | ❌ |
+| | Does meeting dial-out honor simultaneous ringing | ✔️ |
+| | Read and configure simultaneous ringing | ❌ |
+| | Does meeting dial-out honor shared line configuration | ✔️ |
+| | Dial-out from meeting on behalf of the Teams user | ❌ |
+| | Read and configure shared line configuration | ❌ |
+| Teams meeting policy | Honor setting "Let anonymous people join a meeting" | ✔️ |
+| | Honor setting "Mode for IP audio" | ❌ |
+| | Honor setting "Mode for IP video" | ❌ |
+| | Honor setting "IP video" | ❌ |
+| | Honor setting "Local broadcasting" | ❌ |
+| | Honor setting "Media bit rate (Kbs)" | ❌ |
+| | Honor setting "Network configuration lookup" | ❌ |
+| | Honor setting "Transcription" | No API available |
+| | Honor setting "Cloud recording" | No API available |
+| | Honor setting "Meetings automatically expire" | ✔️ |
+| | Honor setting "Default expiration time" | ✔️ |
+| | Honor setting "Store recordings outside of your country or region" | ✔️ |
+| | Honor setting "Screen sharing mode" | No API available |
+| | Honor setting "Participants can give or request control" | No API available |
+| | Honor setting "External participants can give or request control" | No API available |
+| | Honor setting "PowerPoint Live" | No API available |
+| | Honor setting "Whiteboard" | No API available |
+| | Honor setting "Shared notes" | No API available |
+| | Honor setting "Select video filters" | ❌ |
+| | Honor setting "Let anonymous people start a meeting" | ✔️ |
+| | Honor setting "Who can present in meetings" | ❌ |
+| | Honor setting "Automatically admit people" | ✔️ |
+| | Honor setting "Dial-in users can bypass the lobby" | ✔️ |
+| | Honor setting "Meet now in private meetings" | ✔️ |
+| | Honor setting "Live captions" | No API available |
+| | Honor setting "Chat in meetings" | ✔️ |
+| | Honor setting "Teams Q&A" | No API available |
+| | Honor setting "Meeting reactions" | No API available |
+| DevOps | [Azure Metrics](../../metrics.md) | ✔️ |
+| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
+| | [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ |
+| | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ |
+| | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
+| | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
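+
+For illustration, the core "Join Teams meeting" capability above maps to a short client-side flow. The following is a minimal sketch that assumes you've already exchanged an Azure AD token for a Communication Services access token for the Teams user; the `createTeamsCallAgent` method and the meeting-link join option reflect the public preview SDK surface and may change, so verify against the current SDK reference.
+
+```javascript
+// Minimal sketch: join a Teams meeting as a Teams user (public preview APIs).
+// The access token and meeting link are placeholders supplied by your own flow.
+const { CallClient } = require("@azure/communication-calling");
+const { AzureCommunicationTokenCredential } = require("@azure/communication-common");
+
+async function joinTeamsMeeting(teamsUserAccessToken, meetingLink) {
+  const callClient = new CallClient();
+  const credential = new AzureCommunicationTokenCredential(teamsUserAccessToken);
+
+  // Call agent bound to the Teams user identity (preview; name may differ in later releases).
+  const teamsCallAgent = await callClient.createTeamsCallAgent(credential);
+
+  // Join by meeting link; the user lands in the lobby unless meeting options admit them.
+  const call = teamsCallAgent.join({ meetingLink });
+
+  call.on("stateChanged", () => console.log(`Call state: ${call.state}`));
+  return call;
+}
+```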
++
+## Teams meeting options
+
+Teams meeting organizers can configure the Teams meeting options to adjust the experience for participants. The following options are supported in Azure Communication Services for Teams users:
+
+|Option name|Description| Supported |
+| | | |
+| [Automatically admit people](/microsoftteams/meeting-policies-participants-and-guests#automatically-admit-people) | Teams users can bypass the lobby if the Teams meeting organizer sets the value to include "people in my organization" for single-tenant meetings and "people in trusted organizations" for cross-tenant meetings. Otherwise, Teams users have to wait in the lobby until an authenticated user admits them. | ✔️ |
+| [Always let callers bypass the lobby](/microsoftteams/meeting-policies-participants-and-guests#allow-dial-in-users-to-bypass-the-lobby)| Participants joining through phone can bypass lobby | Not applicable |
+| Announce when callers join or leave| Participants hear announcement sounds when phone participants join and leave the meeting | ✔️ |
+| [Choose co-organizers](/office/add-co-organizers-to-a-meeting-in-teams-0de2c31c-8207-47ff-ae2a-fc1792d466e2)| A Teams user can be selected as a co-organizer. This selection affects the availability of actions in Teams meetings. | ✔️ |
+| [Who can present in meetings](/microsoftteams/meeting-policies-in-teams-general#designated-presenter-role-mode) | Controls who in the Teams meeting can share screen. | ❌ |
+|[Manage what attendees see](/office/spotlight-someone-s-video-in-a-teams-meeting-58be74a4-efac-4e89-a212-8d198182081e)|Teams organizers, co-organizers, and presenters can spotlight videos for everyone. Azure Communication Services doesn't receive the spotlight signals. |❌|
+|[Allow mic for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If the Teams user is an attendee, this option controls whether the Teams user can send local audio. |✔️|
+|[Allow camera for attendees](/office/manage-attendee-audio-and-video-permissions-in-teams-meetings-f9db15e1-f46f-46da-95c6-34f9f39e671a)|If the Teams user is an attendee, this option controls whether the Teams user can send local video. |✔️|
+|[Record automatically](/graph/api/resources/onlinemeeting)|Records the meeting as soon as anyone starts it. A user waiting in the lobby doesn't start the recording.|✔️|
+|Allow meeting chat|If enabled, Teams users can use the chat associated with the Teams meeting.|✔️|
+|[Allow reactions](/microsoftteams/meeting-policies-in-teams-general#meeting-reactions)|If enabled, Teams users can use reactions in the Teams meeting. Azure Communication Services doesn't support reactions. |❌|
+|[RTMP-IN](/microsoftteams/stream-teams-meetings)|If enabled, organizers can stream meetings and webinars to external endpoints by providing a Real-Time Messaging Protocol (RTMP) URL and key to the built-in Custom Streaming app in Teams. |Not applicable|
+|[Provide CART Captions](/office/use-cart-captions-in-a-microsoft-teams-meeting-human-generated-captions-2dd889e8-32a8-4582-98b8-6c96cf14eb47)|Communication access real-time translation (CART) is a service in which a trained CART captioner listens to the speech and instantaneously translates all speech to text. As a meeting organizer, you can set up and offer CART captioning to your audience instead of the Microsoft Teams built-in live captions that are automatically generated.|❌|
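+
+Most of these options are configured by the meeting organizer when the meeting is scheduled. As a hedged sketch of how that happens outside the Calling SDK, the following request creates an online meeting with a few of the options above through the Microsoft Graph `onlineMeetings` API. It assumes you already hold a Graph access token with the `OnlineMeetings.ReadWrite` permission; only a subset of properties is shown.
+
+```javascript
+// Sketch: create a Teams meeting and set selected meeting options via Microsoft Graph.
+// The access token is a placeholder; property names come from the Graph onlineMeeting resource.
+async function createMeetingWithOptions(graphAccessToken) {
+  const response = await fetch("https://graph.microsoft.com/v1.0/me/onlineMeetings", {
+    method: "POST",
+    headers: {
+      Authorization: `Bearer ${graphAccessToken}`,
+      "Content-Type": "application/json",
+    },
+    body: JSON.stringify({
+      subject: "Customer support session",
+      startDateTime: "2022-09-15T14:00:00Z",
+      endDateTime: "2022-09-15T15:00:00Z",
+      lobbyBypassSettings: { scope: "organization", isDialInBypassEnabled: true },
+      allowedPresenters: "organization",
+      recordAutomatically: false,
+    }),
+  });
+  return response.json();
+}
+```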
++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with calling](../../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
+
+For more information, see the following articles:
+- Familiarize yourself with general [call flows](../../call-flows.md)
+- Learn about [call types](../../voice-video-calling/about-call-types.md)
communication-services Phone Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/phone-capabilities.md
+
+ Title: Phone capabilities supported for Teams users
+
+description: Provides an overview of phone capabilities supported for Teams users in Azure Communication Services Calling SDK.
+++++ Last updated : 12/01/2021++++
+# Phone capabilities for Teams user in Calling SDK
+
+The Azure Communication Services Calling SDK for JavaScript enables Teams user devices to drive voice and video communication experiences. This page provides detailed descriptions of phone calling features. To get started right away, check out [Calling quickstarts](../../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
+
+## Phone capabilities
+The following list of capabilities is supported for scenarios where at least one phone number participates in a 1:1 or group call:
+
+| Group of features | Capability | JavaScript |
+| -- | - | - |
+| Core Capabilities | Place a one-to-one call to a phone number | ✔️ |
+| | Place a group call with at least one phone number | ✔️ |
+| | Promote a one-to-one call with a phone number into a group call | ✔️ |
+| | User can dial into a group call | ❌ |
+| | Dial out from a group call to a phone number | ✔️ |
+| | Make an Emergency call | ✔️ |
+| | Honor Security desk policy for emergency calls | ✔️ |
+| | Provide a static emergency location for Teams calling plans in case of emergency calls | ✔️ |
+| Connectivity | Teams calling plans | ✔️ |
+| | Teams direct routings | ✔️ |
+| | Teams operator connect | ✔️ |
+| | Azure Communication Services direct offers | ✔️ |
+| | Azure Communication Services direct routing | ✔️ |
+| Mid call control | Turn your video on/off | ✔️* |
+| | Mute/Unmute mic | ✔️ |
+| | Switch between cameras | ✔️* |
+| | Local hold/un-hold | ✔️ |
+| | Indicator of dominant speakers in the call | ✔️ |
+| | Choose speaker device for calls | ✔️ |
+| | Choose microphone for calls | ✔️ |
+| | Indicator of participant's state<br/>*Idle, Early media, Connecting, Connected, On hold, In Lobby, Disconnected* | ✔️ |
+| | Indicator of call's state <br/>*Early Media, Incoming, Connecting, Ringing, Connected, Hold, Disconnecting, Disconnected* | ✔️ |
+| | Indicate participants being muted | ✔️ |
+| | Indicate participants' reasons for terminating the call | ✔️ |
+| Screen sharing | Share the entire screen from within the application | ✔️* |
+| | Share a specific application (from the list of running applications) | ✔️* |
+| | Share a web browser tab from the list of open tabs | ✔️* |
+| | Share content in "content-only" mode | ✔️* |
+| | Receive video stream with content for "content-only" screen sharing experience | ✔️* |
+| | Share content in "standout" mode | ❌ |
+| | Receive video stream with content for a "standout" screen sharing experience | ❌ |
+| | Share content in "side-by-side" mode | ❌ |
+| | Receive video stream with content for "side-by-side" screen sharing experience | ❌ |
+| | Share content in "reporter" mode | ❌ |
+| | Receive video stream with content for "reporter" screen sharing experience | ❌ |
+| Roster | List participants | ✔️ |
+| | Add an Azure Communication Services user | ❌ |
+| | Add a Teams user | ✔️ |
+| | Adding Teams user honors Teams external access configuration | ✔️ |
+| | Adding Teams user honors Teams guest access configuration | ✔️ |
+| | Add a phone number | ✔️ |
+| | Remove a participant | ✔️ |
+| | Adding Teams user honors information barriers | ✔️ |
+| Device Management | Ask for permission to use audio and/or video | ✔️* |
+| | Get camera list | ✔️* |
+| | Set camera | ✔️* |
+| | Get selected camera | ✔️* |
+| | Get microphone list | ✔️ |
+| | Set microphone | ✔️ |
+| | Get selected microphone | ✔️ |
+| | Get speakers list | ✔️ |
+| | Set speaker | ✔️ |
+| | Get selected speaker | ✔️ |
+| Video Rendering | Render single video in many places (local camera or remote stream) | ✔️* |
+| | Set / update scaling mode | ✔️* |
+| | Render remote video stream | ✔️* |
+| Recording & transcription | Manage Teams convenient recording | ❌ |
+| | Receive information of call being recorded | ✔️ |
+| | Manage Teams transcription | ❌ |
+| | Receive information of call being transcribed | ✔️ |
+| | Support for compliance recording | ✔️ |
+| Media | Support for early media | ❌ |
+| | Place a phone call honors location-based routing | ❌ |
+| | Support for survivable branch appliance | ❌ |
+| Accessibility | Receive closed captions | ❌ |
+| Advanced call routing | Does start a call and add user operations honor forwarding rules | ✔️ |
+| | Read and configure call forwarding rules | ❌ |
+| | Does start a call and add user operations honor simultaneous ringing | ✔️ |
+| | Read and configure simultaneous ringing | ❌ |
+| | Placing participant on hold plays music on hold | ❌ |
+| | Being placed by Teams user on Teams client on hold plays music on hold | ✔️ |
+| | Park a call | ❌ |
+| | Be parked | ✔️ |
+| | Transfer a call to a user | ✔️ |
+| | Be transferred to a user or call | ✔️ |
+| | Transfer a call to a call | ✔️ |
+| | Transfer a call to Voicemail | ❌ |
+| | Be transferred to voicemail | ✔️ |
+| | Merge ongoing calls | ❌ |
+| | Does start a call and add user operations honor shared line configuration | ✔️ |
+| | Start a call on behalf of the Teams user | ❌ |
+| | Read and configure shared line configuration | ❌ |
+| | Receive a call from Teams auto attendant | ✔️ |
+| | Transfer a call to Teams auto attendant | ✔️ |
+| | Receive a call from Teams call queue | ✔️ |
+| | Transfer a call from Teams call queue | ✔️ |
+| Teams caller ID policies | Block incoming caller ID | ❌ |
+| | Override the caller ID policy | ❌ |
+| | Calling Party Name | ❌ |
+| | Replace the caller ID with | ❌ |
+| | Replace the caller ID with this service number | ❌ |
+| Teams dial out plan policies | Start a phone call honoring dial plan policy | ❌ |
+| DevOps | [Azure Metrics](../../metrics.md) | ✔️ |
+| | [Azure Monitor](../../logging-and-diagnostics.md) | ✔️ |
+| | [Azure Communication Services Insights](../../analytics/insights.md) | ✔️ |
+| | [Azure Communication Services Voice and video calling events](../../../../event-grid/communication-services-voice-video-events.md) | ❌ |
+| | [Teams Call Analytics](/MicrosoftTeams/use-call-analytics-to-troubleshoot-poor-call-quality) | ✔️ |
+| | [Teams real-time Analytics](/microsoftteams/use-real-time-telemetry-to-troubleshoot-poor-meeting-quality) | ❌ |
+
+Note: Participants joining via a phone number can't see video content. Therefore, the actions involving video (marked with an asterisk) don't affect them, but they do apply when VoIP participants join the call.
+
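+As a quick illustration of the one-to-one phone calling capability above, the following minimal sketch places a 1:1 call to a phone number. It assumes a `teamsCallAgent` obtained from `CallClient.createTeamsCallAgent` (public preview), and the phone number is a placeholder; the exact participant shape can vary by SDK version, so verify against the current reference.
+
+```javascript
+// Sketch: place a 1:1 call from the Teams user to a phone number (placeholder number).
+const call = teamsCallAgent.startCall([{ phoneNumber: "+12065551234" }]);
+
+call.on("stateChanged", () => {
+  // States surface as Connecting, Ringing, Connected, Disconnected, and so on.
+  console.log(`Call state: ${call.state}`);
+});
+```
+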
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Get started with calling](../../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md)
+
+For more information, see the following articles:
+- Familiarize yourself with general [call flows](../../call-flows.md)
+- Learn about [call types](../../voice-video-calling/about-call-types.md)
communication-services Teams Client Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/teams-user/teams-client-experience.md
The following image illustrates the experience of a Teams user using Teams clien
The following image illustrates the experience of a Teams user using Teams client interacting with another Teams user from a different organization using Azure Communication Services SDK who joined Teams meeting. ![A diagram that shows how Teams user on Azure Communication Services connects to Teams meetings organized by a different organization.](../media/desktop-client-external-user-joins-teams-meeting.png)
+## Start a call to a Teams user within the organization
+The following image illustrates the experience of a Teams user using the Teams client to call another Teams user from the same organization who is using the Azure Communication Services SDK. First, the user opens a chat with the person and selects the call button.
+![A diagram shows a chat between two Teams users in the same organization.](../media/desktop-client-teams-user-calls-within-org.png)
+
+If the callee accepts the call, both users are connected via a 1:1 VoIP call.
+![A diagram shows the in-call experience of Teams user using Teams client to call another Teams user in the same organization using Azure Communication Services SDKs.](../media/desktop-client-teams-user-in-call-within-org.png)
+
+## Start a call to a Teams user from a different organization
+The following image illustrates the experience of a Teams user using the Teams client to call another Teams user from a different organization who is using the Azure Communication Services SDK. First, the user opens a chat with the person and selects the call button.
+![A diagram shows a chat between two Teams users in a different organization.](../media/desktop-client-teams-user-calls-outside-org.png)
+
+If the callee accepts the call, both users are connected via a 1:1 VoIP call.
+![A diagram shows the in-call experience of Teams user using Teams client to call another Teams user in a different organization using Azure Communication Services SDKs.](../media/desktop-client-teams-user-in-calls-outside-org.png)
+
+## Incoming call from a Teams user within the organization
+The following image illustrates the experience of a Teams user using the Teams client who receives a notification of an incoming call from another Teams user in the same organization. The caller is using the Azure Communication Services SDK.
+![A diagram shows incoming call notifications for Teams users using the Teams client. The caller is from the same organization.](../media/desktop-client-teams-user-incoming-call-from-within-org.png)
+
+## Incoming call from a Teams user from a different organization
+The following image illustrates the experience of a Teams user using the Teams client who receives a notification of an incoming call from another Teams user in a different organization. The caller is using the Azure Communication Services SDK.
+![A diagram shows incoming call notifications for Teams users using the Teams client. The caller is from a different organization.](../media/desktop-client-teams-user-incoming-call-from-external-org.png)
++ ## Next steps > [!div class="nextstepaction"]
communication-services Sdk Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sdk-options.md
The Calling package supports UWP apps build with .NET Native or C++/WinRT on:
Communication Services APIs are documented alongside other [Azure REST APIs](/rest/api/azure/). This documentation will tell you how to structure your HTTP messages and offers guidance for using [Postman](../tutorials/postman-tutorial.md). REST interface documentation is also published in Swagger format on [GitHub](https://github.com/Azure/azure-rest-api-specs). You can find throttling limits for individual APIs on [service limits page](./service-limits.md).
-### REST API Throttles
-
-Certain REST APIs and corresponding SDK methods have throttle limits you should be mindful of. Exceeding these throttle limits will trigger a`429 - Too Many Requests` error response. These limits can be increased through [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md).
-
-| API| Throttle|
-|||
-| [All Search Telephone Number Plan APIs](/rest/api/communication/phonenumbers) | 4 requests/day|
-| [Purchase Telephone Number Plan](/rest/api/communication/phonenumbers/purchasephonenumbers) | 1 purchase a month|
-| [Send SMS](/rest/api/communication/sms/send) | 200 requests/minute |
- ## API stability expectations > [!IMPORTANT]
communication-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/service-limits.md
# Service limits for Azure Communication Services
-This document explains some of the limitations of Azure Communication Services and what to do if you are running into these limitations.
+This document explains the limitations of Azure Communication Services APIs and possible resolutions.
## Throttling patterns and architecture
-When you hit service limitations you will generally receive an HTTP status code 429 (Too many requests). In general, the following are best practices for handling throttling:
+When you hit service limitations, you'll generally receive an HTTP status code 429 (Too many requests). In general, the following are best practices for handling throttling:
- Reduce the number of operations per request. - Reduce the frequency of calls.-- Avoid immediate retries, because all requests accrue against your usage limits.
+- Avoid immediate retries because all requests accrue against your usage limits.
-You can find more general guidance on how to set up your service architecture to handle throttling and limitations in the [Azure Architecture](/azure/architecture) documentation for [throttling patterns](/azure/architecture/patterns/throttling).
+You can find more general guidance on how to set up your service architecture to handle throttling and limitations in the [Azure Architecture](/azure/architecture) documentation for [throttling patterns](/azure/architecture/patterns/throttling). Throttling limits can be increased through [a request to Azure Support](../../azure-portal/supportability/how-to-create-azure-support-request.md).
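+
+For example, a simple client-side pattern is to honor the `Retry-After` header on a 429 response and back off before retrying. The sketch below is illustrative only; the `doRequest` callback and the fallback delays are assumptions, not a specific Communication Services API.
+
+```javascript
+// Sketch: retry a throttled request, preferring the service-provided Retry-After header.
+async function sendWithBackoff(doRequest, maxAttempts = 5) {
+  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
+    const response = await doRequest();
+    if (response.status !== 429) {
+      return response;
+    }
+    // Fall back to exponential backoff when no Retry-After header is returned.
+    const retryAfterSeconds = Number(response.headers.get("retry-after")) || 2 ** attempt;
+    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
+  }
+  throw new Error("Request still throttled after maximum retry attempts.");
+}
+```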
## Acquiring phone numbers
-Before trying to acquire a phone number, make sure your subscription meets the [geographic and subscription](./telephony/plan-solution.md) requirements, otherwise you can't purchase a phone number. The below limitations apply to purchasing numbers through the [Phone Numbers SDK](./reference.md) and the [Azure portal](https://portal.azure.com/).
+Before acquiring a phone number, make sure your subscription meets the [geographic and subscription](./telephony/plan-solution.md) requirements. Otherwise, you can't purchase a phone number. The following limitations apply to purchasing numbers through the [Phone Numbers SDK](./reference.md) and the [Azure portal](https://portal.azure.com/).
| Operation | Scope | Timeframe | Limit (number of requests) |
-| | -- | -- | -- |
+||--|--|--|
| Purchase phone number | Azure tenant | - | 1 |
-| Search for phone numbers | Azure tenant | 1 week | 5 |
+| Search for phone numbers | Azure tenant | one week | 5 |
### Action to take For more information, see the [phone number types](./telephony/plan-solution.md) concept page and the [telephony concept](./telephony/telephony-concept.md) overview page.
-If you would like to purchase more phone numbers or put in a special order, follow the [instructions here](https://github.com/Azure/Communication/blob/master/special-order-numbers.md). If you would like to port toll-free phone numbers from external accounts to their Azure Communication Services account follow the [instructions here](https://github.com/Azure/Communication/blob/master/port-numbers.md).
+If you want to purchase more phone numbers or place a special order, follow the [instructions here](https://github.com/Azure/Communication/blob/master/special-order-numbers.md). If you would like to port toll-free phone numbers from external accounts to their Azure Communication Services account, follow the [instructions here](https://github.com/Azure/Communication/blob/master/port-numbers.md).
## Identity
If you would like to purchase more phone numbers or put in a special order, foll
| **exchangeTokens**| 30 | 500 | ### Action to take
-We always recommend you acquire identities and tokens in advance of starting other transactions like creating chat threads or starting calls, for example, right when your webpage is initially loaded or when the app is starting up.
+We recommend acquiring identities and tokens before creating chat threads or starting calls, for example, when the webpage loads or the application starts.
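+
+For instance, a minimal sketch of issuing an identity and a VoIP token up front with the Identity SDK looks like the following; the connection string is a placeholder for your resource's value.
+
+```javascript
+// Sketch: create a Communication Services identity and VoIP token at application startup.
+const { CommunicationIdentityClient } = require("@azure/communication-identity");
+
+async function createUserAndVoipToken(connectionString) {
+  const identityClient = new CommunicationIdentityClient(connectionString);
+  const { user, token, expiresOn } = await identityClient.createUserAndToken(["voip"]);
+  console.log(`Created ${user.communicationUserId}; token expires ${expiresOn.toISOString()}`);
+  return { user, token };
+}
+```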
For more information, see the [identity concept overview](./authentication.md) page. ## SMS
-When sending or receiving a high volume of messages, you might receive a ```429``` error. This indicates you are hitting the service limitations and your messages will be queued to be sent once the number of requests is below the threshold.
+When sending or receiving a high volume of messages, you might receive a ```429``` error. This error indicates you're hitting the service limitations, and your messages will be queued to be sent once the number of requests is below the threshold.
Rate Limits for SMS:
Rate Limits for SMS:
|Send Message|Per Number|60|200|200| ### Action to take
-If you require sending an amount of messages that exceeds the rate-limits, please email us at phone@microsoft.com.
+If you need to send a volume of messages that exceeds the rate limits, email us at phone@microsoft.com.
For more information on the SMS SDK and service, see the [SMS SDK overview](./sms/sdk-features.md) page or the [SMS FAQ](./sms/sms-faq.md) page. ## Email
-Sending high volume of messages has a set of limitation on the number of email messages that you can send. If you hit these limits, your messages will not be queued to be sent. You can submit these requests again, once the Retry-After time expires.
+Sending a high volume of email is subject to limits on the number of messages you can send. If you hit these limits, your messages won't be queued to be sent. You can submit the requests again once the Retry-After time expires.
### Rate Limits
-|Operation|Scope|Timeframe (minutes)| Limit (number of email) |
+|Operation|Scope|Timeframe (minutes)| Limit (number of emails) |
||--|-|-| |Send Email|Per Subscription|1|10| |Send Email|Per Subscription|60|25|
Sending high volume of messages has a set of limitation on the number of email m
| **Name** | Limit | |--|--| |Number of recipients in Email|50 |
-|Attachment size - per messages|10 MB |
+|Attachment size - per message |10 MB |
### Action to take
-This sandbox setup is to help developers to start building the application and gradually you can request to increase the sending volume as soon as the application is ready to go live. If you require sending a number of messages that exceeds the rate-limits, please submit a support request to increase to your desired sending limit.
+This sandbox setup helps developers start building the application. You can gradually request an increase in sending volume as the application gets ready to go live. If you need to send a volume of messages that exceeds the rate limits, submit a support request to raise your sending limit.
## Chat
The Communication Services Calling SDK supports the following streaming configur
| Limit | Web | Windows/Android/iOS | | - | | -- |
-| **Maximum # of outgoing local streams that can be sent simultaneously** | 1 video or 1 screen sharing | 1 video + 1 screen sharing |
-| **Maximum # of incoming remote streams that can be rendered simultaneously** | 4 videos + 1 screen sharing | 6 videos + 1 screen sharing |
+| **Maximum # of outgoing local streams that you can send simultaneously** | one video or one screen sharing | one video + one screen sharing |
+| **Maximum # of incoming remote streams that you can render simultaneously** | four videos + one screen sharing | six videos + one screen sharing |
While the Calling SDK won't enforce these limits, your users may experience performance degradation if they're exceeded.
The following timeouts apply to the Communication Services Calling SDKs:
For more information about the voice and video calling SDK and service, see the [calling SDK overview](./voice-video-calling/calling-sdk-features.md) page or [known issues](./known-issues.md). ## Teams Interoperability and Microsoft Graph
-If you are using a Teams interoperability scenario, you will likely end up using some Microsoft Graph APIs to create [meetings](/graph/cloud-communications-online-meetings).
+If you use a Teams interoperability scenario, you'll likely use some Microsoft Graph APIs to create [meetings](/graph/cloud-communications-online-meetings).
Each service offered through Microsoft Graph has different limitations; service-specific limits are [described here](/graph/throttling) in more detail.
You can find more information on Microsoft Graph [throttling](/graph/throttling)
| **Issue Relay Configuration** | 5 | 30000| ### Action to take
-We always recommend you acquire tokens in advance of starting other transactions like creating a relay connection.
+We recommend acquiring tokens before starting other transactions, like creating a relay connection.
For more information, see the [network traversal concept overview](./network-traversal.md) page.
-## Still need help?
-See the [help and support](../support.md) options available to you.
- ## Next steps
+See the [help and support](../support.md) options.
communication-services Teams Interop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-interop.md
Applications can implement both authentication models and leave the choice of au
|||| |Target user base|Customers|Enterprise| |Identity provider|Any|Azure Active Directory|
+| Display name |Any with the suffix "(External)"| Azure Active Directory user's value of the property "Display name" |
|Authentication & authorization|Custom*| Azure Active Directory and custom*| |Calling available via | Communication Services Calling SDKs | Communication Services Calling SDKs | |Chat is available via | Communication Services Chat SDKs | Graph API |
confidential-computing Overview Azure Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/overview-azure-products.md
Verifying that applications are running confidentially forms the very foundation
Technologies like [Intel Software Guard Extensions](https://www.intel.com.au/content/www/au/en/architecture-and-technology/software-guard-extensions-enhanced-data-protection.html) (Intel SGX) or [AMD Secure Encrypted Virtualization](https://www.amd.com/en/processors/amd-secure-encrypted-virtualization) (SEV-SNP) are recent CPU improvements supporting confidential computing implementations. These technologies are designed as virtualization extensions and provide feature sets including memory encryption and integrity, CPU-state confidentiality and integrity, and attestation, for building the confidential computing threat model. Azure confidential computing leverages these technologies in the following computation resources: -- [Confidential VMs with Intel SGX application enclaves](confidential-computing-enclaves.md). Azure offers the [DCsv2](../virtual-machines/dcv2-series.md), [DCsv3, and DCdsv3](../virtual-machines/dcv3-series.md) series built on Intel SGX technology for hardware-based enclave creation. You can build secure enclave-based applications to run in a series of VMs to protect your application data and code in use.
+- [VMs with Intel SGX application enclaves](confidential-computing-enclaves.md). Azure offers the [DCsv2](../virtual-machines/dcv2-series.md), [DCsv3, and DCdsv3](../virtual-machines/dcv3-series.md) series built on Intel SGX technology for hardware-based enclave creation. You can build secure enclave-based applications to run in a series of VMs to protect your application data and code in use.
- [App-enclave aware containers](enclave-aware-containers.md) running on Azure Kubernetes Service (AKS). Confidential computing nodes on AKS use Intel SGX to create isolated enclave environments in the nodes between each container application.
container-instances Container Instances Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-github-action.md
Get the resource ID of your container registry. Substitute the name of your regi
```azurecli registryId=$(az acr show \ --name <registry-name> \
+ --resource-group <resource-group-name> \
--query id --output tsv) ```
cosmos-db Linux Emulator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/linux-emulator.md
Use the following steps to run the emulator on Linux:
| Ports: `-p` | | Currently, only ports 8081 and 10251-10255 are needed by the emulator endpoint. | | `AZURE_COSMOS_EMULATOR_PARTITION_COUNT` | 10 | Controls the total number of physical partitions, which in return controls the number of containers that can be created and can exist at a given point in time. We recommend starting small to improve the emulator start up time, i.e 3. | | Memory: `-m` | | On memory, 3 GB or more is required. |
-| Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores. At least two cores are recommended. |
+| Cores: `--cpus` | | Make sure to allocate enough memory and CPU cores. At least four cores are recommended. |
|`AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE` | false | This setting used by itself will help persist the data between container restarts. |
-|`AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` | | This setting enables the MongoDB API endpoint for the emulator and configures the MongoDB server version. (Valid server version values include ``3.2``, ``3.6``, and ``4.0``) |
+|`AZURE_COSMOS_EMULATOR_ENABLE_MONGODB_ENDPOINT` | | This setting enables the MongoDB API endpoint for the emulator and configures the MongoDB server version. (Valid server version values include ``3.2``, ``3.6``, ``4.0``, and ``4.2``.) |
## Troubleshoot issues
cosmos-db Resource Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/resource-locks.md
Previously updated : 05/13/2021 Last updated : 08/31/2022
ms.devlang: azurecli
# Prevent Azure Cosmos DB resources from being deleted or changed+ [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-As an administrator, you may need to lock an Azure Cosmos account, database or container to prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to CanNotDelete or ReadOnly.
+As an administrator, you may need to lock an Azure Cosmos account, database or container. Locks prevent other users in your organization from accidentally deleting or modifying critical resources. You can set the lock level to ``CanNotDelete`` or ``ReadOnly``.
-- **CanNotDelete** means authorized users can still read and modify a resource, but they can't delete the resource.-- **ReadOnly** means authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role.
+| Level | Description |
+| | |
+| ``CanNotDelete`` | Authorized users can still read and modify a resource, but they can't delete the resource. |
+| ``ReadOnly`` | Authorized users can read a resource, but they can't delete or update the resource. Applying this lock is similar to restricting all authorized users to the permissions granted by the **Reader** role. |
## How locks are applied When you apply a lock at a parent scope, all resources within that scope inherit the same lock. Even resources you add later inherit the lock from the parent. The most restrictive lock in the inheritance takes precedence.
-Unlike Azure role-based access control, you use management locks to apply a restriction across all users and roles. To learn about Azure RBAC for Azure Cosmos DB see, [Azure role-based access control in Azure Cosmos DB](role-based-access-control.md).
+Unlike Azure role-based access control, you use management locks to apply a restriction across all users and roles. To learn about role-based access control for Azure Cosmos DB, see [Azure role-based access control in Azure Cosmos DB](role-based-access-control.md).
-Resource Manager locks apply only to operations that happen in the management plane, which consists of operations sent to https://management.azure.com. The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on an Azure Cosmos container prevents you from deleting or modifying the container. It doesn't prevent you from creating, updating, or deleting data in the container. Data transactions are permitted because those operations aren't sent to https://management.azure.com.
+Resource Manager locks apply only to operations that happen in the management plane, which consists of operations sent to <https://management.azure.com>. The locks don't restrict how resources perform their own functions. Resource changes are restricted, but resource operations aren't restricted. For example, a ReadOnly lock on an Azure Cosmos container prevents you from deleting or modifying the container. It doesn't prevent you from creating, updating, or deleting data in the container. Data transactions are permitted because those operations aren't sent to <https://management.azure.com>.
## Manage locks
-> [!WARNING]
-> Resource locks do not work for changes made by users accessing Azure Cosmos DB using account keys unless the Azure Cosmos account is first locked by enabling the disableKeyBasedMetadataWriteAccess property. Care should be taken before enabling this property to ensure it does not break existing applications that make changes to resources using any SDK, Azure portal or 3rd party tools that connect via account keys and modify resources such as changing throughput, updating index policies, etc. To learn more and to go through a checklist to ensure your applications continue to function see, [Preventing changes from the Azure Cosmos DB SDKs](role-based-access-control.md#prevent-sdk-changes)
+Resource locks don't work for changes made by users who access Azure Cosmos DB by using account keys unless the Azure Cosmos account is first locked by enabling the ``disableKeyBasedMetadataWriteAccess`` property. Before you enable this property, make sure it doesn't break existing applications that change resources by using any SDK, the Azure portal, or third-party tools. Enabling this property breaks applications that connect via account keys and modify resources, such as changing throughput or updating index policies. To learn more and to go through a checklist to ensure your applications continue to function, see [Preventing changes from the Azure Cosmos DB SDKs](role-based-access-control.md#prevent-sdk-changes).
+
+### [PowerShell](#tab/powershell)
+
+```powershell-interactive
+$RESOURCE_GROUP_NAME = "myResourceGroup"
+$ACCOUNT_NAME = "my-cosmos-account"
+$LOCK_NAME = "$ACCOUNT_NAME-Lock"
+```
+
+First, update the account to prevent changes by anything that connects via account keys.
-### PowerShell
+```powershell-interactive
+$parameters = @{
+ Name = $ACCOUNT_NAME
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+    DisableKeyBasedMetadataWriteAccess = $true
+}
+Update-AzCosmosDBAccount @parameters
+```
+
+Create a Delete Lock on an Azure Cosmos account resource and all child resources.
```powershell-interactive
-$resourceGroupName = "myResourceGroup"
-$accountName = "my-cosmos-account"
-$lockName = "$accountName-Lock"
-
-# First, update the account to prevent changes by anything that connects via account keys
-Update-AzCosmosDBAccount -ResourceGroupName $resourceGroupName -Name $accountName -DisableKeyBasedMetadataWriteAccess true
-
-# Create a Delete Lock on an Azure Cosmos account resource and all child resources
-New-AzResourceLock `
- -ApiVersion "2020-04-01" `
- -ResourceType "Microsoft.DocumentDB/databaseAccounts" `
- -ResourceGroupName $resourceGroupName `
- -ResourceName $accountName `
- -LockName $lockName `
- -LockLevel "CanNotDelete" # CanNotDelete or ReadOnly
+$parameters = @{
+ ResourceGroupName = $RESOURCE_GROUP_NAME
+ ResourceName = $ACCOUNT_NAME
+ LockName = $LOCK_NAME
+ ApiVersion = "2020-04-01"
+ ResourceType = "Microsoft.DocumentDB/databaseAccounts"
+ LockLevel = "CanNotDelete"
+}
+New-AzResourceLock @parameters
```
-### Azure CLI
+### [Azure CLI](#tab/azure-cli)
```azurecli-interactive resourceGroupName='myResourceGroup' accountName='my-cosmos-account'
-$lockName="$accountName-Lock"
+lockName="$accountName-Lock"
+```
+
+First, update the account to prevent changes by anything that connects via account keys.
+
+```azurecli-interactive
+az cosmosdb update \
+ --name $accountName \
+ --resource-group $resourceGroupName \
+ --disable-key-based-metadata-write-access true
+```
-# First, update the account to prevent changes by anything that connects via account keys
-az cosmosdb update --name $accountName --resource-group $resourceGroupName --disable-key-based-metadata-write-access true
+Create a Delete Lock on an Azure Cosmos account resource.
-# Create a Delete Lock on an Azure Cosmos account resource
-az lock create --name $lockName \
+```azurecli-interactive
+az lock create \
+ --name $lockName \
--resource-group $resourceGroupName \
+ --lock-type 'CanNotDelete' \
    --resource-type Microsoft.DocumentDB/databaseAccounts \
- --lock-type 'CanNotDelete' # CanNotDelete or ReadOnly \
--resource $accountName ```
-### Template
+
-When applying a lock to an Azure Cosmos DB resource, use the following formats:
+### Template
-- name - `{resourceName}/Microsoft.Authorization/{lockName}`-- type - `{resourceProviderNamespace}/{resourceType}/providers/locks`
+When applying a lock to an Azure Cosmos DB resource, use the [``Microsoft.Authorization/locks``](/azure/templates/microsoft.authorization/2017-04-01/locks) Azure Resource Manager (ARM) resource.
-> [!IMPORTANT]
-> When modifying an existing Azure Cosmos account, make sure to include the other properties for your account and child resources when redploying with this property. Do not deploy this template as is or it will reset all of your account properties.
+#### [JSON](#tab/json)
```json
-"resources": [
- {
- "type": "Microsoft.DocumentDB/databaseAccounts",
- "name": "[variables('accountName')]",
- "apiVersion": "2020-04-01",
- "kind": "GlobalDocumentDB",
- "location": "[parameters('location')]",
- "properties": {
- "consistencyPolicy": "[variables('consistencyPolicy')[parameters('defaultConsistencyLevel')]]",
- "locations": "[variables('locations')]",
- "databaseAccountOfferType": "Standard",
- "enableAutomaticFailover": "[parameters('automaticFailover')]",
- "disableKeyBasedMetadataWriteAccess": true
- }
- },
- {
- "type": "Microsoft.DocumentDB/databaseAccounts/providers/locks",
- "apiVersion": "2020-04-01",
- "name": "[concat(variables('accountName'), '/Microsoft.Authorization/siteLock')]",
- "dependsOn": [
- "[resourceId('Microsoft.DocumentDB/databaseAccounts', variables('accountName'))]"
- ],
- "properties": {
+{
+ "type": "Microsoft.Authorization/locks",
+ "apiVersion": "2017-04-01",
+ "name": "cosmoslock",
+ "dependsOn": [
+ "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]"
+ ],
+ "properties": {
"level": "CanNotDelete",
- "notes": "Cosmos account should not be deleted."
- }
- }
-]
+ "notes": "Do not delete Azure Cosmos DB account."
+ },
+ "scope": "[resourceId('Microsoft.DocumentDB/databaseAccounts', parameters('accountName'))]"
+}
+```
+
+#### [Bicep](#tab/bicep)
+
+```bicep
+resource lock 'Microsoft.Authorization/locks@2017-04-01' = {
+ name: 'cosmoslock'
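+  // 'account' is the symbolic name of the existing Azure Cosmos DB account resource declared elsewhere in this Bicep file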
+ scope: account
+ properties: {
+ level: 'CanNotDelete'
+ notes: 'Do not delete Azure Cosmos DB SQL API account.'
+ }
+}
``` ++ ## Samples Manage resource locks for Azure Cosmos DB:
cosmos-db Best Practice Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/best-practice-dotnet.md
Previously updated : 04/01/2022 Last updated : 08/30/2022
# Best practices for Azure Cosmos DB .NET SDK [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-This article walks through the best practices for using the Azure Cosmos DB .NET SDK. Following these practices, will help improve your latency, availability, and boost overall performance.
+This article walks through the best practices for using the Azure Cosmos DB .NET SDK. Following these practices, will help improve your latency, availability, and boost overall performance.
Watch the video below to learn more about using the .NET SDK from a Cosmos DB engineer!
Watch the video below to learn more about using the .NET SDK from a Cosmos DB en
|<input type="checkbox"/> | Ephemeral Port Exhaustion | For sparse or sporadic connections, we set the [`IdleConnectionTimeout`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.idletcpconnectiontimeout?view=azure-dotnet&preserve-view=true) and [`PortReuseMode`](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.portreusemode?view=azure-dotnet&preserve-view=true) to `PrivatePortPool`. The `IdleConnectionTimeout` property helps which control the time unused connections are closed. This will reduce the number of unused connections. By default, idle connections are kept open indefinitely. The value set must be greater than or equal to 10 minutes. We recommended values between 20 minutes and 24 hours. The `PortReuseMode` property allows the SDK to use a small pool of ephemeral ports for various Azure Cosmos DB destination endpoints. | |<input type="checkbox"/> | Use Async/Await | Avoid blocking calls: `Task.Result`, `Task.Wait`, and `Task.GetAwaiter().GetResult()`. The entire call stack is asynchronous in order to benefit from [async/await](/dotnet/csharp/programming-guide/concepts/async/) patterns. Many synchronous blocking calls lead to [Thread Pool starvation](/archive/blogs/vancem/diagnosing-net-core-threadpool-starvation-with-perfview-why-my-service-is-not-saturating-all-cores-or-seems-to-stall) and degraded response times. | |<input type="checkbox"/> | End-to-End Timeouts | To get end-to-end timeouts, you'll need to use both `RequestTimeout` and `CancellationToken` parameters. For more details on timeouts with Cosmos DB [visit](troubleshoot-dot-net-sdk-request-timeout.md) |
-|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](troubleshoot-dot-net-sdk.md#retry-logics) |
+|<input type="checkbox"/> | Retry Logic | A transient error is an error that has an underlying cause that soon resolves itself. Applications that connect to your database should be built to expect these transient errors. To handle them, implement retry logic in your code instead of surfacing them to users as application errors. The SDK has built-in logic to handle these transient failures on retryable requests like read or query operations. The SDK won't retry on writes for transient failures as writes aren't idempotent. The SDK does allow users to configure retry logic for throttles. For details on which errors to retry on [visit](conceptual-resilient-sdk-applications.md#should-my-application-retry-on-errors) |
|<input type="checkbox"/> | Caching database/collection names | Retrieve the names of your databases and containers from configuration or cache them on start. Calls like `ReadDatabaseAsync` or `ReadDocumentCollectionAsync` and `CreateDatabaseQuery` or `CreateDocumentCollectionQuery` will result in metadata calls to the service, which consume from the system-reserved RU limit. `CreateIfNotExist` should also only be used once for setting up the database. Overall, these operations should be performed infrequently. | |<input type="checkbox"/> | Bulk Support | In scenarios where you may not need to optimize for latency, we recommend enabling [Bulk support](https://devblogs.microsoft.com/cosmosdb/introducing-bulk-support-in-the-net-sdk/) for dumping large volumes of data. | | <input type="checkbox"/> | Parallel Queries | The Cosmos DB SDK supports [running queries in parallel](performance-tips-query-sdk.md?pivots=programming-language-csharp) for better latency and throughput on your queries. We recommend setting the `MaxConcurrency` property within the `QueryRequestsOptions` to the number of partitions you have. If you aren't aware of the number of partitions, start by using `int.MaxValue`, which will give you the best latency. Then decrease the number until it fits the resource restrictions of the environment to avoid high CPU issues. Also, set the `MaxBufferedItemCount` to the expected number of results returned to limit the number of pre-fetched results. |
Watch the video below to learn more about using the .NET SDK from a Cosmos DB en
| <input type="checkbox"/> | Document Size | The request charge of a specified operation correlates directly to the size of the document. We recommend reducing the size of your documents as operations on large documents cost more than operations on smaller documents. | | <input type="checkbox"/> | Increase the number of threads/tasks | Because calls to Azure Cosmos DB are made over the network, you might need to vary the degree of concurrency of your requests so that the client application spends minimal time waiting between requests. For example, if you're using the [.NET Task Parallel Library](/dotnet/standard/parallel-programming/task-parallel-library-tpl), create on the order of hundreds of tasks that read from or write to Azure Cosmos DB. | | <input type="checkbox"/> | Enabling Query Metrics | For more logging of your backend query executions, you can enable SQL Query Metrics using our .NET SDK. For instructions on how to collect SQL Query Metrics [visit](profile-sql-api-query.md) |
-| <input type="checkbox"/> | SDK Logging | Use SDK logging to capture extra diagnostics information and troubleshoot latency issues. Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring?view=azure-dotnet&preserve-view=true) in the V2 SDK or [`Diagnostics`](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics?view=azure-dotnet&preserve-view=true) in v3 SDK for more detailed cosmos diagnostic information for the current request to the service. As an example use case, capture Diagnostics on any exception and on completed operations if the `Diagnostics.ElapsedTime` is greater than a designated threshold value (that is, if you have an SLA of 10 seconds, then capture diagnostics when `ElapsedTime` > 10 seconds). It's advised to only use these diagnostics during performance testing. |
+| <input type="checkbox"/> | SDK Logging | Log [SDK diagnostics](#capture-diagnostics) for outstanding scenarios, such as exceptions or when requests go beyond an expected latency. |
| <input type="checkbox"/> | DefaultTraceListener | The DefaultTraceListener poses performance issues on production environments causing high CPU and I/O bottlenecks. Make sure you're using the latest SDK versions or [remove the DefaultTraceListener from your application](performance-tips-dotnet-sdk-v3-sql.md#logging-and-tracing) |
+## Capture diagnostics
++ ## Best practices when using Gateway mode+ Increase `System.Net MaxConnections` per host when you use Gateway mode. Azure Cosmos DB requests are made over HTTPS/REST when you use Gateway mode. They're subject to the default connection limit per hostname or IP address. You might need to set `MaxConnections` to a higher value (from 100 through 1,000) so that the client library can use multiple simultaneous connections to Azure Cosmos DB. In .NET SDK 1.8.0 and later, the default value for `ServicePointManager.DefaultConnectionLimit` is 50. To change the value, you can set `Documents.Client.ConnectionPolicy.MaxConnectionLimit` to a higher value. ## Best practices for write-heavy workloads+ For workloads that have heavy create payloads, set the `EnableContentResponseOnWrite` request option to `false`. The service will no longer return the created or updated resource to the SDK. Normally, because the application has the object that's being created, it doesn't need the service to return it. The header values are still accessible, like a request charge. Disabling the content response can help improve performance, because the SDK no longer needs to allocate memory or serialize the body of the response. It also reduces the network bandwidth usage to further help performance. ## Next steps+ For a sample application that's used to evaluate Azure Cosmos DB for high-performance scenarios on a few client machines, see [Performance and scale testing with Azure Cosmos DB](performance-testing.md). To learn more about designing your application for scale and high performance, see [Partitioning and scaling in Azure Cosmos DB](../partitioning-overview.md).
cosmos-db Create Cosmosdb Resources Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-cosmosdb-resources-portal.md
Azure Cosmos DB is Microsoft's globally distributed multi-model database service. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases, all of which benefit from the global distribution and horizontal scale capabilities at the core of Azure Cosmos DB.
-This quickstart demonstrates how to use the Azure portal to create an Azure Cosmos DB [SQL API](../introduction.md) account, create a document database, and container, and add data to the container.
+This quickstart demonstrates how to use the Azure portal to create an Azure Cosmos DB [SQL API](../introduction.md) account, create a document database and container, and add data to the container. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb).
## Prerequisites
cosmos-db Odbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/odbc-driver.md
Follow these steps to create a view for your data:
You can create as many views as you like. Once you're done defining the views, select **Sample** to sample the data.
+> [!IMPORTANT]
+> The query text in the view definition should not contain line breaks. Otherwise, you'll get a generic error when previewing the view.
++ ## Query with SQL Server Management Studio Once you set up an Azure Cosmos DB ODBC Driver User DSN, you can query Azure Cosmos DB from SQL Server Management Studio (SSMS) by setting up a linked server connection.
You can use your DSN to connect to Azure Cosmos DB with any ODBC-compliant tools
## Next steps - To learn more about Azure Cosmos DB, see [Welcome to Azure Cosmos DB](../introduction.md).-- For more information about creating visualizations in Power BI Desktop, see [Visualization types in Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-visualization-types-for-reports-and-q-and-a/).
+- For more information about creating visualizations in Power BI Desktop, see [Visualization types in Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-visualization-types-for-reports-and-q-and-a/).
cosmos-db Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/quick-create-bicep.md
[!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
-Azure Cosmos DB is MicrosoftΓÇÖs fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. This quickstart focuses on the process of deploying a Bicep file to create an Azure Cosmos database and a container within that database. You can later store data in this container.
+Azure Cosmos DB is Microsoft's fast NoSQL database with open APIs for any scale. You can use Azure Cosmos DB to quickly create and query key/value databases, document databases, and graph databases. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). This quickstart focuses on the process of deploying a Bicep file to create an Azure Cosmos database and a container within that database. You can later store data in this container.
[!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)]
cosmos-db Sql Api Dotnet Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-dotnet-application.md
> * [Node.js](sql-api-nodejs-application.md) > * [Python](./create-sql-api-python.md)
-This tutorial shows you how to use Azure Cosmos DB to store and access data from an ASP.NET MVC application that is hosted on Azure. In this tutorial, you use the .NET SDK V3. The following image shows the web page that you'll build by using the sample in this article:
+This tutorial shows you how to use Azure Cosmos DB to store and access data from an ASP.NET MVC application that is hosted on Azure. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). In this tutorial, you use the .NET SDK V3. The following image shows the web page that you'll build by using the sample in this article:
:::image type="content" source="./media/sql-api-dotnet-application/asp-net-mvc-tutorial-image01.png" alt-text="Screenshot of the todo list MVC web application created by this tutorial - ASP NET Core MVC tutorial step by step":::
This tutorial covers:
Before following the instructions in this article, make sure that you have the following resources:
-* An active Azure account. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* An active Azure account. If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
cosmos-db Sql Api Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-get-started.md
> * [Node.js](sql-api-nodejs-get-started.md) >
-Welcome to the Azure Cosmos DB SQL API get started tutorial. After following this tutorial, you'll have a console application that creates and queries Azure Cosmos DB resources.
+Welcome to the Azure Cosmos DB SQL API get started tutorial. After following this tutorial, you'll have a console application that creates and queries Azure Cosmos DB resources.
This tutorial uses version 3.0 or later of the [Azure Cosmos DB .NET SDK](https://www.nuget.org/packages/Microsoft.Azure.Cosmos) and [.NET 6](https://dotnet.microsoft.com/download).
Now let's get started!
## Prerequisites
-An active Azure account. If you don't have one, you can sign up for a [free account](https://azure.microsoft.com/free/).
+An active Azure account. If you don't have one, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card or an Azure subscription.
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
That's it, build it, and you're on your way!
* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-[cosmos-db-create-account]: create-sql-api-java.md#create-a-database-account
+[cosmos-db-create-account]: create-sql-api-java.md#create-a-database-account
cosmos-db Sql Api Nodejs Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-nodejs-application.md
As a developer, you might have applications that use NoSQL document data. You can use a SQL API account in Azure Cosmos DB to store and access this document data. This Node.js tutorial shows you how to store and access data from a SQL API account in Azure Cosmos DB by using a Node.js Express application that is hosted on the Web Apps feature of Microsoft Azure App Service. In this tutorial, you will build a web-based application (Todo app) that allows you to create, retrieve, and complete tasks. The tasks are stored as JSON documents in Azure Cosmos DB.
-This tutorial demonstrates how to create a SQL API account in Azure Cosmos DB by using the Azure portal. You then build and run a web app that is built on the Node.js SDK to create a database and container, and add items to the container. This tutorial uses JavaScript SDK version 3.0.
+This tutorial demonstrates how to create a SQL API account in Azure Cosmos DB by using the Azure portal. Without a credit card or an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb). You then build and run a web app that is built on the Node.js SDK to create a database and container, and add items to the container. This tutorial uses JavaScript SDK version 3.0.
This tutorial covers the following tasks:
This tutorial covers the following tasks:
Before following the instructions in this article, ensure that you have the following resources:
-* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+* If you don't have an Azure subscription, you can set up a free [Try Azure Cosmos DB account](https://aka.ms/trycosmosdb) without a credit card.
[!INCLUDE [cosmos-db-emulator-docdb-api](../includes/cosmos-db-emulator-docdb-api.md)]
When these resources are no longer needed, you can delete the resource group, Az
[Node.js]: https://nodejs.org/ [Git]: https://git-scm.com/
-[GitHub]: https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-todo-app
+[GitHub]: https://github.com/Azure-Samples/azure-cosmos-db-sql-api-nodejs-todo-app
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
description: Learn how to diagnose and fix slow requests when you use Azure Cosm
Previously updated : 08/19/2022 Last updated : 08/30/2022
When you design your application, [follow the .NET SDK best practices](performan
Consider the following when developing your application: * The application should be in the same region as your Azure Cosmos DB account.
-* Your [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion), [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions), or [PreferredLocations](/dotnet/api/microsoft.azure.documents.client.connectionpolicy.preferredlocations) for V2 SDK configuration is should reflect your regional preference and point to the region your application is deployed on.
+* Your [ApplicationRegion](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationregion) or [ApplicationPreferredRegions](/dotnet/api/microsoft.azure.cosmos.cosmosclientoptions.applicationpreferredregions) should reflect your regional preference and point to the region your application is deployed in (see the sketch after this list).
* There might be a bottleneck on the Network interface because of high traffic. If the application is running on Azure Virtual Machines, there are possible workarounds: * Consider using a [Virtual Machine with Accelerated Networking enabled](../../virtual-network/create-vm-accelerated-networking-powershell.md). * Enable [Accelerated Networking on an existing Virtual Machine](../../virtual-network/create-vm-accelerated-networking-powershell.md#enable-accelerated-networking-on-existing-vms).
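For illustration, here's a minimal sketch of setting those options when you create the client; the endpoint, key, and region values are placeholders, not values from this article:

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Cosmos;

// Placeholder endpoint and key; substitute your own account values.
CosmosClient client = new CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    "<your-account-key>",
    new CosmosClientOptions
    {
        // Point the SDK at the region your application is deployed in.
        ApplicationRegion = Regions.WestUS2,

        // Or, for multi-region accounts, supply an ordered list of preferred regions:
        // ApplicationPreferredRegions = new List<string> { Regions.WestUS2, Regions.EastUS }
    });
```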
If you need to verify that a database or container exists, don't do so by callin
## Slow requests on bulk mode
-[Bulk mode](tutorial-sql-api-dotnet-bulk-import.md) is a throughput optimized mode meant for high data volume operations, not a latency optimized mode; it's meant to saturate the available throughput. If you are experiencing slow requests when using bulk mode make sure that:
+[Bulk mode](tutorial-sql-api-dotnet-bulk-import.md) is a throughput-optimized mode meant for high data volume operations, not a latency-optimized mode; it's meant to saturate the available throughput. If you're experiencing slow requests when using bulk mode, make sure that (see the sketch after this list):
* Your application is compiled in Release configuration.
-* You are not measuring latency while debugging the application (no debuggers attached).
+* You aren't measuring latency while debugging the application (no debuggers attached).
* The volume of operations is high; don't use bulk mode for fewer than 1,000 operations. Your provisioned throughput dictates how many operations per second you can process, and your goal with bulk mode is to use as much of it as possible.
-* Monitor the container for [throttling scenarios](troubleshoot-request-rate-too-large.md). If the container is getting heavily throttled it means the volume of data is larger than your provisioned throughput, you need to either scale up the container or reduce the volume of data (maybe create smaller batches of data at a time).
-* You are correctly using the `async/await` pattern to [process all concurrent Tasks](tutorial-sql-api-dotnet-bulk-import.md#step-6-populate-a-list-of-concurrent-tasks) and not [blocking any async operation](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
+* Monitor the container for [throttling scenarios](troubleshoot-request-rate-too-large.md). If the container is heavily throttled, the volume of data is larger than your provisioned throughput can handle; either scale up the container or reduce the volume of data (for example, create smaller batches of data at a time).
+* You're correctly using the `async/await` pattern to [process all concurrent Tasks](tutorial-sql-api-dotnet-bulk-import.md#step-6-populate-a-list-of-concurrent-tasks) and not [blocking any async operation](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
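The following minimal sketch shows the general bulk pattern under those constraints; the endpoint, key, database, container, and the `Item` type with its `pk` partition key property are placeholder assumptions, not names from this article:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Enable bulk mode so the SDK can group the queued operations.
CosmosClient bulkClient = new CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    "<your-account-key>",
    new CosmosClientOptions { AllowBulkExecution = true });

Container container = bulkClient.GetContainer("<database>", "<container>");

// Queue a large number of operations without awaiting each one individually.
List<Task> concurrentTasks = new List<Task>();
foreach (Item item in items)
{
    concurrentTasks.Add(container.CreateItemAsync(item, new PartitionKey(item.pk)));
}

// Await them all at once so the SDK can saturate the provisioned throughput.
await Task.WhenAll(concurrentTasks);
```

Run this pattern in a Release build with no debugger attached so the measured latency reflects the SDK's actual batching behavior.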
-## <a name="capture-diagnostics"></a>Capture the diagnostics
+## Capture diagnostics
-All the responses in the SDK, including `CosmosException`, have a `Diagnostics` property. This property records all the information related to the single request, including if there were retries or any transient failures.
-
-The diagnostics are returned as a string. The string changes with each version, as it's improved for troubleshooting different scenarios. With each version of the SDK, the string will have breaking changes to the formatting. Don't parse the string to avoid breaking changes. The following code sample shows how to read diagnostic logs by using the .NET SDK:
-
-```c#
-try
-{
- ItemResponse<Book> response = await this.Container.CreateItemAsync<Book>(item: testItem);
- if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan)
- {
- // Log the response.Diagnostics.ToString() and add any additional info necessary to correlate to other logs
- }
-}
-catch (CosmosException cosmosException)
-{
- // Log the full exception including the stack trace with: cosmosException.ToString()
-
- // The Diagnostics can be logged separately if required with: cosmosException.Diagnostics.ToString()
-}
-
-// When using Stream APIs
-ResponseMessage response = await this.Container.CreateItemStreamAsync(partitionKey, stream);
-if (response.Diagnostics.GetClientElapsedTime() > ConfigurableSlowRequestTimeSpan || !response.IsSuccessStatusCode)
-{
- // Log the diagnostics and add any additional info necessary to correlate to other logs with: response.Diagnostics.ToString()
-}
-```
## Diagnostics in version 3.19 and later The JSON structure has breaking changes with each version of the SDK, which makes it unsafe to parse. The JSON represents a tree structure of the request going through the SDK. The following sections cover a few key things to look at.
-### <a name="cpu-history"></a>CPU history
+### CPU history
High CPU utilization is the most common cause of slow requests. For optimal latency, CPU usage should be roughly 40 percent. Use 10 seconds as the interval to monitor maximum (not average) CPU utilization. CPU spikes are more common with cross-partition queries, where the requests might open multiple connections for a single query.
The timeouts include diagnostics, which contain the following, for example:
* If the `cpu` values are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size. * If the `threadInfo/isThreadStarving` nodes have `True` values, the cause is thread starvation. In this case, the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
-* If the `dateUtc` time between measurements is not approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent task that is enqueued in the thread pool every 10 seconds. If the time between measurements is longer, it indicates that the async tasks aren't able to be processed in a timely fashion. The most common scenario is when your application code is [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
+* If the `dateUtc` time between measurements isn't approximately 10 seconds, it also indicates contention on the thread pool. CPU is measured as an independent task that is enqueued in the thread pool every 10 seconds. If the time between measurements is longer, it indicates that the async tasks aren't able to be processed in a timely fashion. The most common scenario is when your application code is [blocking calls over async code](https://github.com/davidfowl/AspNetCoreDiagnosticScenarios/blob/master/AsyncGuidance.md#avoid-using-taskresult-and-taskwait).
# [Older SDK](#tab/cpu-old)
CPU count: 8)
``` * If the CPU measurements are over 70 percent, the timeout is likely to be caused by CPU exhaustion. In this case, the solution is to investigate the source of the high CPU utilization and reduce it, or scale the machine to a larger resource size.
-* If the CPU measurements are not happening every 10 seconds (for example, there are gaps, or measurement times indicate longer times in between measurements), the cause is thread starvation. In this case the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
+* If the CPU measurements aren't happening every 10 seconds (for example, there are gaps, or measurement times indicate longer times in between measurements), the cause is thread starvation. In this case the solution is to investigate the source or sources of the thread starvation (potentially locked threads), or scale the machine or machines to a larger resource size.
CPU count: 8)
The client application that uses the SDK should be scaled up or out.
-### <a name="httpResponseStats"></a>HttpResponseStats
+### HttpResponseStats
`HttpResponseStats` are requests that go to the [gateway](sql-sdk-connection-modes.md). Even in direct mode, the SDK gets all the metadata information from the gateway.
If the request is slow, first verify that none of the previous suggestions yield
] ```
-### <a name="storeResult"></a>StoreResult
+### StoreResult
`StoreResult` represents a single request to Azure Cosmos DB, by using direct mode with the TCP protocol.
For multiple store results for a single request, be aware of the following:
* Strong consistency and bounded staleness consistency always have at least two store results. * Check the status code of each `StoreResult`. The SDK retries automatically on multiple different [transient failures](troubleshoot-dot-net-sdk-request-timeout.md). The SDK is constantly improved to cover more scenarios.
-### <a name="rntbdRequestStats"></a>RntbdRequestStats
+### RntbdRequestStats
Show the time for the different stages of sending and receiving a request in the transport layer.
Show the time for the different stages of sending and receiving a request in the
* A large *transit time* points to a networking problem. Compare this number to `BELatencyInMs`. If `BELatencyInMs` is small, the time was spent on the network and not on the Azure Cosmos DB service. * A large *received time* might be caused by a thread starvation problem. This is the time between receiving the response and returning the result.
-### <a name="ServiceEndpointStatistics"></a>ServiceEndpointStatistics
+### ServiceEndpointStatistics
+ Information about a particular backend server. The SDK can open multiple connections to a single backend server depending upon the number of pending requests and the MaxConcurrentRequestsPerConnection.
-* `inflightRequests` The number of pending requests to a backend server (maybe from different partitions). A high number may to lead to more traffic and higher latencies.
-* `openConnections` is the total Number of connections open to a single backend server. This can be useful to show SNAT port exhausion if this number is very high.
+* `inflightRequests`: The number of pending requests to a backend server (possibly from different partitions). A high number may lead to more traffic and higher latencies.
+* `openConnections`: The total number of connections open to a single backend server. A very high number can indicate SNAT port exhaustion.
+
+### ConnectionStatistics
-### <a name="ConnectionStatistics"></a>ConnectionStatistics
Information about the particular connection (new or old) the request gets assigned to. * `waitforConnectionInit`: The current request was waiting for new connection initialization to complete. This will lead to higher latencies.
-* `callsPendingReceive`: Number of calls that was pending receive before this call was sent. A high number can show us that there were a lot of calls before this call and it may lead to higher latencies. If this number is high it points to a head of line blocking issue possibly caused by another request like query or feed operation that is taking a long time to process. Try lowering the CosmosClientOptions.MaxRequestsPerTcpConnection to increase the number of channels.
-* `LastSentTime`: Time of last request that was sent to this server. This along with LastReceivedTime can be used to see connectivity or endpoint issues. For example if there are a lot of receive timeouts, Sent time will be much larger than the Receive time.
+* `callsPendingReceive`: Number of calls that were pending receive before this call was sent. A high number shows that many calls were ahead of this one, and it may lead to higher latencies. A persistently high number points to a head-of-line blocking issue, possibly caused by another request, such as a query or feed operation, that takes a long time to process. Try lowering `CosmosClientOptions.MaxRequestsPerTcpConnection` to increase the number of channels (see the sketch after this list).
+* `LastSentTime`: Time of the last request that was sent to this server. This value, together with `lastReceive`, can be used to spot connectivity or endpoint issues. For example, if there are many receive timeouts, the sent time will be much later than the receive time.
* `lastReceive`: Time of last request that was received from this server * `lastSendAttempt`: Time of the last send attempt
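As a rough sketch, lowering the per-connection request limit looks like the following; the endpoint, key, and the value `10` are placeholders, not recommendations from this article:

```csharp
using Microsoft.Azure.Cosmos;

// Fewer requests per TCP connection means the SDK opens more channels,
// which can relieve the head-of-line blocking indicated by a high callsPendingReceive.
CosmosClient client = new CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    "<your-account-key>",
    new CosmosClientOptions
    {
        MaxRequestsPerTcpConnection = 10
    });
```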
-### <a name="Request and response sizes"></a>Request and response sizes
+### Request and response sizes
+ * `requestSizeInBytes`: The total size of the request sent to Cosmos DB * `responseMetadataSizeInBytes`: The size of headers returned from Cosmos DB * `responseBodySizeInBytes`: The size of content returned from Cosmos DB
Contact [Azure support](https://aka.ms/azure-support).
## Next steps * [Diagnose and troubleshoot](troubleshoot-dot-net-sdk.md) problems when you use the Azure Cosmos DB .NET SDK.
-* Learn about performance guidelines for [.NET v3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET v2](performance-tips.md).
+* Learn about performance guidelines for the [.NET SDK](performance-tips-dotnet-sdk-v3-sql.md).
+* Learn about the best practices for the [.NET SDK](best-practice-dotnet.md)
cosmos-db Troubleshoot Dot Net Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk.md
Title: Diagnose and troubleshoot issues when using Azure Cosmos DB .NET SDK
description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues when using .NET SDK. Previously updated : 08/19/2022 Last updated : 08/30/2022
The .NET SDK provides client-side logical representation to access the Azure Cos
Consider the following checklist before you move your application to production. Using the checklist will prevent several common issues you might see. You can also quickly diagnose when an issue occurs:
-* Use the latest [SDK](sql-api-sdk-dotnet-standard.md). Preview SDKs should not be used for production. This will prevent hitting known issues that are already fixed.
-* Review the [performance tips](performance-tips.md), and follow the suggested practices. This will help prevent scaling, latency, and other performance issues.
-* Enable the SDK logging to help you troubleshoot an issue. Enabling the logging may affect performance so it's best to enable it only when troubleshooting issues. You can enable the following logs:
-* [Log metrics](../monitor-cosmos-db.md) by using the Azure portal. Portal metrics show the Azure Cosmos DB telemetry, which is helpful to determine if the issue corresponds to Azure Cosmos DB or if it's from the client side.
-* Log the [diagnostics string](/dotnet/api/microsoft.azure.documents.client.resourceresponsebase.requestdiagnosticsstring) in the V2 SDK or [diagnostics](/dotnet/api/microsoft.azure.cosmos.responsemessage.diagnostics) in V3 SDK from the point operation responses.
-* Log the [SQL Query Metrics](sql-api-query-metrics.md) from all the query responses
-* Follow the setup for [SDK logging]( https://github.com/Azure/azure-cosmos-dotnet-v2/blob/master/docs/documentdb-sdk_capture_etl.md)
+* Use the latest [SDK](sql-api-sdk-dotnet-standard.md). Preview SDKs shouldn't be used for production. This will prevent hitting known issues that are already fixed.
+* Review the [performance tips](performance-tips-dotnet-sdk-v3-sql.md), and follow the suggested practices. This will help prevent scaling, latency, and other performance issues.
+* Enable the SDK logging to help you troubleshoot an issue. Enabling the logging may affect performance so it's best to enable it only when troubleshooting issues. You can enable the following logs:
+ * [Log metrics](../monitor-cosmos-db.md) by using the Azure portal. Portal metrics show the Azure Cosmos DB telemetry, which is helpful to determine if the issue corresponds to Azure Cosmos DB or if it's from the client side.
+ * Log the [diagnostics string](#capture-diagnostics) from the operations and/or exceptions.
-Take a look at the [Common issues and workarounds](#common-issues-workarounds) section in this article.
+Take a look at the [Common issues and workarounds](#common-issues-and-workarounds) section in this article.
-Check the [GitHub issues section](https://github.com/Azure/azure-cosmos-dotnet-v2/issues) that's actively monitored. Check to see if any similar issue with a workaround is already filed. If you didn't find a solution, then file a GitHub issue. You can open a support tick for urgent issues.
+Check the [GitHub issues section](https://github.com/Azure/azure-cosmos-dotnet-v3/issues) that's actively monitored. Check to see if any similar issue with a workaround is already filed. If you didn't find a solution, then file a GitHub issue. You can open a support ticket for urgent issues.
+## Capture diagnostics
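As a minimal sketch of this step, assuming an existing `container`, an item type `Book`, a `testItem` instance, and an application-defined latency threshold, you might capture the diagnostics like this:

```csharp
using System;
using Microsoft.Azure.Cosmos;

// Application-defined threshold for what counts as a slow request (placeholder value).
TimeSpan slowRequestThreshold = TimeSpan.FromMilliseconds(500);

try
{
    ItemResponse<Book> response = await container.CreateItemAsync<Book>(item: testItem);
    if (response.Diagnostics.GetClientElapsedTime() > slowRequestThreshold)
    {
        // Log response.Diagnostics.ToString() along with any correlation identifiers.
    }
}
catch (CosmosException cosmosException)
{
    // Log the full exception with cosmosException.ToString(); it includes the diagnostics.
}
```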
-## <a name="common-issues-workarounds"></a>Common issues and workarounds
+
+## Common issues and workarounds
### General suggestions
-* Run your app in the same Azure region as your Azure Cosmos DB account, whenever possible.
+
+* Follow any `aka.ms` link included in the exception details.
+* Run your app in the same Azure region as your Azure Cosmos DB account, whenever possible.
* You may run into connectivity/availability issues due to lack of resources on your client machine. We recommend monitoring your CPU utilization on nodes running the Azure Cosmos DB client, and scaling up/out if they're running at high load. ### Check the portal metrics
-Checking the [portal metrics](../monitor-cosmos-db.md) will help determine if it's a client-side issue or if there is an issue with the service. For example, if the metrics contain a high rate of rate-limited requests (HTTP status code 429) which means the request is getting throttled then check the [Request rate too large](troubleshoot-request-rate-too-large.md) section.
-## Retry design <a id="retry-logics"></a><a id="retry-design"></a><a id="error-codes"></a>
+Checking the [portal metrics](../monitor-cosmos-db.md) will help determine if it's a client-side issue or if there's an issue with the service. For example, if the metrics contain a high rate of rate-limited requests (HTTP status code 429), which means the requests are getting throttled, check the [Request rate too large](troubleshoot-request-rate-too-large.md) section.
+
+### Retry design
+ See our guide to [designing resilient applications with Azure Cosmos SDKs](conceptual-resilient-sdk-applications.md) for guidance on how to design resilient applications and to learn about the SDK's retry semantics.
-### <a name="snat"></a>Azure SNAT (PAT) port exhaustion
+### SNAT
If your app is deployed on [Azure Virtual Machines without a public IP address](../../load-balancer/load-balancer-outbound-connections.md), by default [Azure SNAT ports](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports) establish connections to any endpoint outside of your VM. The number of connections allowed from the VM to the Azure Cosmos DB endpoint is limited by the [Azure SNAT configuration](../../load-balancer/load-balancer-outbound-connections.md#preallocatedports). This situation can lead to connection throttling, connection closure, or the previously mentioned [request timeouts](troubleshoot-dot-net-sdk-request-timeout.md). Azure SNAT ports are used only when your VM with a private IP address connects to a public IP address. There are two workarounds to avoid the Azure SNAT limitation (provided you're already using a single client instance across the entire application):
-* Add your Azure Cosmos DB service endpoint to the subnet of your Azure Virtual Machines virtual network. For more information, see [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
+* Add your Azure Cosmos DB service endpoint to the subnet of your Azure Virtual Machines virtual network. For more information, see [Azure Virtual Network service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md).
When the service endpoint is enabled, the requests are no longer sent from a public IP to Azure Cosmos DB. Instead, the virtual network and subnet identity are sent. This change might result in firewall drops if only public IPs are allowed. If you use a firewall, when you enable the service endpoint, add a subnet to the firewall by using [Virtual Network ACLs](/previous-versions/azure/virtual-network/virtual-networks-acl). * Assign a [public IP to your Azure VM](../../load-balancer/troubleshoot-outbound-connection.md#configure-an-individual-public-ip-on-vm).
-### <a name="high-network-latency"></a>High network latency
-
-High network latency can be identified by using the diagnostics.
-
-# [V3 SDK](#tab/diagnostics-v3)
-
-Diagnostics can be obtained from any `ResponseMessage`, `ItemResponse`, `FeedResponse`, or `CosmosException` by the `Diagnostics` property:
-
-```csharp
-ItemResponse<MyItem> response = await container.CreateItemAsync<MyItem>(item);
-Console.WriteLine(response.Diagnostics.ToString());
-```
-
-# [V2 SDK](#tab/diagnostics-v2)
-
-The diagnostics are available when the client is configured in [direct mode](sql-sdk-connection-modes.md), through the `RequestDiagnosticsString` property:
-
-```csharp
-ResourceResponse<Document> response = await client.ReadDocumentAsync(documentLink, new RequestOptions() { PartitionKey = new PartitionKey(partitionKey) });
-Console.WriteLine(response.RequestDiagnosticsString);
-```
-
+### High network latency
-Please see our [latency troubleshooting guide](troubleshoot-dot-net-sdk-slow-request.md) once you have obtained diagnostics for the affected operations.
+See our [latency troubleshooting guide](troubleshoot-dot-net-sdk-slow-request.md) for details on diagnosing and fixing slow requests.
### Common query issues
-The [query metrics](sql-api-query-metrics.md) will help determine where the query is spending most of the time. From the query metrics, you can see how much of it is being spent on the back-end vs the client. Learn more on the [query performance guide](performance-tips-query-sdk.md?pivots=programming-language-csharp).
+The [query metrics](sql-api-query-metrics.md) will help determine where the query is spending most of the time. From the query metrics, you can see how much of it is spent on the back end versus the client. Learn more on the [query performance guide](performance-tips-query-sdk.md?pivots=programming-language-csharp).
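As a sketch (assuming an existing `container` and a placeholder query), the diagnostics, which include the query metrics, can be read from each page of a query:

```csharp
using System;
using Microsoft.Azure.Cosmos;

QueryDefinition query = new QueryDefinition("SELECT * FROM c WHERE c.status = @status")
    .WithParameter("@status", "open");

FeedIterator<dynamic> iterator = container.GetItemQueryIterator<dynamic>(query);
while (iterator.HasMoreResults)
{
    FeedResponse<dynamic> page = await iterator.ReadNextAsync();

    // The diagnostics contain the back-end and client-side query metrics for this page.
    Console.WriteLine(page.Diagnostics.ToString());
}
```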
If you encounter the following error: `Unable to load DLL 'Microsoft.Azure.Cosmos.ServiceInterop.dll' or one of its dependencies:` and are using Windows, you should upgrade to the latest Windows version. ## Next steps
-* Learn about Performance guidelines for [.NET V3](performance-tips-dotnet-sdk-v3-sql.md) and [.NET V2](performance-tips.md)
-* Learn about the [Reactor-based Java SDKs](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-pattern-guide.md)
+* Learn about performance guidelines for the [.NET SDK](performance-tips-dotnet-sdk-v3-sql.md).
+* Learn about best practices for the [.NET SDK](best-practice-dotnet.md).
<!--Anchors--> [Common issues and workarounds]: #common-issues-workarounds
-[Enable client SDK logging]: #logging
[Azure SNAT (PAT) port exhaustion]: #snat [Production check list]: #production-check-list
data-factory Wrangling Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/wrangling-overview.md
Last updated 07/29/2021
[!INCLUDE[appliesto-adf-xxx-md](includes/appliesto-adf-xxx-md.md)]
-Organizations need to the ability to explore their critical business data for data preparation and wrangling in order to provide accurate analysis of complex data that continues to grow every day. Data preparation is required so that organizations can use the data in various business processes and reduce the time to value.
+Organizations need to have the ability to explore their critical business data for data preparation and wrangling in order to provide accurate analysis of complex data that continues to grow every day. Data preparation is required so that organizations can use the data in various business processes and reduce the time to value.
Data Factory empowers you with code-free data preparation at cloud scale iteratively using Power Query. Data Factory integrates with [Power Query Online](/power-query/) and makes Power Query M functions available as a pipeline activity.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
Previously updated : 05/25/2022 Last updated : 08/31/2022 # Customer intent: As an IT admin, I need to understand how to configure compute on an Azure Stack Edge Pro GPU device so that I can use it to transform data before I send it to Azure.
You can create and manage virtual machines (VMs) on an Azure Stack Edge Pro GPU device by using the Azure portal, templates, and Azure PowerShell cmdlets, and via the Azure CLI or Python scripts. This article describes how to create and manage a VM on your Azure Stack Edge Pro GPU device by using the Azure portal. > [!IMPORTANT]
-> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication click [here](/articles/active-directory/authentication/howto-mfa-userdevicesettings.md)
+> You will need to enable multifactor authentication for the user who manages the VMs and images that are deployed on your device from the cloud. The cloud operations will fail if the user doesn't have multifactor authentication enabled. For steps to enable multifactor authentication, see [Manage authentication methods for Azure AD Multi-Factor Authentication](/azure/active-directory/authentication/howto-mfa-userdevicesettings).
## VM deployment workflow
defender-for-cloud Quickstart Automation Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-automation-alert.md
Title: Create a security automation for specific security alerts by using an Azure Resource Manager template (ARM template) or Bicep description: Learn how to create a Microsoft Defender for Cloud automation to trigger a logic app, which will be triggered by specific Defender for Cloud alerts by using an Azure Resource Manager template (ARM template) or Bicep.-- Previously updated : 05/16/2022 Last updated : 08/31/2022 # Quickstart: Create an automatic response to a specific security alert using an ARM template or Bicep
If you don't have an Azure subscription, create a [free account](https://azure.m
For a list of the roles and permissions required to work with Microsoft Defender for Cloud's workflow automation feature, see [workflow automation](workflow-automation.md).
+The examples in this quickstart assume you have an existing Logic App. To deploy the example, you pass in parameters that contain the logic app name and resource group. For information about deploying a logic app, see [Quickstart: Create and deploy a Consumption logic app workflow in multi-tenant Azure Logic Apps with Bicep](../logic-apps/quickstart-create-deploy-bicep.md) or [Quickstart: Create and deploy a Consumption logic app workflow in multi-tenant Azure Logic Apps with an ARM template](../logic-apps/quickstart-create-deploy-azure-resource-manager-template.md).
+ ## ARM template tutorial [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Updates in August include:
- [Vulnerabilities for running images are now visible with Defender for Containers on your Windows containers](#vulnerabilities-for-running-images-are-now-visible-with-defender-for-containers-on-your-windows-containers) - [Auto-deployment of Azure Monitor Agent (Preview)](#auto-deployment-of-azure-monitor-agent-preview)
+- [Deprecated VM alerts regarding suspicious activity related to a Kubernetes cluster](#deprecated-vm-alerts-regarding-suspicious-activity-related-to-a-kubernetes-cluster)
### Vulnerabilities for running images are now visible with Defender for Containers on your Windows containers
The [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) (AMA) coll
The [Azure Monitor Agent is now integrated](enable-data-collection.md?tabs=autoprovision-ama#tabpanel_1_autoprovision-ama) into Microsoft Defender for Cloud. You can [auto-provision Azure Monitor Agent](auto-deploy-azure-monitoring-agent.md) to all of your cloud and on-premises servers with Defender for Cloud. Also, Defender for Cloud protections can use data collected by the Azure Monitor Agent.
+### Deprecated VM alerts regarding suspicious activity related to a Kubernetes cluster
+
+The following table lists the alerts that were deprecated:
+
+| Alert name | Description | Tactics | Severity |
+|--|--|--|--|
+| **Docker build operation detected on a Kubernetes node** <br>(VM_ImageBuildOnNode) | Machine logs indicate a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | Defense Evasion | Low |
+| **Suspicious request to Kubernetes API** <br>(VM_KubernetesAPI) | Machine logs indicate that a suspicious request was made to the Kubernetes API. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container. | LateralMovement | Medium |
+| **SSH server is running inside a container** <br>(VM_ContainerSSH) | Machine logs indicate that an SSH server is running inside a Docker container. While this behavior can be intentional, it frequently indicates that a container is misconfigured or breached. | Execution | Medium |
+
+These alerts are used to notify a user about suspicious activity connected to a Kubernetes cluster. The alerts will be replaced with matching alerts that are part of the Microsoft Defender for Cloud Container alerts (`K8S.NODE_ImageBuildOnNode`, `K8S.NODE_KubernetesAPI`, and `K8S.NODE_ContainerSSH`), which provide improved fidelity and comprehensive context to investigate and act on the alerts. Learn more about alerts for [Kubernetes clusters](alerts-reference.md).
+ ## July 2022 Updates in July include:
defender-for-iot Concept Micro Agent Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-micro-agent-configuration.md
Title: Micro agent configurations (Preview)
-description: The collector sends all current data immediately after any configuration change is made. The changes are then applied.
+description: The collector sends all current data immediately after any configuration change is made. The configuration changes are then applied.
Last updated 05/03/2022
defender-for-iot How To Create And Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-and-manage-users.md
This section describes permissions available to sensor Administrators, Security
| Manage alerts: acknowledge, learn, and pin | | Γ£ô | Γ£ô | | View events in a timeline | | Γ£ô | Γ£ô | | Authorize devices, known scanning devices, programming devices | | Γ£ô | Γ£ô |
-| Delete devices | | | Γ£ô |
+| Merge and delete devices | | | Γ£ô |
| View investigation data | Γ£ô | Γ£ô | Γ£ô | | Manage system settings | | | Γ£ô | | Manage users | | | Γ£ô |
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
Title: View and manage alerts in the Microsoft Defender for IoT portal on Azure
-description: View and manage alerts detected by cloud-connected network sensors in the Microsoft Defender for IoT portal on Azure.
+ Title: View and manage alerts in Microsoft Defender for IoT on the Azure portal
+description: View and manage alerts detected by cloud-connected network sensors in Microsoft Defender for IoT on the Azure portal.
Last updated 06/30/2022
> [!IMPORTANT] > The **Alerts** page is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - This article describes how to manage your alerts from Microsoft Defender for IoT on the Azure portal. If you're integrating with Microsoft Sentinel, the alert details and entity information are also sent to Microsoft Sentinel, where you can also view them from the **Alerts** page.
event-grid Subscribe To Partner Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-partner-events.md
You must grant your consent to the partner to create partner topics in a resourc
> For a greater security stance, specify the minimum expiration time that offers the partner enough time to configure your events to flow to Event Grid and to provision your partner topic. > [!NOTE]
-> Event Grid will started enforcing authorization checks to create partner topics or partner destinations around June 30th, 2022.
+> Event Grid started enforcing authorization checks to create partner topics or partner destinations around June 30th, 2022.
The following example shows how to create a partner configuration resource that contains the partner authorization. You must identify the partner by providing either its **partner registration ID** or the **partner name**. Both can be obtained from your partner, but only one of them is required. For your convenience, the following examples leave a sample expiration time in the UTC format.
governance Logic App Calling Arg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/tutorials/logic-app-calling-arg.md
Keep the query above handy as we will need it later when we configure our Logic
1. From the portal menu, select **Logic Apps**, or use the Azure search box at the top of all pages to search for and select **Logic Apps**.
-2. Click the **Create** button on upper left of your screen and continue with creating your Logic App.
+2. Click the **Add** button on upper left of your screen and continue with creating your Logic App.
## Setup a Managed Identity
hpc-cache Troubleshoot Nas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/troubleshoot-nas.md
description: Tips to avoid and fix configuration errors and other problems that
Previously updated : 05/27/2022 Last updated : 08/29/2022
This article includes details about how to check ports and how to enable needed
If the solution to your problem is not included here, please [open a support ticket](hpc-cache-support-ticket.md) so that Microsoft Service and Support can work with you to investigate and solve the problem.
+## Provide sufficient connection threads
+
+Large HPC Cache systems make multiple connection requests to a storage target. For example, if your storage target uses the Ubuntu Linux `nfs-kernel-server` module, the default number of NFS daemon threads can be as low as eight. Increase the number of threads to 128 or 256, which are more reasonable numbers to support a medium or large HPC Cache.
+
+You can check or set the number of threads in Ubuntu by using the RPCNFSDCOUNT value in `/etc/init.d/nfs-kernel-server`.
+ ## Check port settings Azure HPC Cache needs read/write access to several UDP/TCP ports on the back-end NAS storage system. Make sure these ports are accessible on the NAS system and also that traffic is permitted to these ports through any firewalls between the storage system and the cache subnet. You might need to work with firewall and network administrators for your data center to verify this configuration.
iot-central Concepts Device Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-device-templates.md
A device template in Azure IoT Central is a blueprint that defines the characteristics and behaviors of a type of device that connects to your application. For example, the device template defines the telemetry that a device sends so that IoT Central can create visualizations that use the correct units and data types.
-A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template.
+A solution builder adds device templates to an IoT Central application. A device developer writes the device code that implements the behaviors defined in the device template. To learn more about the data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
A device template includes the following sections:
iot-central Concepts Telemetry Properties Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-telemetry-properties-commands.md
# Telemetry, property, and command payloads
-A device template in Azure IoT Central is a blueprint that defines the:
+A [device template](concepts-device-templates.md) in Azure IoT Central is a blueprint that defines the:
* Telemetry a device sends to IoT Central. * Properties a device synchronizes with IoT Central.
iot-central Howto Use Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-use-properties.md
Properties represent point-in-time values. For example, a device can use a prope
You can also define cloud properties in an Azure IoT Central application. Cloud property values are never exchanged with a device and are out of scope for this article.
-To learn how to manage properties by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices.](../core/howto-control-devices-with-rest-api.md)
+To learn how to manage properties by using the IoT Central REST API, see [How to use the IoT Central REST API to control devices](../core/howto-control-devices-with-rest-api.md).
+
+To learn more about the property data that a device exchanges with IoT Central, see [Telemetry, property, and command payloads](concepts-telemetry-properties-commands.md).
## Define your properties
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
In this section, you create two device certificates and their full chain certifi
1. To create the private key, X.509 certificate, and full chain certificate for the second device, copy and paste this script into your GitBash command prompt. To create certificates for more devices, you can modify the `registration_id` variable declared at the beginning of the script.
+ # [Windows](#tab/windows)
+ ```bash registration_id=device-02 echo $registration_id
In this section, you create two device certificates and their full chain certifi
cat ./certs/${registration_id}.cert.pem ./certs/azure-iot-test-only.intermediate.cert.pem ./certs/azure-iot-test-only.root.ca.cert.pem > ./certs/${registration_id}-full-chain.cert.pem ```
+ # [Linux](#tab/linux)
+
+ ```bash
+ registration_id=device-02
+ echo $registration_id
+ openssl genrsa -out ./private/${registration_id}.key.pem 4096
+ openssl req -config ./openssl_device_intermediate_ca.cnf -key ./private/${registration_id}.key.pem -subj "/CN=$registration_id" -new -sha256 -out ./csr/${registration_id}.csr.pem
+ openssl ca -batch -config ./openssl_device_intermediate_ca.cnf -passin pass:1234 -extensions usr_cert -days 30 -notext -md sha256 -in ./csr/${registration_id}.csr.pem -out ./certs/${registration_id}.cert.pem
+ cat ./certs/${registration_id}.cert.pem ./certs/azure-iot-test-only.intermediate.cert.pem ./certs/azure-iot-test-only.root.ca.cert.pem > ./certs/${registration_id}-full-chain.cert.pem
+ ```
+
+
+ >[!NOTE] > This script uses the registration ID as the base filename for the private key and certificate files. If your registration ID contains characters that aren't valid filename characters, you'll need to modify the script accordingly.
You'll use the following files in the rest of this tutorial:
| device-01 private key | *private/device-01.key.pem* | Used by the device to verify ownership of the device certificate during authentication with DPS. | | device-01 full chain certificate | *certs/device-01-full-chain.cert.pem* | Presented by the device to authenticate and register with DPS. | | device-02 private key | *private/device-02.key.pem* | Used by the device to verify ownership of the device certificate during authentication with DPS. |
-| device-02 full chain certificate | *certs/device-01-full-chain.cert.pem* | Presented by the device to authenticate and register with DPS. |
+| device-02 full chain certificate | *certs/device-02-full-chain.cert.pem* | Presented by the device to authenticate and register with DPS. |
## Verify ownership of the root certificate
iot-hub-device-update Device Update Region Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-region-mapping.md
+
+ Title: Region mapping for BCDR for Device Update for Azure IoT Hub | Microsoft Docs
+description: Regional mapping for BCDR for Device Update for IoT Hub.
++ Last updated : 08/31/2022++++
+# Regional mapping for BCDR for Device Update for IoT Hub
+
+In cases where an Azure region is unavailable due to an outage, data contained in the update files submitted to the Device Update for IoT Hub service may be sent to a specific Azure region for the duration of the outage, for the purpose of anti-malware scanning and making the updates available on the Device Update service endpoints.
+
+## Failover region mapping
+
+| Region name | Fails over to |
+| | |
+| North Europe | West Europe |
+| West Europe | North Europe |
+| UK South | North Europe |
+| Sweden Central | North Europe |
+| East US | West US 2 |
+| East US 2 | West US 2 |
+| West US 2 | East US |
+| West US 3 | East US |
+| South Central US | East US |
+| East US 2 (EUAP) | West US 2 |
+| Australia East | Southeast Asia |
+| Southeast Asia | Australia East |
++
+## Next steps
+
+[Learn about importing updates](./import-update.md)
++
iot-hub Iot Hub Rm Template Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-rm-template-powershell.md
Learn how to use an Azure Resource Manager template to create an IoT Hub and a consumer group. Resource Manager templates are JSON files that define the resources you need to deploy for your solution. For more information about developing Resource Manager templates, see [Azure Resource Manager documentation](../azure-resource-manager/index.yml).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
- ## Create an IoT hub
-The Resource Manager template used in this quickstart is from [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/iothub-with-consumergroup-create/). Here is a copy of the template:
+The following [Resource Manager JSON template](https://azure.microsoft.com/resources/templates/iothub-with-consumergroup-create/), used in this article, is one of many templates from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/). This template creates an Azure IoT hub with three endpoints (eventhub, cloud-to-device, and messaging) and a consumer group. For more information on the IoT Hub template schema, see [Microsoft.Devices (IoT Hub) resource types](/azure/templates/microsoft.devices/iothub-allversions).
[!code-json[iothub-creation](~/quickstart-templates/quickstarts/microsoft.devices/iothub-with-consumergroup-create/azuredeploy.json)]
-The template creates an Azure Iot hub with three endpoints (eventhub, cloud-to-device, and messaging), and a consumer group. For more template samples, see [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/?resourceType=Microsoft.Devices&pageNumber=1&sort=Popular). The Iot Hub template schema can be found [here](/azure/templates/microsoft.devices/iothub-allversions).
-
-There are several methods for deploying a template. You use Azure PowerShell in this tutorial.
+There are several methods for deploying a template. You use Azure PowerShell in this article.
-To run the PowerShell script, Select **Try it** to open the Azure Cloud shell. To paste the script, right-click the shell, and then select Paste:
+To run the following PowerShell script, select **Try it** to open the Azure Cloud Shell. Copy the script, paste it into the shell, and answer the prompts to create a new resource, choose a region, and create a new IoT hub.
```azurepowershell-interactive $resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
New-AzResourceGroupDeployment `
-iotHubName $iotHubName ```
-As you can see from the PowerShell script, the template used is from Azure Quickstart templates. To use your own, you need to first upload the template file to the Cloud shell, and then use the `-TemplateFile` switch to specify the file name. For an example, see [Deploy the template](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md?tabs=PowerShell#deploy-the-template).
+> [!NOTE]
+> To use your own template, upload your template file to the Cloud Shell, and then use the `-TemplateFile` switch to specify the file name. For example, see [Deploy the template](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md?tabs=PowerShell#deploy-the-template).
## Next steps
-Now you have deployed an IoT hub by using an Azure Resource Manager template, you may want to explore further:
+Now that you've deployed an IoT hub by using an Azure Resource Manager template, you may want to explore:
-* Read about the capabilities of the [IoT Hub resource provider REST API][lnk-rest-api].
-* Read [Azure Resource Manager overview][lnk-azure-rm-overview] to learn more about the capabilities of Azure Resource Manager.
-* For the JSON syntax and properties to use in templates, see [Microsoft.Devices resource types](/azure/templates/microsoft.devices/iothub-allversions).
+* Capabilities of the [IoT Hub resource provider REST API][lnk-rest-api]
+* Capabilities of the [Azure Resource Manager][lnk-azure-rm-overview]
+* JSON syntax and properties to use in templates: [Microsoft.Devices resource types](/azure/templates/microsoft.devices/iothub-allversions)
-To learn more about developing for IoT Hub, see the following articles:
+To learn more about developing for IoT Hub, see:
* [Introduction to C SDK][lnk-c-sdk] * [Azure IoT SDKs][lnk-sdks]
-To further explore the capabilities of IoT Hub, see:
+To explore more capabilities of IoT Hub, see:
* [Deploying AI to edge devices with Azure IoT Edge][lnk-iotedge]
logic-apps Logic Apps Enterprise Integration Create Integration Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-create-integration-account.md
For this task, you can use the Azure portal, [Azure CLI](/cli/azure/resource#az-
|-|-|-|-| | **Subscription** | Yes | <*Azure-subscription-name*> | The name for your Azure subscription | | **Resource group** | Yes | <*Azure-resource-group-name*> | The name for the [Azure resource group](../azure-resource-manager/management/overview.md) to use for organizing related resources. For this example, create a new resource group named **FabrikamIntegration-RG**. |
- | **Integration account name** | Yes | <*integration-account-name*> | Your integration account's name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). This example uses **Fabrikam-Integration**. |
+ | **Integration account name** | Yes | <*integration-account-name*> | Your integration account's name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`()`), and periods (`.`). This example uses **Fabrikam-Integration**. |
| **Region** | Yes | <*Azure-region*> | The Azure region where to store your integration account metadata. Either select the same location as your logic app resource, or create your logic apps in the same location as your integration account. For this example, use **West US**. <br><br>**Note**: To create an integration account inside an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md), select **Associate with integration service environment** and select your ISE as the location. For more information, see [Create integration accounts in an ISE](add-artifacts-integration-service-environment-ise.md#create-integration-account-environment). | | **Pricing Tier** | Yes | <*pricing-level*> | The pricing tier for the integration account, which you can change later. For this example, select **Free**. For more information, review the following documentation: <br><br>- [Logic Apps pricing model](logic-apps-pricing.md#integration-accounts) <br>- [Logic Apps limits and configuration](logic-apps-limits-and-config.md#integration-account-limits) <br>- [Logic Apps pricing](https://azure.microsoft.com/pricing/details/logic-apps/) | | **Enable log analytics** | No | Unselected | For this example, don't select this option. |
Before you can link your integration account to a Standard logic app resource, y
1. Find the **Generated Callback URL** property value, copy the value, and save the URL to use later for linking.
-#### Link your integration account to your Standard logic app resource
+#### Link integration account to Standard logic app
+
+##### Azure portal
1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
Before you can link your integration account to a Standard logic app resource, y
1. When you're done, select **OK**. When you return to the **Configuration** pane, make sure to save your changes. On the **Configuration** pane toolbar, select **Save**.
+##### Visual Studio Code
+
+1. From your Standard logic app project in Visual Studio Code, open the **local.settings.json** file.
+
+1. In the `Values` object, add an app setting that has the following properties and values, including the previously saved callback URL:
+
+ | Property | Value |
+ |-|-|
+ | **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
+ | **Value** | <*integration-account-callback-URL*> |
+ |||
+
+   The following example shows how this app setting might appear:
+
+ ```json
+ {
+ "IsEncrypted": false,
+ "Values": {
+        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ "FUNCTIONS_WORKER_RUNTIME": "node",
+ "WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL": "https://prod-03.westus.logic.azure.com:443/integrationAccounts/...."
+ }
+ }
+ ```
+
+1. When you're done, save your changes.
+ <a name="change-pricing-tier"></a>
If you want to link your logic app to another integration account, or no longer
### [Standard](#tab/standard)
+#### Azure portal
+ 1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource. 1. On your logic app's navigation menu, under **Settings**, select **Configuration**.
If you want to link your logic app to another integration account, or no longer
1. On the **Configuration** pane toolbar, select **Save**.
+#### Visual Studio Code
+
+1. From your Standard logic app project in Visual Studio Code, open the **local.settings.json** file.
+
+1. In the `Values` object, find and delete the app setting that has the following properties and values:
+
+ | Property | Value |
+ |-|-|
+ | **Name** | **WORKFLOW_INTEGRATION_ACCOUNT_CALLBACK_URL** |
+ | **Value** | <*integration-account-callback-URL*> |
+ |||
+
+1. When you're done, save your changes.
+ ## Move integration account
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
When the creation process finishes, you train your model by using the cluster in
When you enable **No public IP**, your compute cluster doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute cluster nodes from the internet thus eliminating a significant threat vector. **No public IP** clusters help comply with no public IP policies many enterprises have.
-> [!WARNING]
-> By default, you do not have public internet access from No Public IP Compute Cluster. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) with a public IP.
- A compute cluster with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork** and any port source, to destination of **VirtualNetwork**, and destination port of **29876, 29877** and inbound from source **AzureLoadBalancer** and any port source to destination **VirtualNetwork** and port **44224** destination.
+> [!WARNING]
+> By default, you don't have public internet access from a No Public IP compute cluster. This prevents *outbound* access to required resources such as Azure Active Directory, Azure Resource Manager, Microsoft Container Registry, and the other outbound resources listed in the [Required public internet access](#required-public-internet-access) section, as well as to non-Microsoft resources such as PyPI or Conda repositories. To resolve this problem, configure User Defined Routing (UDR) to reach a public IP to access the internet. For example, you can use the public IP of your firewall, or you can use [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) with a public IP.
+ **No public IP** clusters are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace. A compute cluster with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and aren't Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
For steps on how to create a compute instance deployed in a virtual network, see
When you enable **No public IP**, your compute instance doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute instance node from the internet thus eliminating a significant threat vector. Compute instances will also do packet filtering to reject any traffic from outside virtual network. **No public IP** instances are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace. > [!WARNING]
-> By default, you do not have public internet access from No Public IP Compute Instance. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) with a public IP.
+> By default, you don't have public internet access from a No Public IP compute instance. You need to configure User Defined Routing (UDR) to reach a public IP to access the internet. For example, you can use the public IP of your firewall, or you can use [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) with a public IP. Specifically, you need access to Azure Active Directory, Azure Resource Manager, Microsoft Container Registry, and the other outbound resources listed in the [Required public internet access](#required-public-internet-access) section. You may also need outbound access to non-Microsoft resources such as PyPI or Conda repositories.
For **outbound connections** to work, you need to set up an egress firewall such as Azure firewall with user defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute instance is deployed. The route table entry can set up the next hop of the private IP address of the firewall with the address prefix of 0.0.0.0/0.
machine-learning How To Troubleshoot Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-data-access.md
+
+ Title: Troubleshoot data access
+
+description: Learn how to troubleshoot and resolve data access issues.
++++++ Last updated : 08/03/2022++++
+# Troubleshoot data access errors
++
+In this guide, learn how to identify and resolve known data access issues when you're using the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro).
+
+## Error codes
+
+Data access error codes are hierarchical. Segments are delimited by the full stop character (`.`), and the more segments an error code has, the more specific it is.
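+
+For example, splitting one of the codes documented later in this article on the delimiter shows how each segment narrows the scope:
+
+```python
+# Each segment of a data access error code narrows the scope, left to right.
+error_code = "ScriptExecution.StreamAccess.Authentication.AzureIdentityAccessTokenResolution.FirewallSettingsResolutionFailure"
+print(error_code.split("."))
+# ['ScriptExecution', 'StreamAccess', 'Authentication',
+#  'AzureIdentityAccessTokenResolution', 'FirewallSettingsResolutionFailure']
+```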
+
+## ScriptExecution.DatabaseConnection
+
+### ScriptExecution.DatabaseConnection.NotFound
+
+The database or server defined in the datastore couldn't be found or no longer exists. Check whether the database still exists in the Azure portal, or follow the link from the Azure Machine Learning studio datastore details page. If it no longer exists, recreating it with the same name lets the existing datastore be used. To use a new server name or database, you must delete and recreate the datastore with the new name.
+
+### ScriptExecution.DatabaseConnection.Authentication
+
+The authentication failed while trying to connect to the database. The authentication method is stored inside the datastore and supports SQL authentication, service principal, or no stored credential (identity based access). When workspace MSI is enabled, the workspace MSI is used for authentication when previewing data in Azure Machine Learning studio. A SQL server user needs to be created for the service principal and workspace MSI (if applicable) and granted classic database permissions. For more information, see [Create the service principal user in Azure SQL Database](/azure/azure-sql/database/authentication-aad-service-principal-tutorial#create-the-service-principal-user-in-azure-sql-database).
+
+Contact your data admin to verify or add the correct permissions to the service principal or user identity.
+
+Errors also include:
+
+- ScriptExecution.DatabaseConnection.Authentication.AzureIdentityAccessTokenResolution.InvalidResource
+  - The server under the subscription and resource group couldn't be found. Check that the subscription ID and resource group defined in the datastore match those of the server and update the values if needed.
+ > [!NOTE]
+    > Use the subscription ID and resource group of the server and not of the workspace. If the datastore references a cross-subscription or cross-resource-group server, these values will be different.
+- ScriptExecution.DatabaseConnection.Authentication.AzureIdentityAccessTokenResolution.FirewallSettingsResolutionFailure
+  - The identity doesn't have permission to read the firewall settings of the target server. Contact your data admin to grant the Reader role to the workspace MSI.
+
+## ScriptExecution.DatabaseQuery
+
+### ScriptExecution.DatabaseQuery.TimeoutExpired
+
+The executed SQL query took too long and timed out. The timeout can be specified at the time of data asset creation. If a new timeout is needed, create a new asset or a new version of the current asset. The SQL preview in Azure Machine Learning studio has a fixed query timeout, but the defined value is always honored for jobs.
+
+## ScriptExecution.StreamAccess
+
+### ScriptExecution.StreamAccess.Authentication
+
+The authentication failed while trying to connect to the storage account. The authentication method is stored inside the datastore and, depending on the datastore type, can support account key, SAS token, service principal, or no stored credential (identity based access). When workspace MSI is enabled, the workspace MSI is used for authentication when previewing data in Azure Machine Learning studio.
+
+Contact your data admin to verify or add the correct permissions to the service principal or user identity.
+
+> [!IMPORTANT]
+> If identity based access is used, the required RBAC role is Storage Blob Data Reader. If workspace MSI is used for Azure Machine Learning studio preview, the required RBAC roles are Storage Blob Data Reader and Reader.
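+
+As a quick sanity check outside of Azure Machine Learning, you can verify that an identity can read the storage account by listing blobs with the `azure-identity` and `azure-storage-blob` packages. This is only a sketch; the account and container names are placeholders:
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.storage.blob import BlobServiceClient
+
+# Placeholders: replace with your storage account and container names.
+account_url = "https://<storage-account>.blob.core.windows.net"
+container_name = "<container>"
+
+# DefaultAzureCredential picks up your Azure CLI, managed identity, or environment credentials.
+credential = DefaultAzureCredential()
+service = BlobServiceClient(account_url=account_url, credential=credential)
+container = service.get_container_client(container_name)
+
+# Listing blobs succeeds only if the identity has read access (for example, Storage Blob Data Reader).
+print([blob.name for blob in container.list_blobs()][:5])
+```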
+
+Errors also include:
+
+- ScriptExecution.StreamAccess.Authentication.AzureIdentityAccessTokenResolution.FirewallSettingsResolutionFailure
+  - The identity doesn't have permission to read the firewall settings of the target storage account. Contact your data admin to grant the Reader role to the workspace MSI.
+- ScriptExecution.StreamAccess.Authentication.AzureIdentityAccessTokenResolution.PrivateEndpointResolutionFailure
+ - The target storage account is using a virtual network but the logged in session isn't connecting to the workspace via a private endpoint. Add a private endpoint to the workspace and ensure that the virtual network or subnet of the private endpoint is allowed by the storage virtual network settings. Add the logged in session's public IP to the storage firewall allowlist.
+- ScriptExecution.StreamAccess.Authentication.AzureIdentityAccessTokenResolution.NetworkIsolationViolated
+  - The target storage account's firewall settings don't permit this data access. Check that your logged-in session's network settings are compatible with the storage account's firewall settings. If workspace MSI is used, check that it has Reader access to the storage account and to the private endpoints associated with the storage account.
+- ScriptExecution.StreamAccess.Authentication.AzureIdentityAccessTokenResolution.InvalidResource
+  - The storage account under the subscription and resource group couldn't be found. Check that the subscription ID and resource group defined in the datastore match those of the storage account and update the values if needed.
+ > [!NOTE]
+    > Use the subscription ID and resource group of the storage account and not of the workspace. If the datastore references a cross-subscription or cross-resource-group storage account, these values will be different.
+
+### ScriptExecution.StreamAccess.NotFound
+
+The specified file or folder path doesn't exist. Check that the provided path exists in the Azure portal, or if you're using a datastore, that the right datastore is used (including the datastore's account and container). If the storage account is HNS-enabled Blob storage, also known as ADLS Gen2, or the path is an `abfs[s]` URI, storage ACLs may restrict particular folders or paths. In that case, this error appears as a "NotFound" error instead of an "Authentication" error.
+
+### ScriptExecution.StreamAccess.Validation
+
+There were validation errors in the request for data access.
+
+Errors also include:
+
+- ScriptExecution.StreamAccess.Validation.TextFile-InvalidEncoding
+ - The defined encoding for delimited file parsing isn't applicable for the underlying data. Update the encoding of the MLTable to match the encoding of the file(s).
+- ScriptExecution.StreamAccess.Validation.StorageRequest-InvalidUri
+ - The requested URI isn't well formatted. We support `abfs[s]`, `wasb[s]`, `https`, and `azureml` URIs.
+
+## Next steps
+
+- See more information on [data concepts in Azure Machine Learning](concept-data.md)
+
+- [Identity-based data access to storage services on Azure](how-to-identity-based-data-access.md).
+
+- [Read and write data in a job](how-to-read-write-data-v2.md)
machine-learning How To Use Batch Endpoint Sdk V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-endpoint-sdk-v2.md
A deployment is a set of resources required for hosting the model that does the
Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:
-* `name` - Name of the endpoint
+* `endpoint_name` - Name of the endpoint
* `input` - Path where input data is present * `deployment_name` - Name of the specific deployment to test in an endpoint
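+
+A minimal sketch of such an invocation might look like the following; it reuses the `MLClient` (`ml_client`) and `batch_endpoint_name` from earlier in this article, while the deployment name and input path are placeholders:
+
+```python
+from azure.ai.ml import Input
+
+# Invoke the batch endpoint against a folder of input data (placeholder values).
+job = ml_client.batch_endpoints.invoke(
+    endpoint_name=batch_endpoint_name,
+    deployment_name="my-batch-deployment",
+    input=Input(type="uri_folder", path="https://<account>.blob.core.windows.net/<container>/<input-folder>"),
+)
+# The returned job object can be used to monitor the scoring run in studio or through ml_client.jobs.
+```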
Delete endpoint
ml_client.batch_endpoints.begin_delete(name=batch_endpoint_name) ```
+Deleting the compute is optional; you may choose to reuse the compute cluster with later deployments.
+
+```python
+ml_client.compute.begin_delete(name=compute_name)
+```
+ ## Next steps If you encounter problems using batch endpoints, see [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).
managed-grafana How To Create Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-create-api-keys.md
+
+ Title: Create and manage Grafana API keys in Azure Managed Grafana
+description: Learn how to generate and manage Grafana API keys, and start making API calls for Azure Managed Grafana.
++++ Last updated : 08/31/2022++
+# Generate and manage Grafana API keys in Azure Managed Grafana
+
+In this guide, learn how to generate and manage API keys, and start making API calls to the Grafana server. Grafana API keys will enable you to create integrations between Azure Managed Grafana and other services.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md).
+
+## Enable API keys
+
+API keys are disabled by default in Azure Managed Grafana. You can enable this feature during the creation of the instance in the Azure portal, or you can activate it later on an existing instance by using the Azure portal or the Azure CLI.
+
+### Create an Azure Managed Grafana workspace with API key creation enabled
+
+During the creation of the Azure Managed Grafana workspace, enable the creation of API keys in the **Advanced** tab, by setting **Enable API key creation** to **Enabled**. For more information about creating a new instance using the Azure portal, go to [Quickstart: Create an Azure Managed Grafana instance](quickstart-managed-grafana-portal.md).
+
+### Enable API key creation on an existing Azure Managed Grafana instance
+
+#### [Portal](#tab/azure-portal)
+
+ 1. In the Azure portal, under **Settings**, select **Configuration**, and then under **API keys**, select **Enable**.
+
+ :::image type="content" source="media/create-api-keys/enable-api-keys.png" alt-text="Screenshot of the Azure platform. Enable API keys.":::
+ 1. Select **Save** to confirm that you want to activate the creation of API keys in Azure Managed Grafana.
+
+#### [Azure CLI](#tab/azure-cli)
+
+Run the [az grafana update](/cli/azure/grafana#az-grafana-update) command to enable the creation of API keys in an existing Azure Managed Grafana instance. In the command below, replace `<azure-managed-grafana-name>` with the name of the Azure Managed Grafana instance to update.
+
+```azurecli-interactive
+az grafana update --name <azure-managed-grafana-name> --api-keys Enabled
+```
+++
+## Generate an API key
+
+### [Portal](#tab/azure-portal)
+
+1. Open your Azure Managed Grafana instance and from the left menu, select **Configuration > API keys**.
+ :::image type="content" source="media/create-api-keys/access-page.png" alt-text="Screenshot of the Grafana dashboard. Access API keys page.":::
+1. Select **New API key**.
+1. Fill out the form, and select **Add** to generate the new API key.
+
+ | Parameter | Description | Example |
+ |--||-|
+ | **Key name** | Enter a name for your new Grafana API key. | *api-key-1* |
+ | **Managed Grafana role** | Choose a Managed Grafana role: Viewer, Editor or Admin. | *Editor* |
+ | **Time to live** | Enter a time before your API key expires. Use *s* for seconds, *m* for minutes, *h* for hours, *d* for days, *w* for weeks, *M* for months, *y* for years. | 7d |
+
+ :::image type="content" source="media/create-api-keys/form.png" alt-text="Screenshot of the Grafana dashboard. API creation form filled out.":::
+
+1. Once the key has been generated, a message pops up with the new key and a curl command that includes your key. Copy this information and save it in your records now, because it will be hidden once you leave this page. If you close this page without saving the new API key, you'll need to generate a new one.
+
+ :::image type="content" source="media/create-api-keys/api-key-created.png" alt-text="Screenshot of the Grafana dashboard. API key is displayed.":::
+
+You can now use this Grafana API key to call the Grafana server.
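+
+For example, the following sketch uses Python's `requests` package to call the Grafana HTTP API with the new key and list the dashboards it can see; the endpoint URL is a placeholder for your instance's URL:
+
+```python
+import requests
+
+# Placeholders: replace with your Azure Managed Grafana endpoint and the API key you generated.
+grafana_endpoint = "https://<your-instance>.grafana.azure.com"
+api_key = "<api-key>"
+
+# The Grafana HTTP API /api/search endpoint returns dashboards and folders visible to this key.
+response = requests.get(
+    f"{grafana_endpoint}/api/search",
+    headers={"Authorization": f"Bearer {api_key}"},
+)
+response.raise_for_status()
+for item in response.json():
+    print(item.get("title"))
+```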
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Run the [az grafana api-key create](/cli/azure/grafana/api-key#az-grafana-api-key-create) command to create an API key for Azure Managed Grafana. Replace `<azure-managed-grafana-name>` and `<key>` with the name of the Azure Managed Grafana instance to update and a name for the new API key.
+
+ Optionally also add more parameters, such as `--role` and `--time-to-live`.
+
+ | Parameter | Description | Example |
+ ||--|-|
+ | `--role` | Select an Azure Managed Grafana role by entering `Admin`, `Editor` or `Viewer`. The default value is `Viewer`. | *Editor* |
+ | `--time-to-live` | Enter a time before your API key expires. Use `s` for seconds, `m` for minutes, `h` for hours, `d` for days, `w` for weeks, `M` for months or `y` for years. The default value is `1d`. | 7d |
+
+ ```azurecli-interactive
+ az grafana api-key create --name <azure-managed-grafana-name> --key <key>
+ ```
+
+1. The terminal returns a key ID, a key and a key name. Copy this information and save it in your records now, as you'll only be able to view this key once.
+++
+## Test the API key
+
+Run the [az grafana dashboard list](/cli/azure/grafana/dashboard#az-grafana-dashboard-list) command below to check if your API key is working. Replace the placeholders `<azure-managed-grafana-name>` and `<api-key>` with the name of your Azure Managed Grafana instance and your API key.
+
+```azurecli-interactive
+az grafana dashboard list --name <azure-managed-grafana-name> --api-key <api-key>
+```
+
+The terminal's output lists all the dashboards your API key can access in the specified Azure Managed Grafana instance.
+
+## Manage API keys
+
+### [Portal](#tab/azure-portal)
+
+Existing API keys are listed in **Configuration > API keys**. By default, only active API keys are displayed. Select **Include expired keys** to view all created keys, and select **X** (Delete) to delete the API key.
++
+### [Azure CLI](#tab/azure-cli)
+
+#### List API keys
+
+Run the [az grafana api-key list](/cli/azure/grafana/api-key#az-grafana-api-key-list) command to list the API keys in an existing Azure Managed Grafana instance. In the command below, replace `<azure-managed-grafana-name>` with the name of the Azure Managed Grafana instance.
+
+```azurecli-interactive
+az grafana api-key list --name <azure-managed-grafana-name> --output table
+```
+
+Example of output:
+
+```Output
+Name   Role    Expiration
+-----  ------  --------------------
+key01  Viewer
+key02  Viewer  2022-08-31T17:14:44Z
+```
+
+#### Delete API keys
+
+Run the [az grafana api-key delete](/cli/azure/grafana/api-key#az-grafana-api-key-delete) command to delete API keys. In the command below, replace `<azure-managed-grafana-name>` and `<key>` with the name of the Azure Managed Grafana instance and the ID or the name of the API key to delete.
+
+```azurecli-interactive
+az grafana api-key delete --name <azure-managed-grafana-name> --key <key>
+```
+++
+## Next steps
+
+In this how-to guide, you learned how to create an API key for Azure Managed Grafana. To learn how to call Grafana APIs, see:
+
+> [!div class="nextstepaction"]
+> [Call Grafana APIs](how-to-api-calls.md)
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/gtm-your-marketplace-benefits.md
description: Go-To-Market Services - Describes Microsoft resources that publishe
Previously updated : 06/29/2022 Last updated : 08/31/2022
marketplace Gtm Your Marketplace Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nodejs-use-node-modules-azure-apps.md
editor: '' ms.assetid: c0e6cd3d-932d-433e-b72d-e513e23b4eb6- na ms.devlang: javascript
openshift Howto Gpu Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-gpu-workloads.md
+
+ Title: Use GPU workloads with Azure Red Hat OpenShift (ARO)
+description: Discover how to utilize GPU workloads with Azure Red Hat OpenShift (ARO)
+++
+keywords: aro, gpu, openshift, red hat
+ Last updated : 08/30/2022+++
+# Use GPU workloads with Azure Red Hat OpenShift
+
+This article shows you how to use Nvidia GPU workloads with Azure Red Hat OpenShift (ARO).
+
+## Prerequisites
+
+* OpenShift CLI
+* The jq, moreutils, and gettext packages
+* Azure Red Hat OpenShift 4.10
+
+If you need to install an ARO cluster, see [Tutorial: Create an Azure Red Hat OpenShift 4 cluster](tutorial-create-cluster.md). ARO clusters must be version 4.10.x or higher.
+
+> [!NOTE]
+> As of ARO 4.10, it is no longer necessary to set up entitlements to use the Nvidia Operator. This has greatly simplified the setup of the cluster for GPU workloads.
+
+Linux:
+
+```bash
+sudo dnf install jq moreutils gettext
+```
+
+macOS:
+
+```bash
+brew install jq moreutils gettext
+```
+
+## Request GPU quota
+
+All GPU quotas in Azure are 0 by default. You will need to sign in to the Azure portal and request GPU quota. Due to competition for GPU workers, you may have to provision an ARO cluster in a region where you can actually reserve GPU.
+
+ARO supports the following GPU workers:
+
+* NC4as T4 v3
+* NC8as T4 v3
+* NC16as T4 v3
+* NC64as T4 v3
+
+> [!NOTE]
+> When requesting quota, remember that Azure quota is counted per core. To request a single NC4as T4 v3 node, you will need to request quota in groups of 4. If you wish to request an NC16as T4 v3, you will need to request quota of 16.
+
+1. Sign in to [Azure portal](https://portal.azure.com).
+
+1. Enter **quotas** in the search box, then select **Compute**.
+
+1. In the search box, enter **NCAv3_T4**, check the box for the region your cluster is in, and then select **Request quota increase**.
+
+1. Configure quota.
+
+ :::image type="content" source="media/howto-gpu-workloads/gpu-quota-azure.png" alt-text="Screenshot of quotas page on Azure portal.":::
+
+## Sign in to your ARO cluster
+
+Sign in to OpenShift with a user account that has cluster-admin privileges. The example below uses an account named **kubeadmin**:
+
+ ```bash
+ oc login <apiserver> -u kubeadmin -p <kubeadminpass>
+ ```
+
+## Pull secret (conditional)
+
+Update your pull secret to make sure you can install operators and connect to [cloud.redhat.com](https://cloud.redhat.com/).
+
+> [!NOTE]
+> Skip this step if you have already recreated a full pull secret with cloud.redhat.com enabled.
+
+1. Log in to [cloud.redhat.com](https://cloud.redhat.com/).
+
+1. Browse to https://cloud.redhat.com/openshift/install/azure/aro-provisioned.
+
+1. Select **Download pull secret** and save the pull secret as `pull-secret.txt`.
+
+ > [!IMPORTANT]
+ > The remaining steps in this section must be run in the same working directory as `pull-secret.txt`.
+
+1. Export the existing pull secret.
+
+ ```bash
+ oc get secret pull-secret -n openshift-config -o json | jq -r '.data.".dockerconfigjson"' | base64 --decode > export-pull.json
+ ```
+
+1. Merge the downloaded pull secret with the system pull secret to add `cloud.redhat.com`.
+
+ ```bash
+ jq -s '.[0] * .[1]' export-pull.json pull-secret.txt | tr -d "\n\r" > new-pull-secret.json
+ ```
+
+1. Upload the new secret file.
+
+ ```bash
+ oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=new-pull-secret.json
+ ```
+
+ > You may need to wait about 1 hour for everything to sync up with cloud.redhat.com.
+
+1. Delete secrets.
+
+ ```bash
+ rm pull-secret.txt export-pull.json new-pull-secret.json
+ ```
+
+## GPU machine set
+
+ARO uses Kubernetes MachineSet to create machine sets. The procedure below explains how to export the first machine set in a cluster and use that as a template to build a single GPU machine.
+
+1. View existing machine sets.
+
+ For ease of setup, this example uses the first machine set as the one to clone to create a new GPU machine set.
+
+ ```bash
+ MACHINESET=$(oc get machineset -n openshift-machine-api -o=jsonpath='{.items[0]}' | jq -r '[.metadata.name] | @tsv')
+ ```
+
+1. Save a copy of the example machine set.
+
+ ```bash
+ oc get machineset -n openshift-machine-api $MACHINESET -o json > gpu_machineset.json
+ ```
+
+1. Change the `.metadata.name` field to a new unique name.
+
+   This example uses a unique name for the single-node machine set in the form `nvidia-worker-<region><az>` (for example, `nvidia-worker-southcentralus1`), following a similar pattern to the other machine sets.
+
+ ```bash
+ jq '.metadata.name = "nvidia-worker-<region><az>"' gpu_machineset.json| sponge gpu_machineset.json
+ ```
+
+1. Ensure `spec.replicas` matches the desired replica count for the machine set.
+
+ ```bash
+ jq '.spec.replicas = 1' gpu_machineset.json| sponge gpu_machineset.json
+ ```
+
+1. Change the `.spec.selector.matchLabels.machine.openshift.io/cluster-api-machineset` field to match the `.metadata.name` field.
+
+ ```bash
+ jq '.spec.selector.matchLabels."machine.openshift.io/cluster-api-machineset" = "nvidia-worker-southcentralus1"' gpu_machineset.json| sponge gpu_machineset.json
+ ```
+
+1. Change the `.spec.template.metadata.labels.machine.openshift.io/cluster-api-machineset` to match the `.metadata.name` field.
+
+ ```bash
+ jq '.spec.template.metadata.labels."machine.openshift.io/cluster-api-machineset" = "nvidia-worker-southcentralus1"' gpu_machineset.json| sponge gpu_machineset.json
+ ```
+
+1. Change the `spec.template.spec.providerSpec.value.vmSize` to match the desired GPU instance type from Azure.
+
+ The machine used in this example is **Standard_NC4as_T4_v3**.
+
+ ```bash
+ jq '.spec.template.spec.providerSpec.value.vmSize = "Standard_NC4as_T4_v3"' gpu_machineset.json | sponge gpu_machineset.json
+ ```
+
+1. Change the `spec.template.spec.providerSpec.value.zone` to match the desired zone from Azure.
+
+ ```bash
+ jq '.spec.template.spec.providerSpec.value.zone = "1"' gpu_machineset.json | sponge gpu_machineset.json
+ ```
+
+1. Delete the `.status` section of the yaml file.
+
+ ```bash
+ jq 'del(.status)' gpu_machineset.json | sponge gpu_machineset.json
+ ```
+
+1. Verify the other data in the yaml file.
+
+#### Create GPU machine set
+
+Use the following steps to create the new GPU machine. It may take 10-15 minutes to provision a new GPU machine. If this step fails, sign in to [Azure portal](https://portal.azure.com) and ensure there are no availability issues. To do so, go to **Virtual Machines** and search for the worker name you created previously to see the status of VMs.
+
+1. Create the GPU Machine set.
+
+ ```bash
+ oc create -f gpu_machineset.json
+ ```
+
+ This command will take a few minutes to complete.
+
+1. Verify GPU machine set.
+
+ Machines should be deploying. You can view the status of the machine set with the following commands:
+
+ ```bash
+ oc get machineset -n openshift-machine-api
+ oc get machine -n openshift-machine-api
+ ```
+
+ Once the machines are provisioned (which could take 5-15 minutes), machines will show as nodes in the node list:
+
+ ```bash
+ oc get nodes
+ ```
+
+ You should see a node with the `nvidia-worker-southcentralus1` name that was created previously.
+
+## Install Nvidia GPU Operator
+
+This section explains how to create the `nvidia-gpu-operator` namespace, set up the operator group, and install the Nvidia GPU operator.
+
+1. Create Nvidia namespace.
+
+ ```yaml
+ cat <<EOF | oc apply -f -
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+ name: nvidia-gpu-operator
+ EOF
+ ```
+
+1. Create Operator Group.
+
+ ```yaml
+ cat <<EOF | oc apply -f -
+ apiVersion: operators.coreos.com/v1
+ kind: OperatorGroup
+ metadata:
+ name: nvidia-gpu-operator-group
+ namespace: nvidia-gpu-operator
+ spec:
+ targetNamespaces:
+ - nvidia-gpu-operator
+ EOF
+ ```
+
+1. Get the latest Nvidia channel using the following command:
+
+ ```bash
+ CHANNEL=$(oc get packagemanifest gpu-operator-certified -n openshift-marketplace -o jsonpath='{.status.defaultChannel}')
+ ```
+
+1. Get latest Nvidia package using the following command:
+
+ ```bash
+ PACKAGE=$(oc get packagemanifests/gpu-operator-certified -n openshift-marketplace -ojson | jq -r '.status.channels[] | select(.name == "'$CHANNEL'") | .currentCSV')
+ ```
+
+1. Create Subscription.
+
+ ```yaml
+ envsubst <<EOF | oc apply -f -
+ apiVersion: operators.coreos.com/v1alpha1
+ kind: Subscription
+ metadata:
+ name: gpu-operator-certified
+ namespace: nvidia-gpu-operator
+ spec:
+ channel: "$CHANNEL"
+ installPlanApproval: Automatic
+ name: gpu-operator-certified
+ source: certified-operators
+ sourceNamespace: openshift-marketplace
+ startingCSV: "$PACKAGE"
+ EOF
+ ```
+
+1. Wait for Operator to finish installing.
+
+ Don't proceed until you have verified that the operator has finished installing. Also, ensure that your GPU worker is online.
+
+ :::image type="content" source="media/howto-gpu-workloads/nvidia-installed.png" alt-text="Screenshot of installed operators on namespace.":::
+
+#### Install node feature discovery operator
+
+The node feature discovery operator will discover the GPU on your nodes and appropriately label the nodes so you can target them for workloads.
+
+This example installs the NFD Operator into the `openshift-nfd` namespace and creates the `Subscription`, which is the configuration for NFD.
+
+For the official documentation on installing the operator, see [Node Feature Discovery Operator](https://docs.openshift.com/container-platform/4.10/hardware_enablement/psap-node-feature-discovery-operator.html).
+
+1. Set up `Namespace`.
+
+ ```yaml
+ cat <<EOF | oc apply -f -
+ apiVersion: v1
+ kind: Namespace
+ metadata:
+ name: openshift-nfd
+ EOF
+ ```
+
+1. Create `OperatorGroup`.
+
+ ```yaml
+ cat <<EOF | oc apply -f -
+ apiVersion: operators.coreos.com/v1
+ kind: OperatorGroup
+ metadata:
+ generateName: openshift-nfd-
+ name: openshift-nfd
+ namespace: openshift-nfd
+ EOF
+ ```
+
+1. Create `Subscription`.
+
+ ```yaml
+ cat <<EOF | oc apply -f -
+ apiVersion: operators.coreos.com/v1alpha1
+ kind: Subscription
+ metadata:
+ name: nfd
+ namespace: openshift-nfd
+ spec:
+ channel: "stable"
+ installPlanApproval: Automatic
+ name: nfd
+ source: redhat-operators
+ sourceNamespace: openshift-marketplace
+ EOF
+ ```
+1. Wait for Node Feature discovery to complete installation.
+
+ You can log in to your OpenShift console to view operators or simply wait a few minutes. Failure to wait for the operator to install will result in an error in the next step.
+
+1. Create NFD Instance.
+
+ ```yaml
+ cat <<EOF | oc apply -f -
+ kind: NodeFeatureDiscovery
+ apiVersion: nfd.openshift.io/v1
+ metadata:
+ name: nfd-instance
+ namespace: openshift-nfd
+ spec:
+ customConfig:
+ configData: |
+ # - name: "more.kernel.features"
+ # matchOn:
+ # - loadedKMod: ["example_kmod3"]
+ # - name: "more.features.by.nodename"
+ # value: customValue
+ # matchOn:
+ # - nodename: ["special-.*-node-.*"]
+ operand:
+ image: >-
+ registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:07658ef3df4b264b02396e67af813a52ba416b47ab6e1d2d08025a350ccd2b7b
+ servicePort: 12000
+ workerConfig:
+ configData: |
+ core:
+ # labelWhiteList:
+ # noPublish: false
+ sleepInterval: 60s
+ # sources: [all]
+ # klog:
+ # addDirHeader: false
+ # alsologtostderr: false
+ # logBacktraceAt:
+ # logtostderr: true
+ # skipHeaders: false
+ # stderrthreshold: 2
+ # v: 0
+ # vmodule:
+ ## NOTE: the following options are not dynamically run-time
+ ## configurable and require a nfd-worker restart to take effect
+ ## after being changed
+ # logDir:
+ # logFile:
+ # logFileMaxSize: 1800
+ # skipLogHeaders: false
+ sources:
+ # cpu:
+ # cpuid:
+ ## NOTE: attributeWhitelist has priority over attributeBlacklist
+ # attributeBlacklist:
+ # - "BMI1"
+ # - "BMI2"
+ # - "CLMUL"
+ # - "CMOV"
+ # - "CX16"
+ # - "ERMS"
+ # - "F16C"
+ # - "HTT"
+ # - "LZCNT"
+ # - "MMX"
+ # - "MMXEXT"
+ # - "NX"
+ # - "POPCNT"
+ # - "RDRAND"
+ # - "RDSEED"
+ # - "RDTSCP"
+ # - "SGX"
+ # - "SSE"
+ # - "SSE2"
+ # - "SSE3"
+ # - "SSE4.1"
+ # - "SSE4.2"
+ # - "SSSE3"
+ # attributeWhitelist:
+ # kernel:
+ # kconfigFile: "/path/to/kconfig"
+ # configOpts:
+ # - "NO_HZ"
+ # - "X86"
+ # - "DMI"
+ pci:
+ deviceClassWhitelist:
+ - "0200"
+ - "03"
+ - "12"
+ deviceLabelFields:
+ # - "class"
+ - "vendor"
+ # - "device"
+ # - "subsystem_vendor"
+ # - "subsystem_device"
+ # usb:
+ # deviceClassWhitelist:
+ # - "0e"
+ # - "ef"
+ # - "fe"
+ # - "ff"
+ # deviceLabelFields:
+ # - "class"
+ # - "vendor"
+ # - "device"
+ # custom:
+ # - name: "my.kernel.feature"
+ # matchOn:
+ # - loadedKMod: ["example_kmod1", "example_kmod2"]
+ # - name: "my.pci.feature"
+ # matchOn:
+ # - pciId:
+ # class: ["0200"]
+ # vendor: ["15b3"]
+ # device: ["1014", "1017"]
+ # - pciId :
+ # vendor: ["8086"]
+ # device: ["1000", "1100"]
+ # - name: "my.usb.feature"
+ # matchOn:
+ # - usbId:
+ # class: ["ff"]
+ # vendor: ["03e7"]
+ # device: ["2485"]
+ # - usbId:
+ # class: ["fe"]
+ # vendor: ["1a6e"]
+ # device: ["089a"]
+ # - name: "my.combined.feature"
+ # matchOn:
+ # - pciId:
+ # vendor: ["15b3"]
+ # device: ["1014", "1017"]
+ # loadedKMod : ["vendor_kmod1", "vendor_kmod2"]
+ EOF
+ ```
+
+1. Verify that NFD is ready.
+
+ The status of this operator should show as **Available**.
+
+ :::image type="content" source="media/howto-gpu-workloads/nfd-ready-for-use.png" alt-text="Screenshot of node feature discovery operator.":::
+
+#### Apply Nvidia Cluster Config
+
+This section explains how to apply the Nvidia cluster config. Please read the [Nvidia documentation](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/openshift/install-gpu-ocp.html) on customizing this if you have your own private repos or specific settings. This process may take several minutes to complete.
+
+1. Apply cluster config.
+
+ ```yaml
+ cat <<EOF | oc apply -f -
+ apiVersion: nvidia.com/v1
+ kind: ClusterPolicy
+ metadata:
+ name: gpu-cluster-policy
+ spec:
+    migManager:
+      enabled: true
+ operator:
+ defaultRuntime: crio
+ initContainer: {}
+ runtimeClass: nvidia
+ deployGFD: true
+ dcgm:
+ enabled: true
+ gfd: {}
+ dcgmExporter:
+ config:
+ name: ''
+ driver:
+ licensingConfig:
+ nlsEnabled: false
+ configMapName: ''
+ certConfig:
+ name: ''
+ kernelModuleConfig:
+ name: ''
+ repoConfig:
+ configMapName: ''
+ virtualTopology:
+ config: ''
+ enabled: true
+ use_ocp_driver_toolkit: true
+ devicePlugin: {}
+ mig:
+ strategy: single
+ validator:
+ plugin:
+ env:
+ - name: WITH_WORKLOAD
+ value: 'true'
+ nodeStatusExporter:
+ enabled: true
+ daemonsets: {}
+ toolkit:
+ enabled: true
+ EOF
+ ```
+
+1. Verify cluster policy.
+
+   Log in to the OpenShift console and browse to the operators. Make sure you're in the `nvidia-gpu-operator` namespace. It should say `State: Ready` once everything is complete.
+
+ :::image type="content" source="media/howto-gpu-workloads/nvidia-cluster-policy.png" alt-text="Screenshot of existing cluster policies on OpenShift console.":::
+
+## Validate GPU
+
+It may take some time for the Nvidia Operator and NFD to completely install and self-identify the machines. Run the following commands to validate that everything is running as expected:
+
+1. Verify that NFD can see your GPU(s).
+
+ ```bash
+ oc describe node | egrep 'Roles|pci-10de' | grep -v master
+ ```
+
+ The output should appear similar to the following:
+
+ ```bash
+ Roles: worker
+ feature.node.kubernetes.io/pci-10de.present=true
+ ```
+
+1. Verify node labels.
+
+ You can see the node labels by logging into the OpenShift console -> Compute -> Nodes -> nvidia-worker-southcentralus1-. You should see multiple Nvidia GPU labels and the pci-10de device from above.
+
+ :::image type="content" source="media/howto-gpu-workloads/node-labels.png" alt-text="Screenshot of GPU labels on OpenShift console.":::
+
+1. Nvidia SMI tool verification.
+
+ ```bash
+ oc project nvidia-gpu-operator
+ for i in $(oc get pod -lopenshift.driver-toolkit=true --no-headers |awk '{print $1}'); do echo $i; oc exec -it $i -- nvidia-smi ; echo -e '\n' ; done
+ ```
+
+   You should see output that shows the GPUs available on the host, as in the following example screenshot. (The output varies depending on the GPU worker type.)
+
+ :::image type="content" source="media/howto-gpu-workloads/test-gpu.png" alt-text="Screenshot of output showing available GPUs.":::
+
+1. Create a pod to run a GPU workload.
+
+ ```yaml
+ oc project nvidia-gpu-operator
+ cat <<EOF | oc apply -f -
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: cuda-vector-add
+ spec:
+ restartPolicy: OnFailure
+ containers:
+ - name: cuda-vector-add
+ image: "quay.io/giantswarm/nvidia-gpu-demo:latest"
+ resources:
+ limits:
+ nvidia.com/gpu: 1
+ nodeSelector:
+        nvidia.com/gpu.present: "true"
+ EOF
+ ```
+
+1. View logs.
+
+ ```bash
+ oc logs cuda-vector-add --tail=-1
+ ```
+
+> [!NOTE]
+> If you get an error `Error from server (BadRequest): container "cuda-vector-add" in pod "cuda-vector-add" is waiting to start: ContainerCreating`, try running `oc delete pod cuda-vector-add` and then re-run the create statement above.
+
+The output should be similar to the following (depending on GPU):
+
+ ```bash
+ [Vector addition of 5000 elements]
+ Copy input data from the host memory to the CUDA device
+ CUDA kernel launch with 196 blocks of 256 threads
+ Copy output data from the CUDA device to the host memory
+ Test PASSED
+ Done
+ ```
+If successful, the pod can be deleted:
+
+ ```bash
+ oc delete pod cuda-vector-add
+ ```
++
openshift Support Policies V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/support-policies-v4.md
Azure Red Hat OpenShift 4 supports node instances on the following virtual machi
|G*|Standard_G5|32|448| |G|Standard_GS5|32|448| |Mms|Standard_M128ms|128|3892|
+|NC4asT4v3|Standard_NC4as_T4_v3|4|28|
+|NC8asT4v3|Standard_NC8as_T4_v3|8|56|
+|NC16asT4v3|Standard_NC16as_T4_v3|16|110|
+|NC64asT4v3|Standard_NC64as_T4_v3|64|440|
\*Does not support Premium_LRS OS Disk, StandardSSD_LRS is used instead
orbital Concepts Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact-profile.md
At the moment autotrack is disabled and autotracking options are not applied.
## Understanding links and channels
-A whole band, unique in direction, and unique in polarity is called a link. Channels, which are children under links, specify center frequency, bandwidth, and endpoints. Typically there's only one channel per link but some applications require multiple channels per link. Refer to the Ground Station manual for a full list of supported bands and antenna capabilities.
+A whole band, unique in direction, and unique in polarity is called a link. Channels, which are children under links, specify center frequency, bandwidth, and endpoints. Typically there's only one channel per link but some applications require multiple channels per link.
-You can specify an EIRP and G/T requirement for each link. EIRP applies to uplinks and G/T applies to downlinks. You can give a name to each link and channel to keep track of these properties.
+You can specify an EIRP and G/T requirement for each link. EIRP applies to uplinks and G/T applies to downlinks. You can give a name to each link and channel to keep track of these properties. Each channel has a modem associated with it; follow the steps in [how to set up the software modem](modem-chain.md) to understand the options.
Look at the example below to see how to specify an RHCP channel and an LHCP channel if your mission requires dual-polarization on downlink.
Look at the example below to see how to specify an RHCP channel and an LHCP chan
} ``` -
-## Applying modems or bring your own
-
-We recommend taking advantage of Orbital's GSaaS software modem functionality if possible. This modem is managed by the service and is inserted between your endpoint and the incoming or outgoing virtual RF stream per channel. We have a library of modems that will be available in the marketplace for you to utilize. If there is no modem that can be used with your application then utilize the virtual RF delivery feature to bring your own modem.
-
-There are 4 parameters related to modem configurations. The table below shows you how to configure these parameters.
-
-| Parameter | Options |
-||--|
-| modulationConfiguration | 1. Null for virtual RF<br />2. JSON escaped modem config for software modem |
-| demodulationConfiguration | 1. Null for virtual RF<br />2. JSON escaped modem config for software modem |
-| encodingConfiguration | Null (not used) |
-| decodingConfiguration | Null (not used) |
-
-Use the same modem config file in uplink and downlink channels for full-duplex communications in the same band.
-
-The modem config should be a JSON escaped raw save file from a software modem. Please see the marketplace for modem options.
- ## Modifying or deleting a contact profile You can modify or delete the contact profile via the Portal or through the API.
When you onboard a third party network, you'll receive a token that identifies y
## Next steps - [Schedule a contact](schedule-contact.md)
+- [Configure the RF chain](modem-chain.md)
- [Update the Spacecraft TLE](update-tle.md)
orbital Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/contact-profile.md
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
| IP Address | Specify the IP Address for data retrieval/delivery | | Port | Specify the Port for data retrieval/delivery | | Protocol | Select TCP or UDP protocol for data retrieval/delivery |
- | Demodulation Configuration (Downlink only) | If applicable, paste your modem demodulation configuration |
+ | Demodulation Configuration (Downlink only) | Refer to [configure the modem chain](modem-chain.md) for options. |
| Decoding Configuration (Downlink only)| If applicable, paste your decoding configuration |
- | Modulation Configuration (Uplink only) | If applicable, paste your modem modulation configuration |
+ | Modulation Configuration (Uplink only) | Refer to [configure the modem chain](modem-chain.md) for options. |
| Encoding Configuration (Uplink only)| If applicable, paste your encoding configuration | :::image type="content" source="media/orbital-eos-contact-link.png" alt-text="Contact Profile Links Page" lightbox="media/orbital-eos-contact-link.png":::
Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
## Next steps - [How-to Receive real-time telemetry](receive-real-time-telemetry.md)
+- [Configure the RF chain](modem-chain.md)
- [Schedule a contact](schedule-contact.md) - [Cancel a contact](delete-contact.md)
orbital Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/downlink-aqua.md
sudo apt install socat
| IP Address | Enter the Private IP address of the virtual machine you created above (VM) | | Port | **56001** | | Protocol | **TCP** |
- | Demodulation Configuration | Leave this field **blank** or request a demodulation configuration from the [Azure Orbital team](mailto:msazureorbital@microsoft.com) to use a software modem. Include your Subscription ID, Spacecraft resource ID, and Contact Profile resource ID in your email request.|
+ | Demodulation Configuration | Select the 'Preset Named Modem Configuration' option and choose **Aqua Direct Broadcast**|
| Decoding Configuration | Leave this field **blank** |
orbital Modem Chain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/modem-chain.md
+
+ Title: Configure the RF chain - Azure Orbital
+description: Learn more about how to configure modems.
++++ Last updated : 08/30/2022+
+#Customer intent: As a satellite operator or user, I want to understand how to use software modems to establish RF connections with my satellite.
++
+# How to configure the RF chain
+
+You have the flexibility to choose between managed modem or virtual RF functionality using the Azure Orbital Ground Station service. These operational modes are specified on a per channel basis in the contact profile. See [ground station contact profile](concepts-contact-profile.md) to learn more about channels and links.
+
+## Managed modems vs virtual RF delivery
+
+We recommend taking advantage of Orbital Ground Station's managed modem functionality if possible. The modem is managed by the service and is inserted between your endpoint and the incoming or outgoing virtual RF stream for each pass. You can specify the modem setup using a modem configuration file, or apply one of the built-in named modem configurations for commonly used public satellites such as Aqua.
+
+Use virtual RF delivery if you wish to have tighter control over the modem setup or want to bring your own modem to the resource group. Orbital Ground Station will connect to your channel endpoint specified in the contact profile.
+
+## How to configure your channels
+
+The table below shows you how to configure the modem or virtual RF parameters.
+
+| Parameter | Options |
+||--|
+| modulationConfiguration | 1. Null/empty for virtual RF<br />2. Modem config for software modem <br /> 3. Named modem string |
+| demodulationConfiguration | 1. Null/empty for virtual RF<br />2. Modem config for software modem <br /> 3. Named modem string |
+| encodingConfiguration | Null (not used) |
+| decodingConfiguration | Null (not used) |
+
+> [!NOTE]
+> The endpoint specified for the channel applies to whichever option is selected. Review [how to prepare the network](prepare-network.md) for more details on setting up endpoints.
+
+### For full-duplex cases
+Use the same modem config file in uplink and downlink channels for full-duplex communications in the same band.
+
+### How to input the modem config
+You can enter the modem config when creating a contact profile object or add it in later. Modifications to existing modem configs are also allowed.
+
+#### Entering the modem config using the API
+Enter the modem config as a JSON escaped string from the desired modem config file when using the API.
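+
+For example, here's a small sketch that produces a JSON-escaped string from a saved modem config file (the file name is a placeholder):
+
+```python
+import json
+
+# Read the raw modem config file saved from your software modem (placeholder file name).
+with open("modem_config.xml") as config_file:
+    raw_config = config_file.read()
+
+# json.dumps escapes quotes, newlines, and other special characters and wraps the result in quotes,
+# producing the JSON-escaped string to place in the modulation/demodulation configuration parameter.
+escaped_config = json.dumps(raw_config)
+print(escaped_config)
+```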
+
+#### Entering the modem config using the portal
+Select 'Raw XML' and then **paste the modem config raw (without JSON escaping)** into the field shown below when entering channel details using the portal.
++
+### Named modem configuration
+We currently support the following named modem configurations.
+
+| Public Satellite Service | Named modem string | Note |
+|--|--|--|
+| Aqua Direct Broadcast | aqua_direct_broadcast | This is NASA AQUA's 15-Mbps direct broadcast service |
+| Aqua Direct Playback | aqua_direct_playback | This is NASA Aqua's 150-Mbps direct playback service |
+
+> [!NOTE]
+> We recommend using the Aqua Direct Broadcast modem configuration when testing with Aqua.
+
+#### Specifying a named modem configuration using the API
+Enter the named modem string into the demodulationConfiguration parameter when using the API.
+
+```javascript
+{
+ "location": "westus2",
+ "tags": null,
+ "id": "/subscriptions/c098d0b9-106a-472d-83d7-eb2421cfcfc2/resourcegroups/Demo/providers/Microsoft.Orbital/contactProfiles/Aqua-directbroadcast",
+ "name": "Aqua-directbroadcast",
+ "type": "Microsoft.Orbital/contactProfiles",
+ "properties": {
+ "minimumViableContactDuration": "PT1M",
+ "minimumElevationDegrees": 5,
+ "autoTrackingConfiguration": "disabled",
+ "eventHubUri": "/subscriptions/c098d0b9-106a-472d-83d7-eb2421cfcfc2/resourceGroups/Demo/providers/Microsoft.EventHub/namespaces/demo-orbital-eventhub/eventhubs/antenna-metrics-stream",
+ "links": [
+ {
+ "polarization": "RHCP",
+ "direction": "Downlink",
+ "gainOverTemperature": 0,
+ "eirpdBW": 0,
+ "channels": [
+ {
+ "centerFrequencyMHz": 8160,
+ "bandwidthMHz": 15,
+ "endPoint": {
+ "ipAddress": "10.6.0.4",
+ "endPointName": "my-endpoint",
+ "port": "50001",
+ "protocol": "TCP"
+ },
+ "modulationConfiguration": null,
+ "demodulationConfiguration": "aqua_direct_broadcast",
+ "encodingConfiguration": null,
+ "decodingConfiguration": null
+ }
+ ]
+ }
+ ]
+ }
+}
+```
+
+#### Specifying a named modem configuration using the portal
+
+Select 'Preset Named Modem Configuration' and choose a configuration as shown below when entering channel details using the portal.
++
+### How to use virtual RF
+
+Leave the modulationConfiguration or demodulationConfiguration parameters blank in the channel parameters to use the virtual RF delivery feature. Azure Orbital Ground Station uses the [Digital Intermediate Frequency Interoperability](https://dificonsortium.org/) or DIFI format for transport of virtual RF.
+
+>[!Note]
+>Azure Orbital Ground Station will provide an RF stream in accordance with the channel bandwidth setting to the endpoint for downlink.
+>
+>Azure Orbital Ground Station expects an RF stream in accordance with the channel bandwidth setting from the endpoint for uplink.
+
+## Next steps
+
+- [Register Spacecraft](register-spacecraft.md)
+- [Prepare the network](prepare-network.md)
+- [Schedule a contact](schedule-contact.md)
orbital Overview Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/overview-analytics.md
Previously updated : 08/18/2022 Last updated : 08/31/2022
Azure Orbital Analytics are Azure capabilities using spaceborne data and AI that
## What it provides
-Azure Orbital Analytics provides the ability to downlink spaceborne data from Azure Orbital Ground Station (AOGS), first- or third-party archives, or customer-acquired data directly into Azure. This data is efficiently stored using Azure Data and Storage services. From there, you can convert raw spaceborne sensor data into analysis-ready data using Azure Orbital Analytics processing pipelines.
+Azure Orbital Analytics shows customers how to downlink spaceborne data from Azure Orbital Ground Station (AOGS), first- or third-party archives, or customer-acquired data directly into Azure. Data is efficiently stored using Azure Data Platform components. From there, raw spaceborne sensor data can be converted into analysis-ready data using the Azure Orbital Analytics processing pipeline reference architectures.
## Integrations
orbital Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-network.md
Here's how to set up the link flows based on direction on tcp or udp preference.
## Next steps - [Register Spacecraft](register-spacecraft.md)
+- [Configure the modem chain](modem-chain.md)
- [Schedule a contact](schedule-contact.md)
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
Automation for major version upgrade isn't yet supported. For example, there's c
Microsoft has a team of committers and contributors who work full time on the open source Postgres project and are long term members of the community. Our contributions include but aren't limited to features, performance enhancements, bug fixes, security patches among other things. Our open source team also incorporates feedback from our Azure fleet (and customers) when prioritizing work, however please keep in mind that Postgres project has its own independent contribution guidelines, review process and release schedule.
-If you encounter a PostgreSQL engine defect while working with Azure Database for PostgreSQL, Microsoft will take immediate action to mitigate the issue when possible, Microsoft will fix the defect to address the production issue and work with the community to incorporate it.
+When a defect in the PostgreSQL engine is identified, Microsoft takes immediate action to mitigate the issue. If it requires a code change, Microsoft fixes the defect to address the production issue, if possible, and works with the community to incorporate the fix as quickly as possible.
<!--
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
The service runs the community version of PostgreSQL. This allows full applicati
We continue to support Single Server and encourage you to adopt Flexible Server which has richer capabilities such as zone resilient HA, predictable performance, maximum control, custom maintenance window, cost optimization controls and simplified developer experience suitable for your enterprise workloads. If we decide to retire any service, feature, API or SKU, you'll receive advance notice including a migration or transition path. Learn more about Microsoft Lifecycle policies [here](/lifecycle/faq/general-lifecycle).
+**2. What is Microsoft's policy to address PostgreSQL engine defects?**
+
+Please refer to Microsoft's current policy [here](../../postgresql/flexible-server/concepts-supported-versions.md#managing-postgresql-engine-defects).
+ ## Contacts For any questions or suggestions you might have on Azure Database for PostgreSQL flexible server, send an email to the Azure Database for PostgreSQL Team ([@Ask Azure DB for PostgreSQL](mailto:AskAzureDBforPostgreSQL@service.microsoft.com)). Please note that this email address isn't a technical support alias.
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
Previously updated : 08/17/2022 Last updated : 08/31/2022
The migration tool is agnostic of source and target PostgreSQL versions. Here ar
| Postgres 11 | Postgres 14 | Verify your application compatibility. | | Postgres 11 | Postgres 11 | You can choose to migrate to the same version in Flexible Server. You can then upgrade to a higher version in Flexible Server |
+>[!IMPORTANT]
+> If Flexible Server is not available in your Single Server region, you may choose to deploy Flexible Server in an [alternate region](../flexible-server/overview.md#azure-regions). We continue to add support for Flexible Server in more regions.
++ ## Overview The migration tool provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
search Search Get Started Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-python.md
ms.devlang: python Previously updated : 08/29/2022 Last updated : 08/31/2022
> * [Portal](search-get-started-portal.md) >
-Build a Jupyter Notebook that creates, loads, and queries an Azure Cognitive Search index using Python and the [azure-search-documents library](/python/api/overview/azure/search-documents-readme) in the Azure SDK for Python. This article explains how to build a notebook step by step. Alternatively, you can [download and run a finished Jupyter Python notebook](https://github.com/Azure-Samples/azure-search-python-samples).
+In this exercise, build a Jupyter Notebook that creates, loads, and queries an Azure Cognitive Search index using Python and the [azure-search-documents library](/python/api/overview/azure/search-documents-readme) in the Azure SDK for Python. This article explains how to build a notebook step by step. Alternatively, you can [download and run a finished Jupyter Python notebook](https://github.com/Azure-Samples/azure-search-python-samples).
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites
-The following services and tools are required for this quickstart.
+The following services and tools are used in this quickstart.
* Visual Studio Code with the Python extension (or equivalent tool), with Python 3.7 or later
The following services and tools are required for this quickstart.
## Copy a key and URL
-REST calls require the service URL and an access key on every request. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
+To connect to your search service, provide the endpoint and an access key. A search service is created with both, so if you added Azure Cognitive Search to your subscription, follow these steps to get the necessary information:
1. [Sign in to the Azure portal](https://portal.azure.com/), and in your search service **Overview** page, get the URL. An example endpoint might look like `https://mydemo.search.windows.net`.
REST calls require the service URL and an access key on every request. A search
![Get an HTTP endpoint and access key](media/search-get-started-rest/get-url-key.png "Get an HTTP endpoint and access key")
-All requests require an api-key on every request sent to your service. Having a valid key establishes trust, on a per request basis, between the application sending the request and the service that handles it.
+You'll provide these values in the next section when you set up the connection.
## Connect to Azure Cognitive Search
-In this task, start Jupyter Notebook and verify that you can connect to Azure Cognitive Search. You'll do this step by requesting a list of indexes from your service.
+In this task, create the notebook, load the libraries, and set up your clients.
-1. Create a new Python3 notebook.
+1. Create a new Python3 notebook in Visual Studio Code:
+
+ 1. Press F1, search for "Python Select Interpreter", and choose a version of Python 3.7 or later.
+ 1. Press F1 again and search for "Create: New Jupyter Notebook". You should have an empty, untitled `.ipynb` file open in the editor, ready for the first entry.
1. In the first cell, load the libraries from the Azure SDK for Python, including [azure-search-documents](/python/api/azure-search-documents).
- ```python
+ ```python
    %pip install azure-search-documents --pre
    %pip show azure-search-documents
In this task, start Jupyter Notebook and verify that you can connect to Azure Co
SimpleField, SearchableField )
- ```
+ ```
-1. In the second cell, input the request elements that will be constants on every request. Provide your search service name, admin API key, and query API key, copied in a previous step. This cell also sets up the clients you'll use for specific operations: [SearchIndexClient](/python/api/azure-search-documents/azure.search.documents.indexes.searchindexclient) to create an index, and [SearchClient](/python/api/azure-search-documents/azure.search.documents.searchclient) to query an index.
+1. Add a second cell and paste in the connection information. This cell also sets up the clients you'll use for specific operations: [SearchIndexClient](/python/api/azure-search-documents/azure.search.documents.indexes.searchindexclient) to create an index, and [SearchClient](/python/api/azure-search-documents/azure.search.documents.searchclient) to query an index.
- ```python
+ ```python
    service_name = "YOUR-SEARCH-SERVICE-NAME"
    admin_key = "YOUR-SEARCH-SERVICE-ADMIN-API-KEY"
In this task, start Jupyter Notebook and verify that you can connect to Azure Co
search_client = SearchClient(endpoint=endpoint, index_name=index_name, credential=AzureKeyCredential(admin_key))
- ```
+ ```
1. In the third cell, run a delete_index operation to clear your service of any existing *hotels-quickstart* indexes. Deleting the index allows you to create another *hotels-quickstart* index of the same name.
- ```python
+ ```python
    try:
        result = admin_client.delete_index(index_name)
        print('Index', index_name, 'Deleted')
    except Exception as ex:
        print(ex)
- ```
+ ```
1. Run each step. ## 1 - Create an index
-Required elements of an index include a name, a fields collection, and a key. The fields collection defines the structure of a logical *search document*, used for both loading data and returning results.
+Required index elements include a name, a fields collection, and a document key that uniquely identifies each search document. The fields collection defines the structure of a logical *search document*, used for both loading data and returning results.
-Each field has a name, type, and attributes that determine how the field is used (for example, whether it's full-text searchable, filterable, or retrievable in search results). Within an index, one of the fields of type `Edm.String` must be designated as the *key* for document identity.
+Within the fields collection, each field has a name, type, and attributes that determine how the field is used (for example, whether it's full-text searchable, filterable, or retrievable in search results). Within an index, one of the fields of type `Edm.String` must be designated as the *key* for document identity.
This index is named "hotels-quickstart" and has the field definitions you see below. It's a subset of a larger [Hotels index](https://github.com/Azure-Samples/azure-search-sample-data/blob/master/hotels/Hotels_IndexDefinition.JSON) used in other walkthroughs. We trimmed it in this quickstart for brevity.
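As a rough sketch of what such a definition looks like with the azure-search-documents library, here's an abbreviated, hypothetical subset of the fields rather than the quickstart's full list (it assumes the `admin_client` and `index_name` variables set up in the connection cell):

```python
from azure.search.documents.indexes.models import (
    SearchIndex,
    SearchFieldDataType,
    SimpleField,
    SearchableField
)

# A key field plus a few searchable fields; attributes control how each field can be used.
fields = [
    SimpleField(name="HotelId", type=SearchFieldDataType.String, key=True),
    SearchableField(name="HotelName", type=SearchFieldDataType.String, sortable=True),
    SearchableField(name="Category", type=SearchFieldDataType.String, facetable=True, filterable=True),
    SearchableField(name="Tags", collection=True, type=SearchFieldDataType.String, facetable=True, filterable=True),
]

# A suggester backs autocomplete and suggestions; the source fields here are illustrative.
suggester = [{"name": "sg", "source_fields": ["Tags"]}]

index = SearchIndex(name=index_name, fields=fields, suggesters=suggester)
result = admin_client.create_index(index)
print("Index", result.name, "created")
```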
This step shows you how to query an index using the **search** method of the [se
print("Category: {}".format(result["Category"])) ```
-1. In this example, we'll use the autocomplete function. Autocomplete is typically used in a search box to provide potential matches as the user types into the search box.
+1. In the last example, we'll use the autocomplete function. Autocomplete is typically used in a search box to provide potential matches as the user types into the search box.
When the index was created, a suggester named `sg` was also created as part of the request. A suggester definition specifies which fields can be used to find potential matches to suggester requests. In this example, those fields are 'Tags', 'Address/City', and 'Address/Country'. To simulate autocomplete, pass in the letters "sa" as a partial string. The autocomplete method of [SearchClient](/python/api/azure-search-documents/azure.search.documents.searchclient) sends back potential term matches.
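A minimal sketch of that call might look like this (it assumes the `search_client` created earlier and the `sg` suggester defined with the index):

```python
# Ask for completions of the partial string "sa"; matches come from the suggester's source fields.
search_suggestion = "sa"
results = search_client.autocomplete(search_text=search_suggestion, suggester_name="sg", mode="twoTerms")

print("Autocomplete for:", search_suggestion)
for result in results:
    print(result["text"])
```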
If you're using a free service, remember that you're limited to three indexes, i
## Next steps
-In this JavaScript quickstart, you worked through a series of tasks to create an index, load it with documents, and run queries. To continue learning, try the following tutorial.
+In this Python quickstart, you worked through the fundamental workflow using the azure.search.documents library from the Python SDK. You performed tasks that created an index, loaded it with documents, and ran queries. To continue learning, try the following tutorial.
> [!div class="nextstepaction"] > [Tutorial: Add search to web apps](tutorial-python-overview.md)
search Tutorial Csharp Create Load Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-create-load-index.md
Previously updated : 05/21/2021 Last updated : 08/30/2022 ms.devlang: csharp
search Tutorial Csharp Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-deploy-static-web-app.md
Previously updated : 04/23/2021 Last updated : 08/30/2022 ms.devlang: csharp
The Static Web app pulls the information and files for deployment from GitHub us
## Create a Static Web App in Visual Studio Code
-1. Select **Azure** from the Activity Bar, then select **Static Web Apps** from the Side bar.
-1. Right-click on the subscription name then select **Create Static Web App (Advanced)**.
+1. Select **Azure** from the Activity Bar, then open **Resources** from the Side bar.
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click on the subscription name then select **Create Static Web App (Advanced)**.":::
+1. Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**.
+
+ :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**":::
1. If you see a pop-up window in VS Code asking which branch you want to deploy from, select the default branch, usually **master** or **main**.
search Tutorial Csharp Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-overview.md
Previously updated : 04/23/2021 Last updated : 08/30/2022 ms.devlang: csharp
The application is available:
## What does the sample do?
-This sample website provides access to a catalog of 10,000 books. A user can search the catalog by entering text in the search bar. While the user enters text, the website uses the Search Index's suggest feature to complete the text. Once the query finishes, the list of books is displayed with a portion of the details. A user can select a book to see all the details, stored in the Search Index, of the book.
+This sample website provides access to a catalog of 10,000 books. A user can search the catalog by entering text in the search bar. While the user enters text, the website uses the search index's suggest feature to complete the text. Once the query finishes, the list of books is displayed with a portion of the details. A user can select a book to see all the details, stored in the search index, of the book.
The search experience includes:
The [sample](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/m
Install the following for your local development environment. -- [.NET 3](https://dotnet.microsoft.com/download/dotnet/5.0)
+- [.NET 5](https://dotnet.microsoft.com/download/dotnet/5.0)
- [Git](https://git-scm.com/downloads) - [Visual Studio Code](https://code.visualstudio.com/) and the following extensions - [Azure Resources](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureresourcegroups)
Forking the sample repository is critical to be able to deploy the Static Web Ap
## Create a resource group for your Azure resources 1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and select **Create resource group**.
+1. In Resources, select Add (**+**), and then select **Create Resource Group**.
- :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and select **Create resource group**.":::
+ :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In Resources, select Add (**+**), and then select **Create Resource Group**.":::
1. Enter a resource group name, such as `cognitive-search-website-tutorial`. 1. Select a location close to you. 1. When you create the Cognitive Search and Static Web App resources, later in the tutorial, use this resource group.
search Tutorial Javascript Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-deploy-static-web-app.md
Previously updated : 03/18/2021 Last updated : 08/30/2022 ms.devlang: javascript
The Static Web app pulls the information and files for deployment from GitHub us
## Create a Static Web App in Visual Studio Code
-1. Select **Azure** from the Activity Bar, then select **Static Web Apps** from the Side bar.
+1. Select **Azure** from the Activity Bar, then open **Resources** from the Side bar.
+
+1. Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**.
+
+ :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**":::
1. If you see a pop-up window in VS Code asking which branch you want to deploy from, select the default branch, usually **master** or **main**.
The Static Web app pulls the information and files for deployment from GitHub us
To rollback the changes, in VS Code select the Source Control icon in the Activity bar, then select each changed file in the Changes list and select the **Discard changes** icon.
-1. Right-click on the subscription name then select **Create Static Web App (Advanced)**.
-
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click on the subscription name then select **Create Static Web App (Advanced)**.":::
- 1. Follow the prompts to provide the following information: |Prompt|Enter|
search Tutorial Javascript Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-javascript-overview.md
Previously updated : 03/18/2021 Last updated : 08/30/2022 ms.devlang: javascript
Forking the sample repository is critical to be able to deploy the Static Web Ap
## Create a resource group for your Azure resources 1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and select **Create resource group**.
+1. In Resources, select Add (**+**), and then select **Create Resource Group**.
- :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and select **Create resource group**.":::
+ :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In Resources, select Add (**+**), and then select **Create Resource Group**.":::
1. Enter a resource group name, such as `cognitive-search-website-tutorial`. 1. Select a location close to you. 1. When you create the Cognitive Search and Static Web App resources, later in the tutorial, use this resource group.
search Tutorial Python Deploy Static Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-deploy-static-web-app.md
Previously updated : 11/17/2021 Last updated : 08/30/2022 ms.devlang: python
The Static Web app pulls the information and files for deployment from GitHub us
## Create a Static Web App in Visual Studio Code
-1. Select **Azure** from the Activity Bar, then select **Static Web Apps** from the Side bar.
-1. Right-click on the subscription name then select **Create Static Web App (Advanced)**.
+1. Select **Azure** from the Activity Bar, then open **Resources** from the Side bar.
- :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click on the subscription name then select **Create Static Web App (Advanced)**.":::
+1. Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**.
+
+ :::image type="content" source="media/tutorial-javascript-create-load-index/visual-studio-code-create-static-web-app-resource-advanced.png" alt-text="Right-click **Static Web Apps** and then select **Create Static Web App (Advanced)**":::
1. Follow the 8 prompts to provide the following information:
search Tutorial Python Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-python-overview.md
Previously updated : 11/17/2021 Last updated : 08/30/2022 ms.devlang: python
Forking the sample repository is critical to be able to deploy the static web ap
## Create a resource group for your Azure resources 1. In Visual Studio Code, open the [Activity bar](https://code.visualstudio.com/docs/getstarted/userinterface), and select the Azure icon.
-1. In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and select **Create resource group**.
+1. In Resources, select Add (**+**), and then select **Create Resource Group**.
- :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In the Side bar, **right-click on your Azure subscription** under the `Resource Groups` area and select **Create resource group**.":::
+ :::image type="content" source="./media/tutorial-javascript-overview/visual-studio-code-create-resource-group.png" alt-text="In Resources, select Add (**+**), and then select **Create Resource Group**.":::
1. Enter a resource group name, such as `cognitive-search-website-tutorial`. 1. Select a location close to you. 1. When you create the Cognitive Search and Static Web App resources, later in the tutorial, use this resource group.
sentinel Sentinel Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-service-limits.md
This article lists the most common service limits you might encounter as you use
## Next steps
-[Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md)
+- [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md)
+- [Azure Monitor service limits](../azure-monitor/service-limits.md)
sentinel Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new-archive.md
Noted features are currently in PREVIEW. The [Azure Preview Supplemental Terms](
> You can also contribute! Join us in the [Azure Sentinel Threat Hunters GitHub community](https://github.com/Azure/Azure-Sentinel/wiki).
+## December 2021
+
+- [Apache Log4j Vulnerability Detection solution](#apache-log4j-vulnerability-detection-solution-public-preview)
+- [IoT OT Threat Monitoring with Defender for IoT solution](#iot-ot-threat-monitoring-with-defender-for-iot-solution-public-preview)
+- [Continuous Threat Monitoring for GitHub solution](#ingest-github-logs-into-your-microsoft-sentinel-workspace-public-preview)
++
+### Apache Log4j Vulnerability Detection solution (Public preview)
+
+Remote code execution vulnerabilities related to Apache Log4j were disclosed on 9 December 2021. The vulnerability allows for unauthenticated remote code execution, and it's triggered when a specially crafted string, provided by the attacker through a variety of different input vectors, is parsed and processed by the vulnerable Log4j 2 component.
+
+The [Apache Log4J Vulnerability Detection](sentinel-solutions-catalog.md#domain-solutions) solution was added to the Microsoft Sentinel content hub to help customers monitor, detect, and investigate signals related to the exploitation of this vulnerability, using Microsoft Sentinel.
+
+For more information, see the [Microsoft Security Response Center blog](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/) and [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md).
+
+### IoT OT Threat Monitoring with Defender for IoT solution (Public preview)
+
+The new **IoT OT Threat Monitoring with Defender for IoT** solution available in the [Microsoft Sentinel content hub](sentinel-solutions-catalog.md#microsoft) provides further support for the Microsoft Sentinel integration with Microsoft Defender for IoT, bridging gaps between IT and OT security challenges, and empowering SOC teams with enhanced abilities to efficiently and effectively detect and respond to OT threats.
+
+For more information, see [Tutorial: Integrate Microsoft Sentinel and Microsoft Defender for IoT](iot-solution.md).
++
+### Ingest GitHub logs into your Microsoft Sentinel workspace (Public preview)
+
+Use the new [Continuous Threat Monitoring for GitHub](sentinel-solutions-catalog.md#github) solution and [data connector](data-connectors-reference.md#github-preview) to ingest your GitHub logs into your Microsoft Sentinel workspace.
+
+The **Continuous Threat Monitoring for GitHub** solution includes a data connector, relevant analytics rules, and a workbook that you can use to visualize your log data.
+
+For example, view the number of users added to or removed from GitHub repositories, and how many repositories were created, forked, or cloned, in the selected time frame.
+
+> [!NOTE]
+> The **Continuous Threat Monitoring for GitHub** solution is supported for GitHub Enterprise licenses only.
+>
+
+For more information, see [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions (Public preview)](sentinel-solutions-deploy.md) and [instructions](data-connectors-reference.md#github-preview) for installing the GitHub data connector.
+
+## November 2021
+
+- [Incident advanced search now available in GA](#incident-advanced-search-now-available-in-ga)
+- [Amazon Web Services S3 connector now available (Public preview)](#amazon-web-services-s3-connector-now-available-public-preview)
+- [Windows Forwarded Events connector now available (Public preview)](#windows-forwarded-events-connector-now-available-public-preview)
+- [Near-real-time (NRT) threat detection rules now available (Public preview)](#near-real-time-nrt-threat-detection-rules-now-available-public-preview)
+- [Fusion engine now detects emerging and unknown threats (Public preview)](#fusion-engine-now-detects-emerging-and-unknown-threats-public-preview)
+- [Fine-tuning recommendations for your analytics rules (Public preview)](#get-fine-tuning-recommendations-for-your-analytics-rules-public-preview)
+- [Free trial updates](#free-trial-updates)
+- [Content hub and new solutions (Public preview)](#content-hub-and-new-solutions-public-preview)
+- [Continuous deployment from your content repositories (Public preview)](#enable-continuous-deployment-from-your-content-repositories-public-preview)
+- [Enriched threat intelligence with Geolocation and WhoIs data (Public preview)](#enriched-threat-intelligence-with-geolocation-and-whois-data-public-preview)
+- [Use notebooks with Azure Synapse Analytics in Microsoft Sentinel (Public preview)](#use-notebooks-with-azure-synapse-analytics-in-microsoft-sentinel-public-preview)
+- [Enhanced Notebooks area in Microsoft Sentinel](#enhanced-notebooks-area-in-microsoft-sentinel)
+- [Microsoft Sentinel renaming](#microsoft-sentinel-renaming)
+- [Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel](#deploy-and-monitor-azure-key-vault-honeytokens-with-microsoft-sentinel)
+
+### Incident advanced search now available in GA
+
+Searching for incidents using the advanced search functionality is now generally available.
+
+The advanced incident search provides the ability to search across more data, including alert details, descriptions, entities, tactics, and more.
+
+For more information, see [Search for incidents](investigate-cases.md#search-for-incidents).
+
+### Amazon Web Services S3 connector now available (Public preview)
+
+You can now connect Microsoft Sentinel to your Amazon Web Services (AWS) S3 storage bucket, in order to ingest logs from a variety of AWS services.
+
+For now, you can use this connection to ingest VPC Flow Logs and GuardDuty findings, as well as AWS CloudTrail.
+
+For more information, see [Connect Microsoft Sentinel to S3 Buckets to get Amazon Web Services (AWS) data](connect-aws.md).
+
+### Windows Forwarded Events connector now available (Public preview)
+
+You can now stream event logs from Windows Servers connected to your Microsoft Sentinel workspace using Windows Event Collection / Windows Event Forwarding (WEC / WEF), thanks to this new data connector. The connector uses the new Azure Monitor Agent (AMA), which provides a number of advantages over the legacy Log Analytics agent (also known as the MMA):
+
+- **Scalability:** If you've enabled Windows Event Collection (WEC), you can install the Azure Monitor Agent (AMA) on the WEC machine to collect logs from many servers with a single connection point.
+
+- **Speed:** The AMA can send data at an improved rate of 5 K EPS, allowing for faster data refresh.
+
+- **Efficiency:** The AMA allows you to design complex Data Collection Rules (DCR) to filter the logs at their source, choosing the exact events to stream to your workspace. DCRs help lower your network traffic and your ingestion costs by leaving out undesired events.
+
+- **Coverage:** WEC / WEF enables the collection of Windows Event logs from legacy (on-premises and physical) servers and also from high-usage or sensitive machines, such as domain controllers, where installing an agent is undesired.
+
+We recommend using this connector with the [Microsoft Sentinel Information Model (ASIM)](normalization.md) parsers installed to ensure full support for data normalization.
+
+Learn more about the [Windows Forwarded Events connector](data-connectors-reference.md#windows-forwarded-events-preview).
+
+### Near-real-time (NRT) threat detection rules now available (Public preview)
+
+When you're faced with security threats, time and speed are of the essence. You need to be aware of threats as they materialize so you can analyze and respond quickly to contain them. Microsoft Sentinel's near-real-time (NRT) analytics rules offer you faster threat detection - closer to that of an on-premises SIEM - and the ability to shorten response times in specific scenarios.
+
+Microsoft Sentinel's [near-real-time analytics rules](detect-threats-built-in.md#nrt) provide up-to-the-minute threat detection out-of-the-box. This type of rule was designed to be highly responsive by running its query at intervals just one minute apart.
+
+Learn more about [NRT rules](near-real-time-rules.md) and [how to use them](create-nrt-rules.md).
+
+### Fusion engine now detects emerging and unknown threats (Public preview)
+
+In addition to detecting attacks based on [predefined scenarios](fusion-scenario-reference.md), Microsoft Sentinel's ML-powered Fusion engine can help you find the emerging and unknown threats in your environment by applying extended ML analysis and by correlating a broader scope of anomalous signals, while keeping the alert fatigue low.
+
+The Fusion engine's ML algorithms constantly learn from existing attacks and apply analysis based on how security analysts think. It can therefore discover previously undetected threats from millions of anomalous behaviors across the kill-chain throughout your environment, which helps you stay one step ahead of the attackers.
+
+Learn more about [Fusion for emerging threats](fusion.md#fusion-for-emerging-threats).
+
+Also, the [Fusion analytics rule is now more configurable](configure-fusion-rules.md), reflecting its increased functionality.
+
+### Get fine-tuning recommendations for your analytics rules (Public preview)
+
+Fine-tuning threat detection rules in your SIEM can be a difficult, delicate, and continuous process of balancing between maximizing your threat detection coverage and minimizing false positive rates. Microsoft Sentinel simplifies and streamlines this process by using machine learning to analyze billions of signals from your data sources as well as your responses to incidents over time, deducing patterns and providing you with actionable recommendations and insights that can significantly lower your tuning overhead and allow you to focus on detecting and responding to actual threats.
+
+[Tuning recommendations and insights](detection-tuning.md) are now built in to your analytics rules.
+
+### Free trial updates
+
+Microsoft Sentinel's free trial continues to support new or existing Log Analytics workspaces at no additional cost for the first 31 days.
+
+We're evolving our free trial experience to include the following updates:
+
+- **New Log Analytics workspaces** can ingest up to 10 GB/day of log data for the first 31 days at no cost. New workspaces include workspaces that are less than three days old.
+
+ Both Log Analytics data ingestion and Microsoft Sentinel charges are waived during the 31-day trial period. This free trial is subject to a 20-workspace limit per Azure tenant.
+
+- **Existing Log Analytics workspaces** can enable Microsoft Sentinel at no additional cost. Existing workspaces include any workspaces created more than three days ago.
+
+ Only the Microsoft Sentinel charges are waived during the 31-day trial period.
+
+Usage beyond these limits will be charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel) page. Charges related to additional capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.
+
+> [!TIP]
+> During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel. This tab also displays details about the dates of your free trial, and how many days you have left until it expires.
+>
+
+For more information, see [Plan and manage costs for Microsoft Sentinel](billing.md).
+
+### Content hub and new solutions (Public preview)
+
+Microsoft Sentinel now provides a **Content hub**, a centralized location to find and deploy Microsoft Sentinel out-of-the-box (built-in) content and solutions to your Microsoft Sentinel workspace. Find the content you need by filtering for content type, support models, categories and more, or use the powerful text search.
+
+Under **Content management**, select **Content hub**. Select a solution to view more details on the right, and then click **Install** to install it in your workspace.
++
+The following list includes highlights of new, out-of-the-box solutions added to the Content hub:
+
+ :::column span="":::
+ - Microsoft Sentinel Training Lab
+ - Cisco ASA
+ - Cisco Duo Security
+ - Cisco Meraki
+ - Cisco StealthWatch
+ - Digital Guardian
+ - 365 Dynamics
+ - GCP Cloud DNS
+ :::column-end:::
+ :::column span="":::
+ - GCP CloudMonitor
+ - GCP Identity and Access Management
+ - FalconForce
+ - FireEye NX
+ - Flare Systems Firework
+ - Forescout
+ - Fortinet Fortigate
+ - Imperva Cloud FAW
+ :::column-end:::
+ :::column span="":::
+ - Insider Risk Management (IRM)
+ - IronNet CyberSecurity Iron Defense
+ - Lookout
+ - McAfee Network Security Platform
+ - Microsoft MITRE ATT&CK Solution for Cloud
+ - Palo Alto PAN-OS
+ :::column-end:::
+ :::column span="":::
+ - Rapid7 Nexpose / Insight VM
+ - ReversingLabs
+ - RSA SecurID
+ - Semperis
+ - Tenable Nessus Scanner
+ - Vectra Stream
+ - Zero Trust
+ :::column-end:::
+
+For more information, see:
+
+- [Learn about Microsoft Sentinel solutions](sentinel-solutions.md)
+- [Discover and deploy Microsoft Sentinel solutions](sentinel-solutions-deploy.md)
+- [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md)
+
+### Enable continuous deployment from your content repositories (Public preview)
+
+The new Microsoft Sentinel **Repositories** page provides the ability to manage and deploy your custom content from GitHub or Azure DevOps repositories, as an alternative to managing them in the Azure portal. This capability introduces a more streamlined and automated approach for managing and deploying content across Microsoft Sentinel workspaces.
+
+If you store your custom content in an external repository in order to maintain it outside of Microsoft Sentinel, now you can connect that repository to your Microsoft Sentinel workspace. Content you add, create, or edit in your repository is automatically deployed to your Microsoft Sentinel workspaces, and will be visible from the various Microsoft Sentinel galleries, such as the **Analytics**, **Hunting**, or **Workbooks** pages.
+
+For more information, see [Deploy custom content from your repository](ci-cd.md).
+
+### Enriched threat intelligence with Geolocation and WhoIs data (Public preview)
+
+Now, any threat intelligence data that you bring in to Microsoft Sentinel via data connectors and logic app playbooks, or create in Microsoft Sentinel, is automatically enriched with GeoLocation and WhoIs information.
+
+GeoLocation and WhoIs data can provide more context for investigations where the selected indicator of compromise (IOC) is found.
+
+For example, use GeoLocation data to find details like *Organization* or *Country* for the indicator, and WhoIs data to find details like *Registrar* and *Record creation* data.
+
+You can view GeoLocation and WhoIs data on the **Threat Intelligence** pane for each indicator of compromise that you've imported into Microsoft Sentinel. Details for the indicator are shown on the right, including any Geolocation and WhoIs data available.
+
+For example:
++
+> [!TIP]
+> The Geolocation and WhoIs information come from the Microsoft Threat Intelligence service, which you can also access via API. For more information, see [Enrich entities with geolocation data via API](geolocation-data-api.md).
+>
+
+For more information, see:
+
+- [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md)
+- [Understand threat intelligence integrations](threat-intelligence-integration.md)
+- [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md)
+- [Connect threat intelligence platforms](connect-threat-intelligence-tip.md)
+
+### Use notebooks with Azure Synapse Analytics in Microsoft Sentinel (Public preview)
+
+Microsoft Sentinel now integrates Jupyter notebooks with Azure Synapse for large-scale security analytics scenarios.
+
+Until now, Jupyter notebooks in Microsoft Sentinel have been integrated with Azure Machine Learning. This functionality supports users who want to incorporate notebooks, popular open-source machine learning toolkits, and libraries such as TensorFlow, as well as their own custom models, into security workflows.
+
+The new Azure Synapse integration provides extra analytic horsepower, such as:
+
+- **Security big data analytics**, using cost-optimized, fully managed Azure Synapse Apache Spark compute pool.
+
+- **Cost-effective Data Lake access** to build analytics on historical data via Azure Data Lake Storage Gen2, which is a set of capabilities dedicated to big data analytics, built on top of Azure Blob Storage.
+
+- **Flexibility to integrate data sources** into security operation workflows from multiple sources and formats.
+
+- **PySpark, a Python-based API** for using the Spark framework in combination with Python, reducing the need to learn a new programming language if you're already familiar with Python.
+
+To support this integration, we added the ability to create and launch an Azure Synapse workspace directly from Microsoft Sentinel. We also added new, sample notebooks to guide you through configuring the Azure Synapse environment, setting up a continuous data export pipeline from Log Analytics into Azure Data Lake Storage, and then hunting on that data at scale.
+
+For more information, see [Integrate notebooks with Azure Synapse](notebooks-with-synapse.md).
+
+### Enhanced Notebooks area in Microsoft Sentinel
+
+The **Notebooks** area in Microsoft Sentinel also now has an **Overview** tab, where you can find basic information about notebooks, and a new **Notebook types** column in the **Templates** tab to indicate the type of each notebook displayed. For example, notebooks might have types of **Getting started**, **Configuration**, **Hunting**, and now **Synapse**.
+
+For example:
++
+For more information, see [Use Jupyter notebooks to hunt for security threats](notebooks.md).
+
+### Microsoft Sentinel renaming
+
+Starting in November 2021, Azure Sentinel is being renamed to Microsoft Sentinel, and you'll see upcoming updates in the portal, documentation, and other resources in parallel.
+
+Earlier entries in this article and the older [Archive for What's new in Sentinel](whats-new-archive.md) continue to use the name *Azure* Sentinel, as that was the service name when those features were new.
+
+For more information, see our [blog on recent security enhancements](https://aka.ms/secblg11).
+
+### Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel
+
+The new **Microsoft Sentinel Deception** solution helps you watch for malicious activity in your key vaults by helping you to deploy decoy keys and secrets, called *honeytokens*, to selected Azure key vaults.
+
+Once deployed, any access or operation involving the honeytoken keys and secrets generates incidents that you can investigate in Microsoft Sentinel.
+
+Since there's no reason to actually use honeytoken keys and secrets, any similar activity in your workspace may be malicious and should be investigated.
+
+The **Microsoft Sentinel Deception** solution includes a workbook to help you deploy the honeytokens, either at scale or one at a time, watchlists to track the honeytokens created, and analytics rules to generate incidents as needed.
+
+For more information, see [Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel (Public preview)](monitor-key-vault-honeytokens.md).
+
+## October 2021
+
+- [Windows Security Events connector using Azure Monitor Agent now in GA](#windows-security-events-connector-using-azure-monitor-agent-now-in-ga)
+- [Defender for Office 365 events now available in the Microsoft 365 Defender connector (Public preview)](#defender-for-office-365-events-now-available-in-the-microsoft-365-defender-connector-public-preview)
+- [Playbook templates and gallery now available (Public preview)](#playbook-templates-and-gallery-now-available-public-preview)
+- [Template versioning for your scheduled analytics rules (Public preview)](#manage-template-versions-for-your-scheduled-analytics-rules-public-preview)
+- [DHCP normalization schema (Public preview)](#dhcp-normalization-schema-public-preview)
+
+### Windows Security Events connector using Azure Monitor Agent now in GA
+
+The new version of the Windows Security Events connector, based on the Azure Monitor Agent, is now generally available. For more information, see [Connect to Windows servers to collect security events](connect-windows-security-events.md?tabs=AMA).
+
+### Defender for Office 365 events now available in the Microsoft 365 Defender connector (Public preview)
+
+In addition to those from Microsoft Defender for Endpoint, you can now ingest raw [advanced hunting events](/microsoft-365/security/defender/advanced-hunting-overview) from [Microsoft Defender for Office 365](/microsoft-365/security/office-365-security/overview) through the [Microsoft 365 Defender connector](connect-microsoft-365-defender.md). [Learn more](microsoft-365-defender-sentinel-integration.md#advanced-hunting-event-collection).
+
+### Playbook templates and gallery now available (Public preview)
+
+A playbook template is a pre-built, tested, and ready-to-use workflow that can be customized to meet your needs. Templates can also serve as a reference for best practices when developing playbooks from scratch, or as inspiration for new automation scenarios.
+
+Playbook templates have been developed by the Sentinel community, independent software vendors (ISVs), and Microsoft's own experts, and you can find them in the **Playbook templates** tab (under **Automation**), as part of an [Azure Sentinel solution](sentinel-solutions.md), or in the [Azure Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks).
+
+For more information, see [Create and customize playbooks from built-in templates](use-playbook-templates.md).
+
+### Manage template versions for your scheduled analytics rules (Public preview)
+
+When you create analytics rules from [built-in Azure Sentinel rule templates](detect-threats-built-in.md), you effectively create a copy of the template. Past that point, the active rule is ***not*** dynamically updated to match any changes that get made to the originating template.
+
+However, rules created from templates ***do*** remember which templates they came from, which allows you two advantages:
+
+- If you made changes to a rule when creating it from a template (or at any time after that), you can always revert the rule back to its original version (as a copy of the template).
+
+- If a template is updated, you'll be notified and you can choose to update your rules to the new version of their templates, or leave them as they are.
+
+[Learn how to manage these tasks](manage-analytics-rule-templates.md), and what to keep in mind. These procedures apply to any [Scheduled](detect-threats-built-in.md#scheduled) analytics rules created from templates.
+
+### DHCP normalization schema (Public preview)
+
+The Advanced Security Information Model (ASIM) now supports a DHCP normalization schema, which is used to describe events reported by a DHCP server and is used by Azure Sentinel to enable source-agnostic analytics.
+
+Events described in the DHCP normalization schema include serving requests from client systems for DHCP IP address leases and updating a DNS server with the leases granted.
+
+For more information, see:
+
+- [Azure Sentinel DHCP normalization schema reference (Public preview)](dhcp-normalization-schema.md)
+- [Normalization and the Azure Sentinel Information Model (ASIM)](normalization.md)
+
+## September 2021
+
+- [Data connector health enhancements (Public preview)](#data-connector-health-enhancements-public-preview)
+- [New in docs: scaling data connector documentation](#new-in-docs-scaling-data-connector-documentation)
+- [Azure Storage account connector changes](#azure-storage-account-connector-changes)
+
+### Data connector health enhancements (Public preview)
+
+Azure Sentinel now provides the ability to enhance your data connector health monitoring with a new *SentinelHealth* table. The *SentinelHealth* table is created after you [turn on the Azure Sentinel health feature](monitor-sentinel-health.md) in your Azure Sentinel workspace, at the first success or failure health event generated.
+
+For more information, see [Monitor the health of your data connectors with this Azure Sentinel workbook](monitor-data-connector-health.md).
+
+> [!NOTE]
+> The *SentinelHealth* data table is currently supported only for selected data connectors. For more information, see [Supported data connectors](monitor-data-connector-health.md#supported-data-connectors).
+>
++
+### New in docs: scaling data connector documentation
+
+As we continue to add more and more built-in data connectors for Azure Sentinel, we reorganized our data connector documentation to reflect this scaling.
+
+For most data connectors, we replaced full articles that describe an individual connector with a series of generic procedures and a full reference of all currently supported connectors.
+
+Check the [Azure Sentinel data connectors reference](data-connectors-reference.md) for details about your connector, including references to the relevant generic procedure, as well as extra information and configurations required.
+
+For more information, see:
+
+- **Conceptual information**: [Connect data sources](connect-data-sources.md)
+
+- **Generic how-to articles**:
+
+ - [Connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md)
+ - [Connect your data source to the Azure Sentinel Data Collector API to ingest data](connect-rest-api-template.md)
+ - [Get CEF-formatted logs from your device or appliance into Azure Sentinel](connect-common-event-format.md)
+ - [Collect data from Linux-based sources using Syslog](connect-syslog.md)
+ - [Collect data in custom log formats to Azure Sentinel with the Log Analytics agent](connect-custom-logs.md)
+ - [Use Azure Functions to connect your data source to Azure Sentinel](connect-azure-functions-template.md)
+ - [Resources for creating Azure Sentinel custom connectors](create-custom-connector.md)
+
+### Azure Storage account connector changes
+
+Due to some changes made within the Azure Storage account resource configuration itself, the connector also needs to be reconfigured.
+The storage account (parent) resource has within it other (child) resources for each type of storage: files, tables, queues, and blobs.
+
+When configuring diagnostics for a storage account, you must select and configure, in turn:
+- The parent account resource, exporting the **Transaction** metric.
+- Each of the child storage-type resources, exporting all the logs and metrics (see the table above).
+
+You'll only see the storage types that you actually have defined resources for.
++
+## August 2021
+
+- [Advanced incident search (Public preview)](#advanced-incident-search-public-preview)
+- [Fusion detection for Ransomware (Public preview)](#fusion-detection-for-ransomware-public-preview)
+- [Watchlist templates for UEBA data](#watchlist-templates-for-ueba-data-public-preview)
+- [File event normalization schema (Public preview)](#file-event-normalization-schema-public-preview)
+- [New in docs: Best practice guidance](#new-in-docs-best-practice-guidance)
+
+### Advanced incident search (Public preview)
+
+By default, incident searches run across the **Incident ID**, **Title**, **Tags**, **Owner**, and **Product name** values only. Azure Sentinel now provides [advanced search options](investigate-cases.md#search-for-incidents) to search across more data, including alert details, descriptions, entities, tactics, and more.
+
+For example:
++
+For more information, see [Search for incidents](investigate-cases.md#search-for-incidents).
+
+### Fusion detection for Ransomware (Public preview)
+
+Azure Sentinel now provides new Fusion detections for possible Ransomware activities, generating incidents titled as **Multiple alerts possibly related to Ransomware activity detected**.
+
+Incidents are generated for alerts that are possibly associated with Ransomware activities, when they occur during a specific time-frame, and are associated with the Execution and Defense Evasion stages of an attack. You can use the alerts listed in the incident to analyze the techniques possibly used by attackers to compromise a host / device and to evade detection.
+
+Supported data connectors include:
+
+- [Azure Defender (Azure Security Center)](connect-defender-for-cloud.md)
+- [Microsoft Defender for Endpoint](./data-connectors-reference.md#microsoft-defender-for-endpoint)
+- [Microsoft Defender for Identity](./data-connectors-reference.md#microsoft-defender-for-identity)
+- [Microsoft Cloud App Security](./data-connectors-reference.md#microsoft-defender-for-cloud-apps)
+- [Azure Sentinel scheduled analytics rules](detect-threats-built-in.md#scheduled)
+
+For more information, see [Multiple alerts possibly related to Ransomware activity detected](fusion.md#fusion-for-ransomware).
+
+### Watchlist templates for UEBA data (Public preview)
+
+Azure Sentinel now provides built-in watchlist templates for UEBA data, which you can customize for your environment and use during investigations.
+
+After UEBA watchlists are populated with data, you can correlate that data with analytics rules, view it in the entity pages and investigation graphs as insights, create custom uses such as to track VIP or sensitive users, and more.
+
+Watchlist templates currently include:
+
+- **VIP Users**. A list of user accounts of employees that have high impact value in the organization.
+- **Terminated Employees**. A list of user accounts of employees that have been, or are about to be, terminated.
+- **Service Accounts**. A list of service accounts and their owners.
+- **Identity Correlation**. A list of related user accounts that belong to the same person.
+- **High Value Assets**. A list of devices, resources, or other assets that have critical value in the organization.
+- **Network Mapping**. A list of IP subnets and their respective organizational contexts.
+
+For more information, see [Create watchlists in Microsoft Sentinel](watchlists-create.md) and [Built-in watchlist schemas](watchlist-schemas.md).
+++
+### File Event normalization schema (Public preview)
+
+The Azure Sentinel Information Model (ASIM) now supports a File Event normalization schema, which is used to describe file activity, such as creating, modifying, or deleting files or documents. File events are reported by operating systems, file storage systems such as Azure Files, and document management systems such as Microsoft SharePoint.
+
+For more information, see:
+
+- [Azure Sentinel File Event normalization schema reference (Public preview)](file-event-normalization-schema.md)
+- [Normalization and the Azure Sentinel Information Model (ASIM)](normalization.md)
++
+### New in docs: Best practice guidance
+
+In response to multiple requests from customers and our support teams, we added a series of best practice guidance to our documentation.
+
+For more information, see:
+
+- [Prerequisites for deploying Azure Sentinel](prerequisites.md)
+- [Best practices for Azure Sentinel](best-practices.md)
+- [Azure Sentinel workspace architecture best practices](best-practices-workspace-architecture.md)
+- [Design your Azure Sentinel workspace architecture](design-your-workspace-architecture.md)
+- [Azure Sentinel sample workspace designs](sample-workspace-designs.md)
+- [Data collection best practices](best-practices-data.md)
+
+> [!TIP]
+> You can find more guidance added across our documentation in relevant conceptual and how-to articles. For more information, see [Best practice references](best-practices.md#best-practice-references).
+>
+ ## July 2021 - [Microsoft Threat Intelligence Matching Analytics (Public preview)](#microsoft-threat-intelligence-matching-analytics-public-preview)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
If you're looking for items older than six months, you'll find them in the [Arch
## August 2022
+- [Heads up: Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)](#heads-up-microsoft-365-defender-now-integrates-azure-active-directory-identity-protection-aadip)
- [Azure resource entity page (Preview)](#azure-resource-entity-page-preview) - [New data sources for User and entity behavior analytics (UEBA) (Preview)](#new-data-sources-for-user-and-entity-behavior-analytics-ueba-preview) - [Microsoft Sentinel Solution for SAP is now generally available](#microsoft-sentinel-solution-for-sap-is-now-generally-available)
+### Heads up: Microsoft 365 Defender now integrates Azure Active Directory Identity Protection (AADIP)
+
+[Microsoft 365 Defender](/microsoft-365/security/defender/) now includes the integration of [Azure Active Directory Identity Protection (AADIP)](../active-directory/identity-protection/index.yml) alerts and incidents.
+
+Microsoft Sentinel customers with the [Microsoft 365 Defender connector](microsoft-365-defender-sentinel-integration.md) enabled will automatically start receiving AADIP alerts and incidents in their Microsoft Sentinel incidents queue. Depending on your configuration, this may affect you as follows:
+
+- If you already have your AADIP connector enabled in Microsoft Sentinel, you may receive duplicate incidents. To avoid this, you have a few choices, listed here in descending order of preference:
+
+ - Disable incident creation in your AADIP data connector.
+
+ - Disable AADIP integration at the source, in your Microsoft 365 Defender portal.
+
+ - Create an automation rule in Microsoft Sentinel to automatically close incidents created by the [Microsoft Security analytics rule](create-incidents-from-alerts.md) that creates AADIP incidents.
+
+- If you don't have your AADIP connector enabled, you may receive AADIP incidents, but without any data in them. To correct this, simply [enable your AADIP connector](data-connectors-reference.md#azure-active-directory-identity-protection). Be sure **not** to enable incident creation on the connector page.
+
+- If you're first enabling your Microsoft 365 Defender connector now, the AADIP connection will be made automatically behind the scenes. You won't need to do anything else.
+ ### Azure resource entity page (Preview) Azure resources such as Azure Virtual Machines, Azure Storage Accounts, Azure Key Vault, Azure DNS, and more are essential parts of your network. Threat actors might attempt to obtain sensitive data from your storage account, gain access to your key vault and the secrets it contains, or infect your virtual machine with malware. The new [Azure resource entity page](entity-pages.md) is designed to help your SOC investigate incidents that involve Azure resources in your environment, hunt for potential attacks, and assess risk.
The new **Advanced KQL for Microsoft Sentinel** interactive workbook is designed
Accompanying the new workbook is an explanatory [blog post](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/advanced-kql-framework-workbook-empowering-you-to-become-kql/ba-p/3033766), as well as a new [introduction to Kusto Query Language](kusto-overview.md) and a [collection of learning and skilling resources](kusto-resources.md) in the Microsoft Sentinel documentation.
-## December 2021
--- [Apache Log4j Vulnerability Detection solution](#apache-log4j-vulnerability-detection-solution-public-preview)-- [IoT OT Threat Monitoring with Defender for IoT solution](#iot-ot-threat-monitoring-with-defender-for-iot-solution-public-preview)-- [Continuous Threat Monitoring for GitHub solution](#ingest-github-logs-into-your-microsoft-sentinel-workspace-public-preview)--
-### Apache Log4j Vulnerability Detection solution
-
-Remote code execution vulnerabilities related to Apache Log4j were disclosed on 9 December 2021. The vulnerability allows for unauthenticated remote code execution, and it's triggered when a specially crafted string, provided by the attacker through a variety of different input vectors, is parsed and processed by the Log4j 2 vulnerable component.
-
-The [Apache Log4J Vulnerability Detection](sentinel-solutions-catalog.md#domain-solutions) solution was added to the Microsoft Sentinel content hub to help customers monitor, detect, and investigate signals related to the exploitation of this vulnerability, using Microsoft Sentinel.
-
-For more information, see the [Microsoft Security Response Center blog](https://msrc-blog.microsoft.com/2021/12/11/microsofts-response-to-cve-2021-44228-apache-log4j2/) and [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions](sentinel-solutions-deploy.md).
-
-### IoT OT Threat Monitoring with Defender for IoT solution (Public preview)
-
-The new **IoT OT Threat Monitoring with Defender for IoT** solution available in the [Microsoft Sentinel content hub](sentinel-solutions-catalog.md#microsoft) provides further support for the Microsoft Sentinel integration with Microsoft Defender for IoT, bridging gaps between IT and OT security challenges, and empowering SOC teams with enhanced abilities to efficiently and effectively detect and respond to OT threats.
-
-For more information, see [Tutorial: Integrate Microsoft Sentinel and Microsoft Defender for IoT](iot-solution.md).
--
-### Ingest GitHub logs into your Microsoft Sentinel workspace (Public preview)
-
-Use the new [Continuous Threat Monitoring for GitHub](sentinel-solutions-catalog.md#github) solution and [data connector](data-connectors-reference.md#github-preview) to ingest your GitHub logs into your Microsoft Sentinel workspace.
-
-The **Continuous Threat Monitoring for GitHub** solution includes a data connector, relevant analytics rules, and a workbook that you can use to visualize your log data.
-
-For example, view the number of users that were added or removed from GitHub repositories, how many repositories were created, forked, or cloned, in the selected time frame.
-
-> [!NOTE]
-> The **Continuous Threat Monitoring for GitHub** solution is supported for GitHub enterprise licenses only.
->
-
-For more information, see [Centrally discover and deploy Microsoft Sentinel out-of-the-box content and solutions (Public preview)](sentinel-solutions-deploy.md) and [instructions](data-connectors-reference.md#github-preview) for installing the GitHub data connector.
-
-## November 2021
-
-- [Incident advanced search now available in GA](#incident-advanced-search-now-available-in-ga)
-- [Amazon Web Services S3 connector now available (Public preview)](#amazon-web-services-s3-connector-now-available-public-preview)
-- [Windows Forwarded Events connector now available (Public preview)](#windows-forwarded-events-connector-now-available-public-preview)
-- [Near-real-time (NRT) threat detection rules now available (Public preview)](#near-real-time-nrt-threat-detection-rules-now-available-public-preview)
-- [Fusion engine now detects emerging and unknown threats (Public preview)](#fusion-engine-now-detects-emerging-and-unknown-threats-public-preview)
-- [Fine-tuning recommendations for your analytics rules (Public preview)](#get-fine-tuning-recommendations-for-your-analytics-rules-public-preview)
-- [Free trial updates](#free-trial-updates)
-- [Content hub and new solutions (Public preview)](#content-hub-and-new-solutions-public-preview)
-- [Continuous deployment from your content repositories (Public preview)](#enable-continuous-deployment-from-your-content-repositories-public-preview)
-- [Enriched threat intelligence with Geolocation and WhoIs data (Public preview)](#enriched-threat-intelligence-with-geolocation-and-whois-data-public-preview)
-- [Use notebooks with Azure Synapse Analytics in Microsoft Sentinel (Public preview)](#use-notebooks-with-azure-synapse-analytics-in-microsoft-sentinel-public-preview)
-- [Enhanced Notebooks area in Microsoft Sentinel](#enhanced-notebooks-area-in-microsoft-sentinel)
-- [Microsoft Sentinel renaming](#microsoft-sentinel-renaming)
-- [Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel](#deploy-and-monitor-azure-key-vault-honeytokens-with-microsoft-sentinel)
-
-### Incident advanced search now available in GA
-
-Searching for incidents using the advanced search functionality is now generally available.
-
-The advanced incident search provides the ability to search across more data, including alert details, descriptions, entities, tactics, and more.
-
-For more information, see [Search for incidents](investigate-cases.md#search-for-incidents).
-
-### Amazon Web Services S3 connector now available (Public preview)
-
-You can now connect Microsoft Sentinel to your Amazon Web Services (AWS) S3 storage bucket, in order to ingest logs from a variety of AWS services.
-
-For now, you can use this connection to ingest VPC Flow Logs and GuardDuty findings, as well as AWS CloudTrail.
-
-For more information, see [Connect Microsoft Sentinel to S3 Buckets to get Amazon Web Services (AWS) data](connect-aws.md).
-
-### Windows Forwarded Events connector now available (Public preview)
-
-You can now stream event logs from Windows Servers connected to your Microsoft Sentinel workspace using Windows Event Collection / Windows Event Forwarding (WEC / WEF), thanks to this new data connector. The connector uses the new Azure Monitor Agent (AMA), which provides a number of advantages over the legacy Log Analytics agent (also known as the MMA):
-
-- **Scalability:** If you've enabled Windows Event Collection (WEC), you can install the Azure Monitor Agent (AMA) on the WEC machine to collect logs from many servers with a single connection point.
-
-- **Speed:** The AMA can send data at an improved rate of 5 K EPS, allowing for faster data refresh.
-
-- **Efficiency:** The AMA allows you to design complex Data Collection Rules (DCR) to filter the logs at their source, choosing the exact events to stream to your workspace. DCRs help lower your network traffic and your ingestion costs by leaving out undesired events.
-
-- **Coverage:** WEC / WEF enables the collection of Windows Event logs from legacy (on-premises and physical) servers and also from high-usage or sensitive machines, such as domain controllers, where installing an agent is undesired.
-
-We recommend using this connector with the [Microsoft Sentinel Information Model (ASIM)](normalization.md) parsers installed to ensure full support for data normalization.
-
-Learn more about the [Windows Forwarded Events connector](data-connectors-reference.md#windows-forwarded-events-preview).
-
-### Near-real-time (NRT) threat detection rules now available (Public preview)
-
-When you're faced with security threats, time and speed are of the essence. You need to be aware of threats as they materialize so you can analyze and respond quickly to contain them. Microsoft Sentinel's near-real-time (NRT) analytics rules offer you faster threat detection - closer to that of an on-premises SIEM - and the ability to shorten response times in specific scenarios.
-
-Microsoft Sentinel's [near-real-time analytics rules](detect-threats-built-in.md#nrt) provide up-to-the-minute threat detection out-of-the-box. This type of rule was designed to be highly responsive by running its query at intervals just one minute apart.
-
-Learn more about [NRT rules](near-real-time-rules.md) and [how to use them](create-nrt-rules.md).
-
-### Fusion engine now detects emerging and unknown threats (Public preview)
-
-In addition to detecting attacks based on [predefined scenarios](fusion-scenario-reference.md), Microsoft Sentinel's ML-powered Fusion engine can help you find the emerging and unknown threats in your environment by applying extended ML analysis and by correlating a broader scope of anomalous signals, while keeping the alert fatigue low.
-
-The Fusion engine's ML algorithms constantly learn from existing attacks and apply analysis based on how security analysts think. It can therefore discover previously undetected threats from millions of anomalous behaviors across the kill-chain throughout your environment, which helps you stay one step ahead of the attackers.
-
-Learn more about [Fusion for emerging threats](fusion.md#fusion-for-emerging-threats).
-
-Also, the [Fusion analytics rule is now more configurable](configure-fusion-rules.md), reflecting its increased functionality.
-
-### Get fine-tuning recommendations for your analytics rules (Public preview)
-
-Fine-tuning threat detection rules in your SIEM can be a difficult, delicate, and continuous process of balancing between maximizing your threat detection coverage and minimizing false positive rates. Microsoft Sentinel simplifies and streamlines this process by using machine learning to analyze billions of signals from your data sources as well as your responses to incidents over time, deducing patterns and providing you with actionable recommendations and insights that can significantly lower your tuning overhead and allow you to focus on detecting and responding to actual threats.
-
-[Tuning recommendations and insights](detection-tuning.md) are now built in to your analytics rules.
-
-### Free trial updates
-
-Microsoft Sentinel's free trial continues to support new or existing Log Analytics workspaces at no additional cost for the first 31 days.
-
-We're evolving our free trial experience to include the following updates:
-
-- **New Log Analytics workspaces** can ingest up to 10 GB/day of log data for the first 31 days at no cost. New workspaces include workspaces that are less than three days old.
-
- Both Log Analytics data ingestion and Microsoft Sentinel charges are waived during the 31-day trial period. This free trial is subject to a 20-workspace limit per Azure tenant.
-
-- **Existing Log Analytics workspaces** can enable Microsoft Sentinel at no additional cost. Existing workspaces include any workspaces created more than three days ago.
-
- Only the Microsoft Sentinel charges are waived during the 31-day trial period.
-
-Usage beyond these limits will be charged per the pricing listed on the [Microsoft Sentinel pricing](https://azure.microsoft.com/pricing/details/azure-sentinel) page. Charges related to additional capabilities for [automation](automation.md) and [bring your own machine learning](bring-your-own-ml.md) are still applicable during the free trial.
-
-> [!TIP]
-> During your free trial, find resources for cost management, training, and more on the **News & guides > Free trial** tab in Microsoft Sentinel. This tab also displays details about the dates of your free trial, and how many days you have left until it expires.
->
-
-For more information, see [Plan and manage costs for Microsoft Sentinel](billing.md).
-
-### Content hub and new solutions (Public preview)
-
-Microsoft Sentinel now provides a **Content hub**, a centralized location to find and deploy Microsoft Sentinel out-of-the-box (built-in) content and solutions to your Microsoft Sentinel workspace. Find the content you need by filtering for content type, support models, categories and more, or use the powerful text search.
-
-Under **Content management**, select **Content hub**. Select a solution to view more details on the right, and then click **Install** to install it in your workspace.
--
-The following list includes highlights of new, out-of-the-box solutions added to the Content hub:
-
- :::column span="":::
- - Microsoft Sentinel Training Lab
- - Cisco ASA
- - Cisco Duo Security
- - Cisco Meraki
- - Cisco StealthWatch
- - Digital Guardian
 - Dynamics 365
- - GCP Cloud DNS
- :::column-end:::
- :::column span="":::
- - GCP CloudMonitor
- - GCP Identity and Access Management
- - FalconForce
- - FireEye NX
- - Flare Systems Firework
- - Forescout
- - Fortinet Fortigate
 - Imperva Cloud WAF
- :::column-end:::
- :::column span="":::
- - Insider Risk Management (IRM)
- - IronNet CyberSecurity Iron Defense
- - Lookout
- - McAfee Network Security Platform
- - Microsoft MITRE ATT&CK Solution for Cloud
- - Palo Alto PAN-OS
- :::column-end:::
- :::column span="":::
- - Rapid7 Nexpose / Insight VM
- - ReversingLabs
- - RSA SecurID
- - Semperis
- - Tenable Nessus Scanner
- - Vectra Stream
- - Zero Trust
- :::column-end:::
-
-For more information, see:
-
-- [Learn about Microsoft Sentinel solutions](sentinel-solutions.md)
-- [Discover and deploy Microsoft Sentinel solutions](sentinel-solutions-deploy.md)
-- [Microsoft Sentinel solutions catalog](sentinel-solutions-catalog.md)
-
-### Enable continuous deployment from your content repositories (Public preview)
-
-The new Microsoft Sentinel **Repositories** page provides the ability to manage and deploy your custom content from GitHub or Azure DevOps repositories, as an alternative to managing them in the Azure portal. This capability introduces a more streamlined and automated approach for managing and deploying content across Microsoft Sentinel workspaces.
-
-If you store your custom content in an external repository in order to maintain it outside of Microsoft Sentinel, now you can connect that repository to your Microsoft Sentinel workspace. Content you add, create, or edit in your repository is automatically deployed to your Microsoft Sentinel workspaces, and will be visible from the various Microsoft Sentinel galleries, such as the **Analytics**, **Hunting**, or **Workbooks** pages.
-
-For more information, see [Deploy custom content from your repository](ci-cd.md).
-
-### Enriched threat intelligence with Geolocation and WhoIs data (Public preview)
-
-Now, any threat intelligence data that you bring in to Microsoft Sentinel via data connectors and logic app playbooks, or create in Microsoft Sentinel, is automatically enriched with GeoLocation and WhoIs information.
-
-GeoLocation and WhoIs data can provide more context for investigations where the selected indicator of compromise (IOC) is found.
-
-For example, use GeoLocation data to find details like *Organization* or *Country* for the indicator, and WhoIs data to find details like *Registrar* and *Record creation* dates.
-
-You can view GeoLocation and WhoIs data on the **Threat Intelligence** pane for each indicator of compromise that you've imported into Microsoft Sentinel. Details for the indicator are shown on the right, including any Geolocation and WhoIs data available.
-
-For example:
--
-> [!TIP]
-> The Geolocation and WhoIs information come from the Microsoft Threat Intelligence service, which you can also access via API. For more information, see [Enrich entities with geolocation data via API](geolocation-data-api.md).
->
-
-For more information, see:
-
-- [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md)
-- [Understand threat intelligence integrations](threat-intelligence-integration.md)
-- [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md)
-- [Connect threat intelligence platforms](connect-threat-intelligence-tip.md)
-
-### Use notebooks with Azure Synapse Analytics in Microsoft Sentinel (Public preview)
-
-Microsoft Sentinel now integrates Jupyter notebooks with Azure Synapse for large-scale security analytics scenarios.
-
-Until now, Jupyter notebooks in Microsoft Sentinel have been integrated with Azure Machine Learning. This functionality supports users who want to incorporate notebooks, popular open-source machine learning toolkits, and libraries such as TensorFlow, as well as their own custom models, into security workflows.
-
-The new Azure Synapse integration provides extra analytic horsepower, such as:
-
-- **Security big data analytics**, using cost-optimized, fully managed Azure Synapse Apache Spark compute pool.
-
-- **Cost-effective Data Lake access** to build analytics on historical data via Azure Data Lake Storage Gen2, which is a set of capabilities dedicated to big data analytics, built on top of Azure Blob Storage.
-
-- **Flexibility to integrate data sources** into security operation workflows from multiple sources and formats.
-
-- **PySpark, a Python-based API** for using the Spark framework in combination with Python, reducing the need to learn a new programming language if you're already familiar with Python.
-
-To support this integration, we added the ability to create and launch an Azure Synapse workspace directly from Microsoft Sentinel. We also added new, sample notebooks to guide you through configuring the Azure Synapse environment, setting up a continuous data export pipeline from Log Analytics into Azure Data Lake Storage, and then hunting on that data at scale.
-
-For more information, see [Integrate notebooks with Azure Synapse](notebooks-with-synapse.md).
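For illustration, here's a minimal PySpark sketch of the kind of at-scale hunt described above. It assumes the continuous export pipeline lands `SecurityEvent` records as JSON files under a hypothetical Azure Data Lake Storage Gen2 path; the storage account, container, path layout, and field types are placeholders to adjust for your environment.

```python
# Minimal hunting sketch over Log Analytics data exported to ADLS Gen2, intended to run
# in a Synapse Spark pool. The storage account, container, and path are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sentinel-synapse-hunt").getOrCreate()

# Read the exported SecurityEvent JSON files (path layout depends on your export pipeline).
events = spark.read.json(
    "abfss://<container>@<storage-account>.dfs.core.windows.net/SecurityEvent/*/*.json"
)

# Example hunt: failed sign-ins (EventID 4625) per account across the exported data.
failed_logons = (
    events.filter(F.col("EventID") == 4625)  # EventID may be exported as a string; cast if needed
          .groupBy("Account")
          .count()
          .orderBy(F.col("count").desc())
)
failed_logons.show(20, truncate=False)
```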
-
-### Enhanced Notebooks area in Microsoft Sentinel
-
-The **Notebooks** area in Microsoft Sentinel also now has an **Overview** tab, where you can find basic information about notebooks, and a new **Notebook types** column in the **Templates** tab to indicate the type of each notebook displayed. For example, notebooks might have types of **Getting started**, **Configuration**, **Hunting**, and now **Synapse**.
-
-For example:
--
-For more information, see [Use Jupyter notebooks to hunt for security threats](notebooks.md).
-
-### Microsoft Sentinel renaming
-
-Starting in November 2021, Azure Sentinel is being renamed to Microsoft Sentinel, and you'll see upcoming updates in the portal, documentation, and other resources in parallel.
-
-Earlier entries in this article and the older [Archive for What's new in Sentinel](whats-new-archive.md) continue to use the name *Azure* Sentinel, as that was the service name when those features were new.
-
-For more information, see our [blog on recent security enhancements](https://aka.ms/secblg11).
-
-### Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel
-
-The new **Microsoft Sentinel Deception** solution helps you watch for malicious activity in your key vaults by helping you to deploy decoy keys and secrets, called *honeytokens*, to selected Azure key vaults.
-
-Once deployed, any access or operation with the honeytoken keys and secrets generate incidents that you can investigate in Microsoft Sentinel.
-
-Since there's no reason to actually use honeytoken keys and secrets, any similar activity in your workspace may be malicious and should be investigated.
-
-The **Microsoft Sentinel Deception** solution includes a workbook to help you deploy the honeytokens, either at scale or one at a time, watchlists to track the honeytokens created, and analytics rules to generate incidents as needed.
-
-For more information, see [Deploy and monitor Azure Key Vault honeytokens with Microsoft Sentinel (Public preview)](monitor-key-vault-honeytokens.md).
-
-## October 2021
-
-- [Windows Security Events connector using Azure Monitor Agent now in GA](#windows-security-events-connector-using-azure-monitor-agent-now-in-ga)
-- [Defender for Office 365 events now available in the Microsoft 365 Defender connector (Public preview)](#defender-for-office-365-events-now-available-in-the-microsoft-365-defender-connector-public-preview)
-- [Playbook templates and gallery now available (Public preview)](#playbook-templates-and-gallery-now-available-public-preview)
-- [Template versioning for your scheduled analytics rules (Public preview)](#manage-template-versions-for-your-scheduled-analytics-rules-public-preview)
-- [DHCP normalization schema (Public preview)](#dhcp-normalization-schema-public-preview)
-
-### Windows Security Events connector using Azure Monitor Agent now in GA
-
-The new version of the Windows Security Events connector, based on the Azure Monitor Agent, is now generally available. For more information, see [Connect to Windows servers to collect security events](connect-windows-security-events.md?tabs=AMA).
-
-### Defender for Office 365 events now available in the Microsoft 365 Defender connector (Public preview)
-
-In addition to those from Microsoft Defender for Endpoint, you can now ingest raw [advanced hunting events](/microsoft-365/security/defender/advanced-hunting-overview) from [Microsoft Defender for Office 365](/microsoft-365/security/office-365-security/overview) through the [Microsoft 365 Defender connector](connect-microsoft-365-defender.md). [Learn more](microsoft-365-defender-sentinel-integration.md#advanced-hunting-event-collection).
-
-### Playbook templates and gallery now available (Public preview)
-
-A playbook template is a pre-built, tested, and ready-to-use workflow that can be customized to meet your needs. Templates can also serve as a reference for best practices when developing playbooks from scratch, or as inspiration for new automation scenarios.
-
-Playbook templates have been developed by the Sentinel community, independent software vendors (ISVs), and Microsoft's own experts, and you can find them in the **Playbook templates** tab (under **Automation**), as part of an [Azure Sentinel solution](sentinel-solutions.md), or in the [Azure Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks).
-
-For more information, see [Create and customize playbooks from built-in templates](use-playbook-templates.md).
-
-### Manage template versions for your scheduled analytics rules (Public preview)
-
-When you create analytics rules from [built-in Azure Sentinel rule templates](detect-threats-built-in.md), you effectively create a copy of the template. Past that point, the active rule is ***not*** dynamically updated to match any changes that get made to the originating template.
-
-However, rules created from templates ***do*** remember which templates they came from, which allows you two advantages:
-
-- If you made changes to a rule when creating it from a template (or at any time after that), you can always revert the rule back to its original version (as a copy of the template).
-
-- If a template is updated, you'll be notified and you can choose to update your rules to the new version of their templates, or leave them as they are.
-
-[Learn how to manage these tasks](manage-analytics-rule-templates.md), and what to keep in mind. These procedures apply to any [Scheduled](detect-threats-built-in.md#scheduled) analytics rules created from templates.
-
-### DHCP normalization schema (Public preview)
-
-The Advanced Security Information Model (ASIM) now supports a DHCP normalization schema, which describes events reported by a DHCP server and is used by Azure Sentinel to enable source-agnostic analytics.
-
-Events described in the DHCP normalization schema include DHCP IP address lease requests served for client systems and updates to a DNS server with the leases granted.
-
-For more information, see:
-
-- [Azure Sentinel DHCP normalization schema reference (Public preview)](dhcp-normalization-schema.md)
-- [Normalization and the Azure Sentinel Information Model (ASIM)](normalization.md)
-
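As an illustration of querying normalized events, here's a minimal Python sketch using the azure-monitor-query SDK. The ASIM unifying parser name (`_Im_Dhcp`), the `SrcIpAddr` field, and the workspace ID environment variable are assumptions; verify the exact names against the DHCP normalization schema reference and your workspace.

```python
# Minimal sketch: query DHCP events through an assumed ASIM unifying parser (_Im_Dhcp)
# with the azure-monitor-query SDK. Parser and field names are assumptions to verify.
import os
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
_Im_Dhcp
| summarize Requests = count() by SrcIpAddr
| top 10 by Requests
"""

response = client.query_workspace(
    workspace_id=os.environ["LOG_ANALYTICS_WORKSPACE_ID"],  # hypothetical variable name
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```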
-## September 2021
-
-- [Data connector health enhancements (Public preview)](#data-connector-health-enhancements-public-preview)
-- [New in docs: scaling data connector documentation](#new-in-docs-scaling-data-connector-documentation)
-- [Azure Storage account connector changes](#azure-storage-account-connector-changes)
-
-### Data connector health enhancements (Public preview)
-
-Azure Sentinel now provides the ability to enhance your data connector health monitoring with a new *SentinelHealth* table. The *SentinelHealth* table is created after you [turn on the Azure Sentinel health feature](monitor-sentinel-health.md) in your Azure Sentinel workspace, at the first success or failure health event generated.
-
-For more information, see [Monitor the health of your data connectors with this Azure Sentinel workbook](monitor-data-connector-health.md).
-
-> [!NOTE]
-> The *SentinelHealth* data table is currently supported only for selected data connectors. For more information, see [Supported data connectors](monitor-data-connector-health.md#supported-data-connectors).
->
--
-### New in docs: scaling data connector documentation
-
-As we continue to add more and more built-in data connectors for Azure Sentinel, we reorganized our data connector documentation to reflect this scaling.
-
-For most data connectors, we replaced full articles that describe an individual connector with a series of generic procedures and a full reference of all currently supported connectors.
-
-Check the [Azure Sentinel data connectors reference](data-connectors-reference.md) for details about your connector, including references to the relevant generic procedure, as well as extra information and configurations required.
-
-For more information, see:
-
-- **Conceptual information**: [Connect data sources](connect-data-sources.md)
-
-- **Generic how-to articles**:
-
- - [Connect to Azure, Windows, Microsoft, and Amazon services](connect-azure-windows-microsoft-services.md)
- - [Connect your data source to the Azure Sentinel Data Collector API to ingest data](connect-rest-api-template.md)
- - [Get CEF-formatted logs from your device or appliance into Azure Sentinel](connect-common-event-format.md)
- - [Collect data from Linux-based sources using Syslog](connect-syslog.md)
- - [Collect data in custom log formats to Azure Sentinel with the Log Analytics agent](connect-custom-logs.md)
- - [Use Azure Functions to connect your data source to Azure Sentinel](connect-azure-functions-template.md)
- - [Resources for creating Azure Sentinel custom connectors](create-custom-connector.md)
-
-### Azure Storage account connector changes
-
-Due to some changes made within the Azure Storage account resource configuration itself, the connector also needs to be reconfigured.
-The storage account (parent) resource has within it other (child) resources for each type of storage: files, tables, queues, and blobs.
-
-When configuring diagnostics for a storage account, you must select and configure, in turn:
-- The parent account resource, exporting the **Transaction** metric.
-- Each of the child storage-type resources, exporting all the logs and metrics (see the table above).
-
-You'll only see the storage types that you actually have defined resources for.
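As a sketch of that configuration, the following Python example uses azure-mgmt-monitor to export the **Transaction** metric on the parent account and all logs plus metrics on the blob child resource. All resource IDs, names, and the set of log categories are placeholders or assumptions; categories can differ per storage type, and you would repeat the child call for file, table, and queue services you actually use.

```python
# Sketch: configure diagnostic settings for a storage account and its blob child resource.
# Resource IDs, setting names, and log categories below are placeholders/assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import DiagnosticSettingsResource, MetricSettings, LogSettings

subscription_id = "<subscription-id>"
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

account_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)
workspace_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
)

# Parent account: export the Transaction metric only.
client.diagnostic_settings.create_or_update(
    resource_uri=account_id,
    name="sentinel",
    parameters=DiagnosticSettingsResource(
        workspace_id=workspace_id,
        metrics=[MetricSettings(category="Transaction", enabled=True)],
    ),
)

# Blob child resource: export logs and metrics; repeat for other storage types you use.
client.diagnostic_settings.create_or_update(
    resource_uri=f"{account_id}/blobServices/default",
    name="sentinel",
    parameters=DiagnosticSettingsResource(
        workspace_id=workspace_id,
        logs=[
            LogSettings(category="StorageRead", enabled=True),
            LogSettings(category="StorageWrite", enabled=True),
            LogSettings(category="StorageDelete", enabled=True),
        ],
        metrics=[MetricSettings(category="Transaction", enabled=True)],
    ),
)
```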
--
-## August 2021
-
-- [Advanced incident search (Public preview)](#advanced-incident-search-public-preview)
-- [Fusion detection for Ransomware (Public preview)](#fusion-detection-for-ransomware-public-preview)
-- [Watchlist templates for UEBA data](#watchlist-templates-for-ueba-data-public-preview)
-- [File event normalization schema (Public preview)](#file-event-normalization-schema-public-preview)
-- [New in docs: Best practice guidance](#new-in-docs-best-practice-guidance)
-
-### Advanced incident search (Public preview)
-
-By default, incident searches run across the **Incident ID**, **Title**, **Tags**, **Owner**, and **Product name** values only. Azure Sentinel now provides [advanced search options](investigate-cases.md#search-for-incidents) to search across more data, including alert details, descriptions, entities, tactics, and more.
-
-For example:
--
-For more information, see [Search for incidents](investigate-cases.md#search-for-incidents).
-
-### Fusion detection for Ransomware (Public preview)
-
-Azure Sentinel now provides new Fusion detections for possible Ransomware activities, generating incidents titled as **Multiple alerts possibly related to Ransomware activity detected**.
-
-Incidents are generated for alerts that are possibly associated with Ransomware activities, when they occur during a specific time-frame, and are associated with the Execution and Defense Evasion stages of an attack. You can use the alerts listed in the incident to analyze the techniques possibly used by attackers to compromise a host / device and to evade detection.
-
-Supported data connectors include:
-
-- [Azure Defender (Azure Security Center)](connect-defender-for-cloud.md)
-- [Microsoft Defender for Endpoint](./data-connectors-reference.md#microsoft-defender-for-endpoint)
-- [Microsoft Defender for Identity](./data-connectors-reference.md#microsoft-defender-for-identity)
-- [Microsoft Cloud App Security](./data-connectors-reference.md#microsoft-defender-for-cloud-apps)
-- [Azure Sentinel scheduled analytics rules](detect-threats-built-in.md#scheduled)
-
-For more information, see [Multiple alerts possibly related to Ransomware activity detected](fusion.md#fusion-for-ransomware).
-
-### Watchlist templates for UEBA data (Public preview)
-
-Azure Sentinel now provides built-in watchlist templates for UEBA data, which you can customize for your environment and use during investigations.
-
-After UEBA watchlists are populated with data, you can correlate that data with analytics rules, view it in the entity pages and investigation graphs as insights, create custom uses such as to track VIP or sensitive users, and more.
-
-Watchlist templates currently include:
-
-- **VIP Users**. A list of user accounts of employees that have high impact value in the organization.
-- **Terminated Employees**. A list of user accounts of employees that have been, or are about to be, terminated.
-- **Service Accounts**. A list of service accounts and their owners.
-- **Identity Correlation**. A list of related user accounts that belong to the same person.
-- **High Value Assets**. A list of devices, resources, or other assets that have critical value in the organization.
-- **Network Mapping**. A list of IP subnets and their respective organizational contexts.
-
-For more information, see [Create watchlists in Microsoft Sentinel](watchlists-create.md) and [Built-in watchlist schemas](watchlist-schemas.md).
---
-### File Event normalization schema (Public preview)
-
-The Azure Sentinel Information Model (ASIM) now supports a File Event normalization schema, which is used to describe file activity, such as creating, modifying, or deleting files or documents. File events are reported by operating systems, file storage systems such as Azure Files, and document management systems such as Microsoft SharePoint.
-
-For more information, see:
-
-- [Azure Sentinel File Event normalization schema reference (Public preview)](file-event-normalization-schema.md)
-- [Normalization and the Azure Sentinel Information Model (ASIM)](normalization.md)
-
-### New in docs: Best practice guidance
-
-In response to multiple requests from customers and our support teams, we added a series of best practice guidance to our documentation.
-
-For more information, see:
-
-- [Prerequisites for deploying Azure Sentinel](prerequisites.md)
-- [Best practices for Azure Sentinel](best-practices.md)
-- [Azure Sentinel workspace architecture best practices](best-practices-workspace-architecture.md)
-- [Design your Azure Sentinel workspace architecture](design-your-workspace-architecture.md)
-- [Azure Sentinel sample workspace designs](sample-workspace-designs.md)
-- [Data collection best practices](best-practices-data.md)
-
-> [!TIP]
-> You can find more guidance added across our documentation in relevant conceptual and how-to articles. For more information, see [Best practice references](best-practices.md#best-practice-references).
->
 ## Next steps

> [!div class="nextstepaction"]
service-bus-messaging Service Bus Federation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-federation-overview.md
With the Azure Functions consumption plan, the prebuilt triggers can even scale
In contrast to all of this, most common replication engines for messaging and eventing, such as Apache Kafka's [MirrorMaker](http://kafka.apache.org/documentation/#basic_ops_mirror_maker) require you to provide a hosting environment and scale the replication engine yourself. That includes configuring and integrating the security and networking features and facilitating the flow of monitoring data, and then you still don't have an opportunity to inject custom replication tasks into the flow.
+### Replication tasks with Azure Logic Apps
+
+A no-code alternative to replicating with Azure Functions is to use [Logic Apps](../logic-apps/logic-apps-overview.md). Logic Apps provide [predefined replication tasks](../logic-apps/create-replication-tasks-azure-resources.md) for Service Bus that help you set up replication between different instances and can be adjusted for further customization.
+
+## Next Steps
+
+In this article, we explored a range of federation patterns and explained the role of Azure Functions as the event and messaging replication runtime in Azure.
Next, you might want to read up how to set up a replicator application with Azur
- [Routing events to Azure Event Hubs](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/ServiceBusCopyToEventHub)
- [Acquire events from Azure Event Hubs](https://github.com/Azure-Samples/azure-messaging-replication-dotnet/tree/main/functions/config/EventHubCopyToServiceBus)
-[1]: ./media/service-bus-auto-forwarding/IC628632.gif
+[1]: ./media/service-bus-auto-forwarding/IC628632.gif
service-connector How To Integrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-app-configuration.md
Supported authentication and clients for App Service, Container Apps and Azure S
### [Azure Spring Apps](#tab/spring-apps)
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|-|::|::|::|::|
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|-|::|::|::|::|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
 Use the connection details below to connect compute services to Azure App Configuration stores. For each example below, replace the placeholder texts `<App-Configuration-name>`, `<ID>`, `<secret>`, `<client-ID>`, `<client-secret>`, and `<tenant-ID>` with your App Configuration store name, ID, secret, client ID, client secret and tenant ID.
-### .NET, Java, Node.JS, Python
-
-#### Secret / connection string
+### Secret / connection string
> [!div class="mx-tdBreakAll"]
> | Default environment variable name | Description | Sample value |
> |--|--|--|
> | AZURE_APPCONFIGURATION_CONNECTIONSTRING | Your App Configuration Connection String | `Endpoint=https://<App-Configuration-name>.azconfig.io;Id=<ID>;Secret=<secret>` |
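For example, a minimal Python sketch (one client among the supported languages) that reads a setting by using this connection string; the key name is a placeholder.

```python
# Minimal sketch: read one App Configuration setting with the connection string that
# Service Connector injects. The key name is a placeholder.
import os
from azure.appconfiguration import AzureAppConfigurationClient

client = AzureAppConfigurationClient.from_connection_string(
    os.environ["AZURE_APPCONFIGURATION_CONNECTIONSTRING"]
)

setting = client.get_configuration_setting(key="<your-key>")
print(setting.key, setting.value)
```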
-#### System-assigned managed identity
+### System-assigned managed identity
| Default environment variable name | Description | Sample value |
|--|--|--|
| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration endpoint | `https://<App-Configuration-name>.azconfig.io` |
-#### User-assigned managed identity
+### User-assigned managed identity
| Default environment variable name | Description | Sample value |
|--|--|--|
| AZURE_APPCONFIGURATION_ENDPOINT | App Configuration Endpoint | `https://<App-Configuration-name>.azconfig.io` |
| AZURE_APPCONFIGURATION_CLIENTID | Your client ID | `<client-ID>` |
-#### Service principal
+### Service principal
| Default environment variable name | Description | Sample value | |-|-|-|
service-connector How To Integrate Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-confluent-kafka.md
This page shows the supported authentication types and client types of Apache ka
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
|--|--|--|--|--|
| .NET | | | ![yes icon](./media/green-check.png) | |
-| Go | | | ![yes icon](./media/green-check.png) | |
| Java | | | ![yes icon](./media/green-check.png) | |
| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
| Node.js | | | ![yes icon](./media/green-check.png) | |
| Python | | | ![yes icon](./media/green-check.png) | |
-| Ruby | | | ![yes icon](./media/green-check.png) | |
| None | | | ![yes icon](./media/green-check.png) | |
-### [Azure Container Apps](#tab/container-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|-|--|--|-|
-| .NET | | | ![yes icon](./media/green-check.png) | |
-| Go | | | ![yes icon](./media/green-check.png) | |
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | | | ![yes icon](./media/green-check.png) | |
-| Python | | | ![yes icon](./media/green-check.png) | |
-| Ruby | | | ![yes icon](./media/green-check.png) | |
-| None | | | ![yes icon](./media/green-check.png) | |
-
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|-|--|--|-|
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
---

## Default environment variable names or application properties

Use the connection details below to connect compute services to Kafka. For each example below, replace the placeholder texts `<server-name>`, `<Bootstrap-server-key>`, `<Bootstrap-server-secret>`, `<schema-registry-key>`, and `<schema-registry-secret>` with your server name, Bootstrap server key, Bootstrap server secret, schema registry key, and schema registry secret.
-### .NET, Java, Node.JS and Python
+### Azure App Service and Azure Container Apps
| Default environment variable name | Description | Example value |
|--|--|--|
Use the connection details below to connect compute services to Kafka. For each
| AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_URL | Your Confluent registry URL | `https://psrc-<server-name>.westus2.azure.confluent.cloud` |
| AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_USERINFO | Your Confluent registry user information | `<schema-registry-key>:<schema-registry-secret>` |
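As an illustration, a minimal Python sketch that builds a Confluent Schema Registry client from these two variables (the Kafka bootstrap connection uses its own, separate variables not shown here).

```python
# Minimal sketch: create a Schema Registry client from the variables above
# (confluent-kafka Python package) and list subjects as a connectivity check.
import os
from confluent_kafka.schema_registry import SchemaRegistryClient

schema_registry = SchemaRegistryClient({
    "url": os.environ["AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_URL"],
    "basic.auth.user.info": os.environ["AZURE_CONFLUENTCLOUDSCHEMAREGISTRY_USERINFO"],
})

print(schema_registry.get_subjects())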
-### Spring Boot
+### Azure Spring Apps
| Default environment variable name | Description | Example value | |--||--|
service-connector How To Integrate Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-cosmos-db.md
Title: Integrate the Azure Cosmos DB Mongo API with Service Connector
-description: Integrate the Azure Cosmos DB Mongo API into your application with Service Connector
+ Title: Integrate the Azure Cosmos DB MongoDB API with Service Connector
+description: Integrate the Azure Cosmos DB MongoDB API into your application with Service Connector
Last updated 08/11/2022
-# Integrate the Azure Cosmos DB Mondo API with Service Connector
+# Integrate the Azure Cosmos DB API for MongoDB with Service Connector
-This page shows the supported authentication types and client types for the Azure Cosmos DB Mongo API using Service Connector. You might still be able to connect to Azure Cosmos DB in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
+This page shows the supported authentication types and client types for the Azure Cosmos DB API for MongoDB using Service Connector. You might still be able to connect to the Azure Cosmos DB API for MongoDB in other programming languages without using Service Connector. This page also shows default environment variable names and values (or Spring Boot configuration) you get when you create the service connection. You can learn more about [Service Connector environment variable naming convention](concept-service-connector-internals.md).
## Supported compute services
Supported authentication and clients for App Service, Container Apps and Azure S
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
Use the connection details below to connect compute services to Cosmos DB. For each example below, replace the placeholder texts `<mongo-db-admin-user>`, `<password>`, `<mongo-db-server>`, `<subscription-ID>`, `<resource-group-name>`, `<database-server>`, `<client-secret>`, and `<tenant-id>` with your Mongo DB Admin username, password, Mongo DB server, subscription ID, resource group name, database server, client secret and tenant ID.
-### Secret / Connection string
+### Azure App Service and Azure Container Apps
+
+#### Secret / Connection string
| Default environment variable name | Description | Example value | |--|--|-|
-| AZURE_COSMOS_CONNECTIONSTRING | Mongo DB in Cosmos DB connection string | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` |
+| AZURE_COSMOS_CONNECTIONSTRING | Cosmos DB MongoDB API connection string | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` |
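For example, a minimal Python (pymongo) sketch using this connection string; the database and collection names are placeholders.

```python
# Minimal sketch: connect to the Cosmos DB API for MongoDB with the connection string
# that Service Connector injects. Database and collection names are placeholders.
import os
from pymongo import MongoClient

client = MongoClient(os.environ["AZURE_COSMOS_CONNECTIONSTRING"])
collection = client["<database-name>"]["<collection-name>"]
print(collection.estimated_document_count())
```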
-### System-assigned managed identity
+#### System-assigned managed identity
| Default environment variable name | Description | Example value | |--|--|--|
Use the connection details below to connect compute services to Cosmos DB. For e
| AZURE_COSMOS_SCOPE | Your managed identity scope | `https://management.azure.com/.default` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
-### User-assigned managed identity
+#### User-assigned managed identity
| Default environment variable name | Description | Example value | |--|--|--|
Use the connection details below to connect compute services to Cosmos DB. For e
| AZURE_COSMOS_SUBSCRIPTIONID | Your subscription ID | `<subscription-ID>` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
-### Service principal
+#### Service principal
| Default environment variable name | Description | Example value | |--|--|--|
Use the connection details below to connect compute services to Cosmos DB. For e
| AZURE_COSMOS_SUBSCRIPTIONID | Your subscription ID | `<subscription-ID>` | | AZURE_COSMOS_RESOURCEENDPOINT | Your resource endpoint | `https://<database-server>.documents.azure.com:443/` |
+### Azure Spring Apps
+
+| Default environment variable name | Description | Example value |
+|--|-|-|
+| spring.data.mongodb.database | Your database | `<database-name>` |
+| spring.data.mongodb.uri | Your database URI | `mongodb://<mongo-db-admin-user>:<password>@<mongo-db-server>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<mongo-db-server>@` |
+ ## Next steps Follow the tutorials listed below to learn more about Service Connector.
service-connector How To Integrate Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-event-hubs.md
Supported authentication and clients for App Service, Container Apps and Azure S
### [Azure Spring Apps](#tab/spring-apps)
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-||::|::|::|::|
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Kafka - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+||::|::|::|::|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Kafka - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
Use the connection details below to connect compute services to Event Hubs. For each example below, replace the placeholder texts `<Event-Hubs-namespace>`, `<access-key-name>`, `<access-key-value>` `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your Event Hubs namespace, shared access key name, shared access key value, client ID, client secret and tenant ID.
-### .NET, Java, Node.JS, Python
+### Azure App Service and Azure Container Apps
#### Secret / connection string
Use the connection details below to connect compute services to Event Hubs. For
| AZURE_EVENTHUB_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_EVENTHUB_TENANTID | Your tenant ID | `<tenant-id>` |
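For illustration, a minimal Python sketch that sends one event using these service principal values. `AZURE_EVENTHUB_CLIENTID` and `AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE` are assumed names for the remaining variables, and the event hub name is a placeholder; check your connection's configuration for the exact keys.

```python
# Minimal sketch: send a test event with service principal credentials.
# AZURE_EVENTHUB_CLIENTID and AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE are assumed names.
import os
from azure.identity import ClientSecretCredential
from azure.eventhub import EventHubProducerClient, EventData

credential = ClientSecretCredential(
    tenant_id=os.environ["AZURE_EVENTHUB_TENANTID"],
    client_id=os.environ["AZURE_EVENTHUB_CLIENTID"],          # assumed variable name
    client_secret=os.environ["AZURE_EVENTHUB_CLIENTSECRET"],
)

producer = EventHubProducerClient(
    fully_qualified_namespace=os.environ["AZURE_EVENTHUB_FULLYQUALIFIEDNAMESPACE"],  # assumed
    eventhub_name="<event-hub-name>",
    credential=credential,
)

with producer:
    batch = producer.create_batch()
    batch.add(EventData("connectivity check"))
    producer.send_batch(batch)
```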
-### Java - Spring Boot
+### Azure Spring Apps
#### Spring Boot secret/connection string
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-key-vault.md
Supported authentication and clients for App Service, Container Apps and Azure S
|--|--|--|-|--| | .NET | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | | Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
| Node.js | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | | Python | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | | None | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|-|--|
+| .NET | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
| Java | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | | | | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | | | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
Use the connection details below to connect compute services to Azure Key Vault. For each example below, replace the placeholder texts `<vault-name>`, `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your key vault name, client-ID, client secret and tenant ID.
-### .NET, Java, Node.JS, Python
-
-#### System-assigned managed identity
+### System-assigned managed identity
| Default environment variable name | Description | Example value |
|--|--|--|
| AZURE_KEYVAULT_SCOPE | Your Azure RBAC scope | `https://management.azure.com/.default` |
| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://<vault-name>.vault.azure.net/` |
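For example, a minimal Python sketch that reads a secret with the managed identity configuration above; the secret name is a placeholder.

```python
# Minimal sketch: read one secret using the endpoint that Service Connector injects
# and the compute service's managed identity. The secret name is a placeholder.
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url=os.environ["AZURE_KEYVAULT_RESOURCEENDPOINT"],
    credential=DefaultAzureCredential(),
)
secret = client.get_secret("<secret-name>")
print(secret.value)
```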
-#### User-assigned managed identity
+### User-assigned managed identity
| Default environment variable name | Description | Example value | |--|-|--|
Use the connection details below to connect compute services to Azure Key Vault.
| AZURE_KEYVAULT_RESOURCEENDPOINT | Your Key Vault endpoint | `https://<vault-name>.vault.azure.net/` | | AZURE_KEYVAULT_CLIENTID | Your Client ID | `<client-ID>` |
-#### Service principal
+### Service principal
| Default environment variable name | Description | Example value | |--|-|--|
Use the connection details below to connect compute services to Azure Key Vault.
| AZURE_KEYVAULT_CLIENTSECRET | Your Client secret | `<client-secret>` | | AZURE_KEYVAULT_TENANTID | Your Tenant ID | `<tenant-id>` |
-### Java - Spring Boot
-
-#### Java - Spring Boot service principal
+### Java - Spring Boot service principal
| Default environment variable name | Description | Example value | |--|--|-|
service-connector How To Integrate Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-mysql.md
This page shows the supported authentication types and client types of Azure Dat
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
|--|--|--|--|--|
| .NET (MySqlConnector) | | | ![yes icon](./media/green-check.png) | |
Supported authentication and clients for App Service, Container Apps and Azure S
| Ruby (mysql2) | | | ![yes icon](./media/green-check.png) | |
| None | | | ![yes icon](./media/green-check.png) | |
-### [Azure Container Apps](#tab/container-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-||-|--|--|-|
-| .NET (MySqlConnector) | | | ![yes icon](./media/green-check.png) | |
-| Go (go-sql-driver for mysql) | | | ![yes icon](./media/green-check.png) | |
-| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Node.js (mysql) | | | ![yes icon](./media/green-check.png) | |
-| Python (mysql-connector-python) | | | ![yes icon](./media/green-check.png) | |
-| Python-Django | | | ![yes icon](./media/green-check.png) | |
-| PHP (mysqli) | | | ![yes icon](./media/green-check.png) | |
-| Ruby (mysql2) | | | ![yes icon](./media/green-check.png) | |
-| None | | | ![yes icon](./media/green-check.png) | |
-
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-||-|--|--|-|
-| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
---

## Default environment variable names or application properties

Use the connection details below to connect compute services to Azure Database for MySQL. For each example below, replace the placeholder texts `<MySQL-DB-name>`, `<MySQL-DB-username>`, `<MySQL-DB-password>`, `<server-host>`, and `<port>` with your Azure Database for MySQL name, Azure Database for MySQL username, Azure Database for MySQL password, server host, and port.
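For illustration, a minimal Python (mysql-connector-python) sketch using these placeholders directly; in a connected app you would instead read the equivalent values from the environment variables Service Connector injects.

```python
# Minimal sketch: connect to Azure Database for MySQL with mysql-connector-python.
# All values below are placeholders from the paragraph above.
import mysql.connector

connection = mysql.connector.connect(
    host="<server-host>",
    port=3306,  # replace with your <port> if it differs
    user="<MySQL-DB-username>",
    password="<MySQL-DB-password>",
    database="<MySQL-DB-name>",
    ssl_disabled=False,  # Azure Database for MySQL requires TLS by default
)
print(connection.is_connected())
```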
service-connector How To Integrate Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-postgres.md
This page shows the supported authentication types and client types of Azure Dat
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
|--|--|--|--|--|
Supported authentication and clients for App Service, Container Apps and Azure S
| Ruby (ruby-pg) | | | ![yes icon](./media/green-check.png) | |
| None | | | ![yes icon](./media/green-check.png) | |
-### [Azure Container Apps](#tab/container-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-||-|--|--|-|
-| .NET (ADO.NET) | | | ![yes icon](./media/green-check.png) | |
-| Go (pg) | | | ![yes icon](./media/green-check.png) | |
-| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Node.js (pg) | | | ![yes icon](./media/green-check.png) | |
-| Python (psycopg2) | | | ![yes icon](./media/green-check.png) | |
-| Python-Django | | | ![yes icon](./media/green-check.png) | |
-| PHP (native) | | | ![yes icon](./media/green-check.png) | |
-| Ruby (ruby-pg) | | | ![yes icon](./media/green-check.png) | |
-| None | | | ![yes icon](./media/green-check.png) | |
-
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-||-|--|--|-|
-| Java (JDBC) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (JDBC) | | | ![yes icon](./media/green-check.png) | |
---

## Default environment variable names or application properties

Use the connection details below to connect compute services to PostgreSQL. For each example below, replace the placeholder texts `<postgreSQL-server-name>`, `<database-name>`, `<username>`, and `<password>` with your server name, database name, username and password.
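For illustration, a minimal Python (psycopg2) sketch using these placeholders; the full host name suffix is an assumption based on the standard Azure Database for PostgreSQL domain, and in a connected app you would read the equivalent values from the injected environment variables.

```python
# Minimal sketch: connect to Azure Database for PostgreSQL with psycopg2.
# All values below are placeholders; the host suffix is an assumption.
import psycopg2

connection = psycopg2.connect(
    host="<postgreSQL-server-name>.postgres.database.azure.com",
    dbname="<database-name>",
    user="<username>",
    password="<password>",
    sslmode="require",  # Azure Database for PostgreSQL enforces TLS by default
)
with connection.cursor() as cursor:
    cursor.execute("SELECT version()")
    print(cursor.fetchone())
```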
service-connector How To Integrate Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-redis-cache.md
This page shows the supported authentication types and client types of Azure Cac
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|-|--|--|-|
-| .NET (StackExchange.Redis) | | | ![yes icon](./media/green-check.png) | |
-| Go (go-redis) | | | ![yes icon](./media/green-check.png) | |
-| Java (Jedis) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (spring-boot-starter-data-redis) | | | ![yes icon](./media/green-check.png) | |
-| Node.js (node-redis) | | | ![yes icon](./media/green-check.png) | |
-| Python (redis-py) | | | ![yes icon](./media/green-check.png) | |
-| None | | | ![yes icon](./media/green-check.png) | |
-
-### [Azure Container Apps](#tab/container-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|-|--|--|-|
-| .NET (StackExchange.Redis) | | | ![yes icon](./media/green-check.png) | |
-| Go (go-redis) | | | ![yes icon](./media/green-check.png) | |
-| Java (Jedis) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (spring-boot-starter-data-redis) | | | ![yes icon](./media/green-check.png) | |
-| Node.js (node-redis) | | | ![yes icon](./media/green-check.png) | |
-| Python (redis-py) | | | ![yes icon](./media/green-check.png) | |
-| None | | | ![yes icon](./media/green-check.png) | |
-
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|-|--|--|-|
-| Java (Jedis) | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot (spring-boot-starter-data-redis) | | | ![yes icon](./media/green-check.png) | |
--
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|-|--|--|-|
+| .NET | | | ![yes icon](./media/green-check.png) | |
+| Go | | | ![yes icon](./media/green-check.png) | |
+| Java | | | ![yes icon](./media/green-check.png) | |
+| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | | | ![yes icon](./media/green-check.png) | |
+| Python | | | ![yes icon](./media/green-check.png) | |
+| None | | | ![yes icon](./media/green-check.png) | |
## Default environment variable names or application properties
service-connector How To Integrate Service Bus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-service-bus.md
Supported authentication and clients for App Service, Container Apps and Azure S
### [Azure Spring Apps](#tab/spring-apps)
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|--|::|::|::|::|
-| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|--|::|::|::|::|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Go | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
Use the connection details below to connect compute services to Service Bus. For each example below, replace the placeholder texts `<Service-Bus-namespace>`, `<access-key-name>`, `<access-key-value>` `<client-ID>`, `<client-secret>`, and `<tenant-id>` with your own Service Bus namespace, shared access key name, shared access key value, client ID, client secret and tenant ID.
-### .NET, Java, Node.JS, Python
+### Azure App Service and Azure Container Apps
#### Secret/connection string
Use the connection details below to connect compute services to Service Bus. For
| AZURE_SERVICEBUS_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_SERVICEBUS_TENANTID | Your tenant ID | `<tenant-id>` |
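For illustration, the following minimal Java sketch sends a message using the service principal values above. The client ID variable name (`AZURE_SERVICEBUS_CLIENTID`), the namespace placeholder, and the queue name `orders` are assumptions for this example rather than values emitted by Service Connector.

```java
import com.azure.identity.ClientSecretCredential;
import com.azure.identity.ClientSecretCredentialBuilder;
import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusMessage;
import com.azure.messaging.servicebus.ServiceBusSenderClient;

public class ServiceBusConnectionSample {
    public static void main(String[] args) {
        // Service principal credential built from the injected environment variables.
        // AZURE_SERVICEBUS_CLIENTID is an assumption; only CLIENTSECRET and TENANTID appear above.
        ClientSecretCredential credential = new ClientSecretCredentialBuilder()
                .clientId(System.getenv("AZURE_SERVICEBUS_CLIENTID"))
                .clientSecret(System.getenv("AZURE_SERVICEBUS_CLIENTSECRET"))
                .tenantId(System.getenv("AZURE_SERVICEBUS_TENANTID"))
                .build();

        // "<Service-Bus-namespace>" and the queue name "orders" are placeholders for this sketch.
        ServiceBusSenderClient sender = new ServiceBusClientBuilder()
                .credential("<Service-Bus-namespace>.servicebus.windows.net", credential)
                .sender()
                .queueName("orders")
                .buildClient();

        sender.sendMessage(new ServiceBusMessage("hello"));
        sender.close();
    }
}
```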
-### Java - Spring Boot
+### Azure Spring Apps
#### Spring Boot secret/connection string
service-connector How To Integrate Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-sql-database.md
This page shows all the supported compute services, clients, and authentication
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
- | Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal | |--|:--:|::|::|:--:| | .NET | | | ![yes icon](./media/green-check.png) | |
Supported authentication and clients for App Service, Container Apps and Azure S
| Ruby | | | ![yes icon](./media/green-check.png) | | | None | | | ![yes icon](./media/green-check.png) | |
-### [Azure Container Apps](#tab/container-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|--|:--:|::|::|:--:|
-| .NET | | | ![yes icon](./media/green-check.png) | |
-| Go | | | ![yes icon](./media/green-check.png) | |
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| PHP | | | ![yes icon](./media/green-check.png) | |
-| Node.js | | | ![yes icon](./media/green-check.png) | |
-| Python | | | ![yes icon](./media/green-check.png) | |
-| Python - Django | | | ![yes icon](./media/green-check.png) | |
-| Ruby | | | ![yes icon](./media/green-check.png) | |
-| None | | | ![yes icon](./media/green-check.png) | |
-
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|--|:--:|::|::|:--:|
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
--- ## Default environment variable names or application properties Use the environment variable names and application properties listed below to connect compute services to Azure SQL Database using a secret and a connection string.
-### Connect an Azure App Service instance
+### Azure App Service and Azure Container Apps
-Use the connection details below to connect Azure App Service instances with .NET, Go, Java, Java - Spring Boot, PHP, Node.js, Python, Python - Django and Ruby. For each example below, replace the placeholder texts `<sql-server>`, `<sql-database>`, `<sql-username>`, and `<sql-password>` with your own server name, database name, user ID and password.
+Use the connection details below to connect Azure App Service and Azure Container Apps instances with .NET, Go, Java, Java - Spring Boot, PHP, Node.js, Python, Python - Django and Ruby. For each example below, replace the placeholder texts `<sql-server>`, `<sql-database>`, `<sql-username>`, and `<sql-password>` with your own server name, database name, user ID and password.
-#### Azure App Service with .NET (sqlClient)
+#### .NET (sqlClient)
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value | > | | | | > | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `Data Source=<sql-server>.database.windows.net,1433;Initial Catalog=<sql-database>;User ID=<sql-username>;Password=<sql-password>` |
-#### Azure App Service with Java Database Connectivity (JDBC)
+#### Java Database Connectivity (JDBC)
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value | > | | | | > | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `jdbc:sqlserver://<sql-server>.database.windows.net:1433;databaseName=<sql-database>;user=<sql-username>;password=<sql-password>;` |
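For illustration, the following minimal Java sketch reads `AZURE_SQL_CONNECTIONSTRING` and opens a JDBC connection. It assumes the Microsoft SQL Server JDBC driver (`mssql-jdbc`) is on the classpath, and the test query is only an example.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AzureSqlConnectionSample {
    public static void main(String[] args) throws Exception {
        // Service Connector injects the JDBC connection string shown above.
        String connectionString = System.getenv("AZURE_SQL_CONNECTIONSTRING");

        try (Connection connection = DriverManager.getConnection(connectionString);
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT 1")) {
            resultSet.next();
            System.out.println("Connected: " + resultSet.getInt(1));
        }
    }
}
```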
-#### Azure App Service with Java Spring Boot (spring-boot-starter-jdbc)
+#### Java Spring Boot (spring-boot-starter-jdbc)
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value |
Use the connection details below to connect Azure App Service instances with .NE
> | spring.datasource.username | Azure SQL Database datasource username | `<sql-user>` | > | spring.datasource.password | Azure SQL Database datasource password | `<sql-password>` |
-#### Azure App Service with Go (go-mssqldb)
+#### Go (go-mssqldb)
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value | > | | | | > | AZURE_SQL_CONNECTIONSTRING | Azure SQL Database connection string | `server=<sql-server>.database.windows.net;port=1433;database=<sql-database>;user id=<sql-username>;password=<sql-password>;` |
-#### Azure App Service with Node.js
+#### Node.js
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value |
Use the connection details below to connect Azure App Service instances with .NE
> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-username>` | > | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
-#### Azure App Service with PHP
+#### PHP
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value |
Use the connection details below to connect Azure App Service instances with .NE
> | AZURE_SQL_UID | Azure SQL Database unique identifier (UID) | `<sql-username>` | > | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
-#### Azure App Service with Python (pyobdc)
+#### Python (pyodbc)
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value |
Use the connection details below to connect Azure App Service instances with .NE
> | AZURE_SQL_USER | Azure SQL Database user | `<sql-username>` | > | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
-#### Azure App Service with Django (mssql-django)
+#### Django (mssql-django)
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value |
Use the connection details below to connect Azure App Service instances with .NE
> | AZURE_SQL_USER | Azure SQL Database user | `<sql-username>` | > | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
-#### Azure App Service with Ruby
+#### Ruby
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value |
Use the connection details below to connect Azure App Service instances with .NE
> | AZURE_SQL_USERNAME | Azure SQL Database username | `<sql-username>` | > | AZURE_SQL_PASSWORD | Azure SQL Database password | `<sql-password>` |
-### Connect an Azure Spring Cloud instance
+### Azure Spring Cloud
Use the connection details below to connect Azure Spring Cloud instances with Java Spring Boot.
-#### Azure Spring Cloud with Java Spring Boot (spring-boot-starter-jdbc)
+#### Java Spring Boot (spring-boot-starter-jdbc)
> [!div class="mx-tdBreakAll"] > | Default environment variable name | Description | Sample value |
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md
Title: Integrate Azure Blob Storage with Service Connector description: Integrate Azure Blob Storage into your application with Service Connector--++
Supported authentication and clients for App Service, Container Apps and Azure S
| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | | Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| None | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Supported authentication and clients for App Service, Container Apps and Azure S
Use the connection details below to connect compute services to Blob Storage. For each example below, replace the placeholder texts `<account name>`, `<account-key>`, `<client-ID>`, `<client-secret>`, `<tenant-ID>`, and `<storage-account-name>` with your own account name, account key, client ID, client secret, tenant ID and storage account name.
-### .NET, Java, Node.JS, Python
+### Azure App Service and Azure Container Apps
#### Secret / connection string
Use the connection details below to connect compute services to Blob Storage. Fo
| AZURE_STORAGEBLOB_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_STORAGEBLOB_TENANTID | Your tenant ID | `<tenant-ID>` |
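For illustration, the following minimal Java sketch builds a `BlobServiceClient` from the service principal values above. The client ID variable name (`AZURE_STORAGEBLOB_CLIENTID`) and the endpoint built from `<storage-account-name>` are assumptions for this example.

```java
import com.azure.identity.ClientSecretCredential;
import com.azure.identity.ClientSecretCredentialBuilder;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class BlobConnectionSample {
    public static void main(String[] args) {
        // Service principal credential from the injected variables.
        // AZURE_STORAGEBLOB_CLIENTID is an assumption; only CLIENTSECRET and TENANTID appear above.
        ClientSecretCredential credential = new ClientSecretCredentialBuilder()
                .clientId(System.getenv("AZURE_STORAGEBLOB_CLIENTID"))
                .clientSecret(System.getenv("AZURE_STORAGEBLOB_CLIENTSECRET"))
                .tenantId(System.getenv("AZURE_STORAGEBLOB_TENANTID"))
                .build();

        // The endpoint is built from the storage account name placeholder for this sketch.
        BlobServiceClient blobService = new BlobServiceClientBuilder()
                .endpoint("https://<storage-account-name>.blob.core.windows.net")
                .credential(credential)
                .buildClient();

        blobService.listBlobContainers().forEach(container -> System.out.println(container.getName()));
    }
}
```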
-### Java - Spring Boot
+### Azure Spring Cloud
-#### Java - Spring Boot secret / connection string
+#### Secret / connection string
| Application properties | Description | Example value | |--|--||
service-connector How To Integrate Storage File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-file.md
This page shows the supported authentication types and client types of Azure Fil
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
- | Client Type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |--|-|--|--|-| | .NET | | | ![yes icon](./media/green-check.png) | |
Supported authentication and clients for App Service, Container Apps and Azure S
| Ruby | | | ![yes icon](./media/green-check.png) | | | None | | | ![yes icon](./media/green-check.png) | |
-### [Azure Container Apps](#tab/container-apps)
-
-| Client Type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|-|--|--|-|
-| .NET | | | ![yes icon](./media/green-check.png) | |
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
-| Node.js | | | ![yes icon](./media/green-check.png) | |
-| Python | | | ![yes icon](./media/green-check.png) | |
-| PHP | | | ![yes icon](./media/green-check.png) | |
-| Ruby | | | ![yes icon](./media/green-check.png) | |
-| None | | | ![yes icon](./media/green-check.png) | |
-
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client Type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|-|--|--|-|
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Java - Spring Boot | | | ![yes icon](./media/green-check.png) | |
--- ## Default environment variable names or application properties Use the connection details below to connect compute services to Azure Files. For each example below, replace the placeholder texts `<account-name>`, `<account-key>`, `<storage-account-name>` and `<storage-account-key>` with your own account name, account key, storage account name, and storage account key.
-### .NET, Java, Node.JS, Python, PHP and Ruby secret / connection string
+### Azure App Service secret / connection string
| Default environment variable name | Description | Example value | ||--|-| | AZURE_STORAGEFILE_CONNECTIONSTRING | File storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account-name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
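For illustration, the following minimal Java sketch uses `AZURE_STORAGEFILE_CONNECTIONSTRING` to list file shares; the listing call is only an example.

```java
import com.azure.storage.file.share.ShareServiceClient;
import com.azure.storage.file.share.ShareServiceClientBuilder;

public class FileShareConnectionSample {
    public static void main(String[] args) {
        // Connection string injected by Service Connector, as listed above.
        String connectionString = System.getenv("AZURE_STORAGEFILE_CONNECTIONSTRING");

        ShareServiceClient shareService = new ShareServiceClientBuilder()
                .connectionString(connectionString)
                .buildClient();

        shareService.listShares().forEach(share -> System.out.println(share.getName()));
    }
}
```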
-### Java - Spring Boot secret / connection string
+### Azure Spring Cloud secret / connection string
| Application properties | Description | Example value | |--|||
service-connector How To Integrate Storage Queue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-queue.md
Supported authentication and clients for App Service, Container Apps and Azure S
### [Azure Spring Apps](#tab/spring-apps)
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|--|--|--|--|--|
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
-| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
+|--|--|--|--|--|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java - Spring Boot | ![yes icon](./media/green-check.png) | | | |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
Use the connection details below to connect compute services to Queue Storage. F
| AZURE_STORAGEQUEUE_CLIENTSECRET | Your client secret | `<client-secret>` | | AZURE_STORAGEQUEUE_TENANTID | Your tenant ID | `<tenant-ID>` |
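For illustration, the following minimal Java sketch shows the managed identity option from the table above, using `DefaultAzureCredential`. The endpoint built from `<storage-account-name>` and the queue name `jobs` are assumptions for this example.

```java
import com.azure.identity.DefaultAzureCredential;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.storage.queue.QueueClient;
import com.azure.storage.queue.QueueClientBuilder;

public class QueueConnectionSample {
    public static void main(String[] args) {
        // DefaultAzureCredential picks up the system-assigned managed identity when running in Azure.
        DefaultAzureCredential credential = new DefaultAzureCredentialBuilder().build();

        // Endpoint and queue name are placeholders for this sketch.
        QueueClient queue = new QueueClientBuilder()
                .endpoint("https://<storage-account-name>.queue.core.windows.net")
                .queueName("jobs")
                .credential(credential)
                .buildClient();

        queue.sendMessage("hello");
    }
}
```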
-### Java - Spring Boot
+### Azure Spring Apps
#### Java - Spring Boot secret / connection string
service-connector How To Integrate Storage Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-table.md
This page shows the supported authentication types and client types of Azure Tab
Supported authentication and clients for App Service, Container Apps and Azure Spring Apps:
-### [Azure App Service](#tab/app-service)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|-|-|--|--|-|
-| .NET | | | ![yes icon](./media/green-check.png) | |
-| Java | | | ![yes icon](./media/green-check.png) | |
-| Node.js | | | ![yes icon](./media/green-check.png) | |
-| Python | | | ![yes icon](./media/green-check.png) | |
-
-### [Azure Container Apps](#tab/container-apps)
- | Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal | |-|-|--|--|-| | .NET | | | ![yes icon](./media/green-check.png) | |
Supported authentication and clients for App Service, Container Apps and Azure S
| Node.js | | | ![yes icon](./media/green-check.png) | | | Python | | | ![yes icon](./media/green-check.png) | |
-### [Azure Spring Apps](#tab/spring-apps)
-
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret / connection string | Service principal |
-|-|-|--|--|-|
-| Java | | | ![yes icon](./media/green-check.png) | |
--- ## Default environment variable names or application properties Use the connection details below to connect compute services to Azure Table Storage. For each example below, replace the placeholder texts `<account-name>` and `<account-key>` with your own account name and account key.
service-connector How To Integrate Web Pubsub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-web-pubsub.md
Supported authentication and clients for App Service, Container Apps and Azure S
### [Azure Spring Apps](#tab/spring-apps)
-| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
-|-|::|::|::|::|
-| Java | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Client type | System-assigned managed identity | User-assigned managed identity | Secret/connection string | Service principal |
+|-|::|::|::|::|
+| .NET | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Java | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Node.js | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
+| Python | ![yes icon](./media/green-check.png) | | ![yes icon](./media/green-check.png) | ![yes icon](./media/green-check.png) |
spring-apps How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-config-server.md
This article shows you how to configure a managed Spring Cloud Config Server in Azure Spring Apps service.
-Spring Cloud Config Server provides server and client-side support for an externalized configuration in a distributed system. The Config Server instance provides a central place to manage external properties for applications across all environments. For more information, see the [Spring Cloud Config Server reference](https://spring.io/projects/spring-cloud-config).
+Spring Cloud Config Server provides server and client-side support for an externalized configuration in a distributed system. The Config Server instance provides a central place to manage external properties for applications across all environments. For more information, see the [Spring Cloud Config documentation](https://spring.io/projects/spring-cloud-config).
## Prerequisites
-* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-* An already provisioned and running Azure Spring Apps service of basic or standard tier. To set up and launch an Azure Spring Apps service, see [Quickstart: Launch a Java Spring application by using the Azure CLI](./quickstart.md). Spring Cloud Config Server is not applicable to enterprise tier.
+- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- An already provisioned and running Azure Spring Apps service of basic or standard tier. To set up and launch an Azure Spring Apps service, see [Quickstart: Deploy your first application to Azure Spring Apps](./quickstart.md). Spring Cloud Config Server isn't applicable to enterprise tier.
## Restriction
-There are some restrictions when you use Config Server with a Git back end. Some properties are automatically injected into your application environment to access Config Server and Service Discovery. If you also configure those properties from your Config Server files, you might experience conflicts and unexpected behavior. The properties include:
+There are some restrictions when you use Config Server with a Git back end. The following properties are automatically injected into your application environment to access Config Server and Service Discovery. If you also configure those properties from your Config Server files, you might experience conflicts and unexpected behavior.
```yaml eureka.client.service-url.defaultZone
spring.jmx.enabled
``` > [!CAUTION]
-> We strongly recommend that you *do not* put the above properties in your Config Server application files.
+> Don't put these properties in your Config Server application files.
## Create your Config Server files
-Azure Spring Apps supports Azure DevOps, GitHub, GitLab, and Bitbucket for storing your Config Server files. When you've your repository ready, create the configuration files with the following instructions and store them there.
+Azure Spring Apps supports Azure DevOps, GitHub, GitLab, and Bitbucket for storing your Config Server files. When your repository is ready, you can create the configuration files and store them there.
-Additionally, some configurable properties are available only for certain types. The following subsections list the properties for each repository type.
+Some configurable properties are available only for certain types. The following sections describe the properties for each repository type.
> [!NOTE] > Config Server takes `master` (on Git) as the default label if you don't specify one. However, GitHub has recently changed the default branch from `master` to `main`. To avoid Azure Spring Apps Config Server failure, be sure to pay attention to the default label when setting up Config Server with GitHub, especially for newly-created repositories. ### Public repository
-When you use a public repository, your configurable properties are more limited.
+When you use a public repository, your configurable properties are more limited than with a private repository.
-All configurable properties that are used to set up the public Git repository are listed in the following table:
+The following table lists the configurable properties that you can use to set up a public Git repository.
> [!NOTE] > Using a hyphen (-) to separate words is the only naming convention that's currently supported. For example, you can use *default-label*, but not *defaultLabel*.
-| Property | Required | Feature |
-| :-- | -- | |
-| `uri` | Yes | The URI of the Git repository that's used as the Config Server back end begins with *http://*, *https://*, *git@*, or *ssh://*. |
-| `default-label` | No | The default label of the Git repository, should be the *branch name*, *tag name*, or *commit-id* of the repository. |
-| `search-paths` | No | An array of strings that are used to search subdirectories of the Git repository. |
+| Property | Required | Feature |
+|:-|-||
+| `uri` | Yes | The URI of the Git repository that's used as the Config Server back end. Should begin with `http://`, `https://`, `git@`, or `ssh://`. |
+| `default-label` | No | The default label of the Git repository. Should be the branch name, tag name, or commit ID of the repository. |
+| `search-paths` | No | An array of strings that are used to search subdirectories of the Git repository. |
### Private repository with SSH authentication
-All configurable properties used to set up private Git repository with SSH are listed in the following table:
+The following table lists the configurable properties that you can use to set up a private Git repository with SSH.
> [!NOTE] > Using a hyphen (-) to separate words is the only naming convention that's currently supported. For example, you can use *default-label*, but not *defaultLabel*.
-| Property | Required | Feature |
-| :- | -- | |
-| `uri` | Yes | The URI of the Git repository used as the Config Server back end, should be started with *http://*, *https://*, *git@*, or *ssh://*. |
-| `default-label` | No | The default label of the Git repository, should be the *branch name*, *tag name*, or *commit-id* of the repository. |
-| `search-paths` | No | An array of strings used to search subdirectories of the Git repository. |
-| `private-key` | No | The SSH private key to access the Git repository, _required_ when the URI starts with *git@* or *ssh://*. |
-| `host-key` | No | The host key of the Git repository server shouldn't include the algorithm prefix as covered by `host-key-algorithm`. |
-| `host-key-algorithm` | No | The host key algorithm should be *ssh-dss*, *ssh-rsa*, *ecdsa-sha2-nistp256*, *ecdsa-sha2-nistp384*, or *ecdsa-sha2-nistp521*. *Required* only if `host-key` exists. |
-| `strict-host-key-checking` | No | Indicates whether the Config Server instance will fail to start when using the private `host-key`. Should be *true* (default value) or *false*. |
+| Property | Required | Feature |
+|:|-|-|
+| `uri` | Yes | The URI of the Git repository used as the Config Server back end. Should begin with `http://`, `https://`, `git@`, or `ssh://`. |
+| `default-label` | No | The default label of the Git repository. Should be the branch name, tag name, or commit ID of the repository. |
+| `search-paths` | No | An array of strings used to search subdirectories of the Git repository. |
+| `private-key` | No | The SSH private key to access the Git repository. Required when the URI starts with `git@` or `ssh://`. |
+| `host-key` | No | The host key of the Git repository server. Shouldn't include the algorithm prefix as covered by `host-key-algorithm`. |
+| `host-key-algorithm` | No | The host key algorithm. Should be *ssh-dss*, *ssh-rsa*, *ecdsa-sha2-nistp256*, *ecdsa-sha2-nistp384*, or *ecdsa-sha2-nistp521*. Required only if `host-key` exists. |
+| `strict-host-key-checking` | No | Indicates whether the Config Server instance will fail to start when using the private `host-key`. Should be *true* (default value) or *false*. |
> [!NOTE]
-> Config Server doesn't support SHA-2 signatures yet and we are actively working on to support it in future release. Before that, please use SHA-1 signatures or basic auth instead.
+> Config Server doesn't support SHA-2 signatures yet. Until support is added, use SHA-1 signatures or basic auth instead.
### Private repository with basic authentication
-All configurable properties used to set up private Git repository with basic authentication are listed below.
+The following table lists the configurable properties that you can use to set up a private Git repository with basic authentication.
> [!NOTE] > Using a hyphen (-) to separate words is the only naming convention that's currently supported. For example, use *default-label*, not *defaultLabel*.
-| Property | Required | Feature |
-| :-- | -- | |
-| `uri` | Yes | The URI of the Git repository that's used as the Config Server back end should be started with *http://*, *https://*, *git@*, or *ssh://*. |
-| `default-label` | No | The default label of the Git repository, should be the *branch name*, *tag name*, or *commit-id* of the repository. |
-| `search-paths` | No | An array of strings used to search subdirectories of the Git repository. |
-| `username` | No | The username that's used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. |
-| `password` | No | The password or personal access token used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. |
+| Property | Required | Feature |
+|:-|-|--|
+| `uri` | Yes | The URI of the Git repository that's used as the Config Server back end. Should begin with `http://`, `https://`, `git@`, or `ssh://`. |
+| `default-label` | No | The default label of the Git repository. Should be the branch name, tag name, or commit ID of the repository. |
+| `search-paths` | No | An array of strings used to search subdirectories of the Git repository. |
+| `username` | No | The username that's used to access the Git repository server. Required when the Git repository server supports HTTP basic authentication. |
+| `password` | No | The password or personal access token used to access the Git repository server. Required when the Git repository server supports HTTP basic authentication. |
> [!NOTE]
-> Many `Git` repository servers support the use of tokens rather than passwords for HTTP Basic Authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Apps.
-> GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
+> Many Git repository servers support the use of tokens rather than passwords for HTTP basic authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Apps.
+>
+> GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication requirements for Git operations](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
### Other Git repositories
-All configurable properties used to set up Git repositories with pattern are listed below.
+The following table lists the configurable properties you can use to set up Git repositories with a pattern.
> [!NOTE] > Using a hyphen (-) to separate words is the only naming convention that's currently supported. For example, use *default-label*, not *defaultLabel*.
-| Property | Required | Feature |
-| : | - | |
-| `repos` | No | A map consisting of the settings for a Git repository with a given name. |
-| `repos."uri"` | Yes on `repos` | The URI of the Git repository that's used as the Config Server back end should be started with *http://*, *https://*, *git@*, or *ssh://*. |
-| `repos."name"` | Yes on `repos` | A name to identify on the Git repository, _required_ only if `repos` exists. For example, *team-A*, *team-B*. |
-| `repos."pattern"` | No | An array of strings used to match an application name. For each pattern, use the `{application}/{profile}` format with wildcards. |
-| `repos."default-label"` | No | The default label of the Git repository should be the *branch name*, *tag name*, or *commit-id* of the repository. |
-| `repos."search-paths`" | No | An array of strings used to search subdirectories of the Git repository. |
-| `repos."username"` | No | The username that's used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. |
-| `repos."password"` | No | The password or personal access token used to access the Git repository server, _required_ when the Git repository server supports `Http Basic Authentication`. |
-| `repos."private-key"` | No | The SSH private key to access Git repository, _required_ when the URI starts with *git@* or *ssh://*. |
-| `repos."host-key"` | No | The host key of the Git repository server shouldn't include the algorithm prefix as covered by `host-key-algorithm`. |
-| `repos."host-key-algorithm"` | No | The host key algorithm should be *ssh-dss*, *ssh-rsa*, *ecdsa-sha2-nistp256*, *ecdsa-sha2-nistp384*, or *ecdsa-sha2-nistp521*. *Required* only if `host-key` exists. |
-| `repos."strict-host-key-checking"` | No | Indicates whether the Config Server instance will fail to start when using the private `host-key`. Should be *true* (default value) or *false*. |
-
-The following table shows some examples for the **Additional repositories** section. For more information, see [Pattern Matching and Multiple Repositories](https://cloud.spring.io/spring-cloud-config/reference/html/#_pattern_matching_and_multiple_repositories) in the Spring documentation.
-
-| Patterns | Description |
-| : | - |
-| *test-config-server-app-0/\** | The pattern and repository URI will match a Spring boot application named `test-config-server-app-0` with any profile. |
-| *test-config-server-app-1/dev* | The pattern and repository URI will match a Spring boot application named `test-config-server-app-1` with dev profile. |
-| *test-config-server-app-2/prod* | The pattern and repository URI will match a Spring boot application named `test-config-server-app-2` with prod profile. |
-
+| Property | Required | Feature |
+|:--|-|--|
+| `repos` | No | A map consisting of the settings for a Git repository with a given name. |
+| `repos."uri"` | Yes on `repos` | The URI of the Git repository that's used as the Config Server back end. Should begin with `http://`, `https://`, `git@`, or `ssh://`. |
+| `repos."name"` | Yes on `repos` | A name to identify the repository; for example, *team-A* or *team-B*. Required only if `repos` exists. |
+| `repos."pattern"` | No | An array of strings used to match an application name. For each pattern, use the format *{application}/{profile}* format with wildcards. |
+| `repos."default-label"` | No | The default label of the Git repository. Should be the branch name, tag name, or commit IOD of the repository. |
+| `repos."search-paths`" | No | An array of strings used to search subdirectories of the Git repository. |
+| `repos."username"` | No | The username used to access the Git repository server. Required when the Git repository server supports HTTP basic authentication. |
+| `repos."password"` | No | The password or personal access token used to access the Git repository server. Required when the Git repository server supports HTTP basic authentication. |
+| `repos."private-key"` | No | The SSH private key to access Git repository. Required when the URI begins with `git@` or `ssh://`. |
+| `repos."host-key"` | No | The host key of the Git repository server. Shouldn't include the algorithm prefix as covered by `host-key-algorithm`. |
+| `repos."host-key-algorithm"` | No | The host key algorithm. Should be *ssh-dss*, *ssh-rsa*, *ecdsa-sha2-nistp256*, *ecdsa-sha2-nistp384*, or *ecdsa-sha2-nistp521*. Required only if `host-key` exists. |
+| `repos."strict-host-key-checking"` | No | Indicates whether the Config Server instance will fail to start when using the private `host-key`. Should be *true* (default value) or *false*. |
+
+The following table shows some examples of patterns for configuring your service with an optional additional repository. For more information, see the [Additional repositories section](#additional-repositories) and the [Pattern Matching and Multiple Repositories section](https://cloud.spring.io/spring-cloud-config/reference/html/#_pattern_matching_and_multiple_repositories) of the Spring documentation.
+
+| Patterns | Description |
+|:--||
+| *test-config-server-app-0/\** | The pattern and repository URI matches a Spring Boot application named `test-config-server-app-0` with any profile. |
+| *test-config-server-app-1/dev* | The pattern and repository URI matches a Spring Boot application named `test-config-server-app-1` with a dev profile. |
+| *test-config-server-app-2/prod* | The pattern and repository URI matches a Spring Boot application named `test-config-server-app-2` with a prod profile. |
+ ## Attach your Config Server repository to Azure Spring Apps
-Now that your configuration files are saved in a repository, you need to connect Azure Spring Apps to it.
+Now that your configuration files are saved in a repository, use the following steps to connect Azure Spring Apps to the repository.
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Go to your Azure Spring Apps **Overview** page.
-
-3. Select **Config Server** in the left navigation pane.
-
-4. In the **Default repository** section, set **URI** to "https://github.com/Azure-Samples/piggymetrics-config".
+1. Go to your Azure Spring Apps **Overview** page.
-5. Select **Validate**.
+1. Select **Config Server** in the left navigation pane.
- ![Navigate to config server](media/how-to-config-server/portal-config.png)
+1. In the **Default repository** section, set **URI** to `https://github.com/Azure-Samples/piggymetrics-config`.
-6. When validation is complete, select **Apply** to save your changes.
+1. Select **Validate**.
- ![Validating config server](media/how-to-config-server/validate-complete.png)
+ :::image type="content" source="media/how-to-config-server/portal-config.png" lightbox="media/how-to-config-server/portal-config.png" alt-text="Screenshot of Azure portal showing the Config Server page.":::
-7. Updating the configuration can take a few minutes.
+1. When validation is complete, select **Apply** to save your changes.
- ![Updating config server](media/how-to-config-server/updating-config.png)
+ :::image type="content" source="media/how-to-config-server/validate-complete.png" lightbox="media/how-to-config-server/validate-complete.png" alt-text="Screenshot of Azure portal showing Config Server page with Apply button highlighted.":::
-8. You should get a notification when the configuration is complete.
+Updating the configuration can take a few minutes. You should get a notification when the configuration is complete.
### Enter repository information directly to the Azure portal
+You can enter repository information for the default repository and, optionally, for additional repositories.
+ #### Default repository
-* **Public repository**: In the **Default repository** section, in the **Uri** box, paste the repository URI. Set the **Label** to **config**. Ensure that the **Authentication** setting is **Public**, and then select **Apply** to finish.
+Use the steps in this section to enter repository information for a public or private repository.
-* **Private repository**: Azure Spring Apps supports basic password/token-based authentication and SSH.
+- **Public repository**: In the **Default repository** section, in the **Uri** box, paste the repository URI. Enter *config* for the **Label** setting. Ensure that the **Authentication** setting is *Public*, and then select **Apply**.
- * **Basic Authentication**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the **Authentication** ("pencil" icon) button. In the **Edit Authentication** pane, in the **Authentication type** drop-down list, select **HTTP Basic**, and then enter your username and password/token to grant access to Azure Spring Apps. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.
+- **Private repository**: Azure Spring Apps supports basic password/token-based authentication and SSH.
- ![The Edit Authentication pane basic auth](media/spring-cloud-tutorial-config-server/basic-auth.png)
+ - **Basic Authentication**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the setting under **Authentication** to open the **Edit Authentication** pane. In the **Authentication type** drop-down list, select **HTTP Basic**, and then enter your username and password/token to grant access to Azure Spring Apps. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.
- > [!CAUTION]
- > Some Git repository servers use a *personal-token* or an *access-token*, such as a password, for **Basic Authentication**. You can use that kind of token as a password in Azure Spring Apps, because it will never expire. But for other Git repository servers, such as Bitbucket and Azure DevOps Server, the *access-token* expires in one or two hours. This means that the option isn't viable when you use those repository servers with Azure Spring Apps.
- > GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
+ :::image type="content" source="media/how-to-config-server/basic-auth.png" lightbox="media/how-to-config-server/basic-auth.png" alt-text="Screenshot of the Default repository section showing authentication settings for Basic authentication.":::
- * **SSH**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the **Authentication** ("pencil" icon) button. In the **Edit Authentication** pane, in the **Authentication type** drop-down list, select **SSH**, and then enter your **Private key**. Optionally, specify your **Host key** and **Host key algorithm**. Be sure to include your public key in your Config Server repository. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.
+ > [!NOTE]
+ > Many Git repository servers support the use of tokens rather than passwords for HTTP basic authentication. Some repositories allow tokens to persist indefinitely. However, some Git repository servers, including Azure DevOps Server, force tokens to expire in a few hours. Repositories that cause tokens to expire shouldn't use token-based authentication with Azure Spring Apps.
+ >
+ > GitHub has removed support for password authentication, so you'll need to use a personal access token instead of password authentication for GitHub. For more information, see [Token authentication requirements for Git operations](https://github.blog/2020-12-15-token-authentication-requirements-for-git-operations/).
- ![The Edit Authentication pane ssh auth](media/spring-cloud-tutorial-config-server/ssh-auth.png)
+ - **SSH**: In the **Default repository** section, in the **Uri** box, paste the repository URI, and then select the setting under **Authentication** to open the **Edit Authentication** pane. In the **Edit Authentication** pane, in the **Authentication type** drop-down list, select **SSH**, and then enter your private key. Optionally, specify your host key and host key algorithm. Include your public key in your Config Server repository. Select **OK**, and then select **Apply** to finish setting up your Config Server instance.
+
+ :::image type="content" source="media/how-to-config-server/ssh-auth.png" lightbox="media/how-to-config-server/ssh-auth.png" alt-text="Screenshot of the Default repository section showing authentication settings for SSH authentication.":::
#### Additional repositories
-If you want to use an optional **Additional repositories** to configure your service, specify the **URI** and **Authentication** the same way as the **Default repository**. Be sure to include a **Name** for your pattern, and then select **Apply** to attach it to your instance.
+If you want to configure your service with an optional additional repository, specify the **Uri** and **Authentication** settings as you did for the default repository. Be sure to include a **Name** setting for your pattern, and then select **Apply** to attach it to your instance.
### Enter repository information into a YAML file
-If you've written a YAML file with your repository settings, you can import the file directly from your local machine to Azure Spring Apps. A simple YAML file for a private repository with basic authentication would look like this:
+If you've written a YAML file with your repository settings, you can import the file directly from your local machine to Azure Spring Apps. The following example shows a simple YAML file for a private repository with basic authentication.
```yaml spring:
spring:
```
-Select the **Import settings** button, and then select the YAML file from your project directory. Select **Import**, and then an `async` operation from your **Notifications** will pop up. After 1-2 minutes, it should report success.
+Select the **Import settings** button, and then select the YAML file from your project directory. Select **Import**.
-![The Config Server Notifications pane](media/spring-cloud-tutorial-config-server/local-yml-success.png)
-The information from your YAML file should be displayed in the Azure portal. Select **Apply** to finish.
+Your **Notifications** displays an `async` operation. Config Server should report success after 1-2 minutes. The information from your YAML file displays in the Azure portal. Select **Apply** to finish the import.
-## Using Azure Repos for Azure Spring Apps Configuration
+## Use Azure Repos for Azure Spring Apps configuration
-Azure Spring Apps can access Git repositories that are public, secured by SSH, or secured using HTTP basic authentication. We'll use that last option, as it's easier to create and manage with Azure Repos.
+Azure Spring Apps can access Git repositories that are public, secured by SSH, or secured using HTTP basic authentication. HTTP basic authentication is the easiest of the options for creating and managing repositories with Azure Repos.
-### Get repo url and credentials
+### Get repo URL and credentials
-1. In the Azure Repos portal for your project, select the **Clone** button:
+Use the following steps to get your repo URL and credentials.
- ![Picture of Clone Button](media/spring-cloud-tutorial-config-server/clone-button.png)
+1. In the Azure Repos portal for your project, select the **Clone** button.
-1. Copy the clone URL from the textbox. This URL will typically be in the form:
+1. Copy the clone URL from the textbox. This URL will typically be in the following form:
```text https://<organization name>@dev.azure.com/<organization name>/<project name>/_git/<repository name> ```
- Remove everything after `https://` and before `dev.azure.com`, including the `@`. The resulting URL should be in the form:
+ Remove everything after `https://` and before `dev.azure.com`, including the `@` symbol. The resulting URL should be in the following form:
```text https://dev.azure.com/<organization name>/<project name>/_git/<repository name>
Azure Spring Apps can access Git repositories that are public, secured by SSH, o
Save this URL for use in the next section.
-1. Select **Generate Git Credentials**. A username and password will appear and should be saved for use in the next section.
+1. Select **Generate Git Credentials** to display a username and password, which should be saved for use in the following section.
### Configure Azure Spring Apps to access the Git repository
Azure Spring Apps can access Git repositories that are public, secured by SSH, o
1. Select the service to configure.
-1. In the left pane of the service page, under **Settings**, select the **Config Server** tab. Configure the repository we previously created:
+1. In the left pane of the service page under **Settings**, select the **Config Server** tab. Configure the repository you created, as follows:
- * Add the repository URL that you've saved from the previous section.
- * Select **Authentication** and then select **HTTP Basic**.
- * The __username__ is the username saved from the previous section.
- * The __password__ is the password saved from the previous section.
- * Select **Apply** and then wait for the operation to succeed.
+ - Add the repository URI that you saved in the previous section.
+ - Select the setting under **Authentication** to open the **Edit Authentication** pane.
+ - For **Authentication type**, select **HTTP Basic**.
+ - For **Username**, specify the user name that you saved in the previous section.
+ - For **Password**, specify the password that you saved in the previous section.
+ - Select **OK**, select **Apply**, and then wait for the operation to complete.
- ![Spring Cloud config server](media/spring-cloud-tutorial-config-server/config-server-azure-repos.png)
+ :::image type="content" source="media/how-to-config-server/config-server-azure-repos.png" lightbox="media/how-to-config-server/config-server-azure-repos.png" alt-text="Screenshot of repository configuration settings.":::
## Delete your configuration
-You can select the **Reset** button that appears in the **Config Server** tab to erase your existing settings completely. Delete the config server settings if you want to connect your Config Server instance to another source, such as moving from GitHub to Azure DevOps.
+Select **Reset** on the **Config Server** tab to erase your existing settings. Delete the config server settings if you want to connect your Config Server instance to another source, such as when you're moving from GitHub to Azure DevOps.
## Config Server refresh
-When properties are changed, services consuming those properties need to be notified before changes can be made. The default solution for Spring Cloud Config is to manually trigger the [refresh event](https://spring.io/guides/gs/centralized-configuration/), which may not be feasible if there are lots of app instances. Instead, you can automatically refresh values from the config server by letting the config client poll for changes based on a refresh internal.
+When properties change, services that consume those properties must be notified before the changes can take effect. The default solution for Spring Cloud Config Server is to manually trigger the refresh event, which may not be feasible if there are many app instances. For more information, see [Centralized Configuration](https://spring.io/guides/gs/centralized-configuration/).
+
+Instead, you can automatically refresh values from Config Server by letting the config client poll for changes based on a refresh interval. Use the following steps to set up automatic refresh.
-1. Register a scheduled task to refresh the context in a given interval.
+1. Register a scheduled task to refresh the context in a given interval, as shown in the following example.
```java @ConditionalOnBean({RefreshEndpoint.class})
When properties are changed, services consuming those properties need to be noti
} ```
-2. Enable auto-refresh and set the appropriate refresh interval in your application.yml. In this example, the client will poll for config changes every 60 seconds, which is the minimum value you can set for refresh interval.
+1. Enable auto-refresh and set the appropriate refresh interval in your *application.yml* file. In the following example, the client polls for config changes every 60 seconds, which is the minimum value you can set for a refresh interval.
- By default auto-refresh is set to false and the refresh-interval is set to 60 seconds.
+ By default, auto-refresh is set to *false* and the refresh-interval is set to *60 seconds*.
```yaml spring:
When properties are changed, services consuming those properties need to be noti
- refresh ```
-3. Add @RefreshScope in your code. In this example, the variable connectTimeout will be automatically refreshed every 60 seconds.
+1. Add `@RefreshScope` to your code. In the following example, the variable `connectTimeout` is automatically refreshed every 60 seconds.
```java @RestController
When properties are changed, services consuming those properties need to be noti
} ```
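For reference, a complete controller using `@RefreshScope` might look like the following minimal sketch. The property name `timeout` and the endpoint path are assumptions for illustration.

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RefreshScope
@RestController
public class ConnectTimeoutController {

    // The property name "timeout" is an assumption for this sketch; it resolves from Config Server.
    @Value("${timeout:4000}")
    private String connectTimeout;

    @GetMapping("/connect-timeout")
    public String getConnectTimeout() {
        return connectTimeout;
    }
}
```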
-> [!TIP]
-> For more information, see this [sample project](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/config-client-polling) for more information.
+For more information, see the [config-client-polling sample](https://github.com/Azure-Samples/Azure-Spring-Cloud-Samples/tree/master/config-client-polling).
## Next steps
spring-apps How To Use Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-use-tls-certificate.md
You need to grant Azure Spring Apps access to your key vault before you import y
:::image type="content" source="media/use-tls-certificates/grant-key-vault-permission.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Permission pane showing and Get and List permissions highlighted." lightbox="media/use-tls-certificates/grant-key-vault-permission.png":::
-1. Under **Principal**, select your **Azure Spring Apps Resource Provider**.
+1. Under **Principal**, select your **Azure Spring Cloud Resource Provider**.
:::image type="content" source="media/use-tls-certificates/select-service-principal.png" alt-text="Screenshot of Azure portal 'Create an access policy' page with Principal pane showing and Azure Spring Apps Resource Provider highlighted." lightbox="media/use-tls-certificates/select-service-principal.png":::
spring-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/overview.md
The following quickstarts will help you get started:
* [Launch your first app](quickstart.md) * [Introduction to the sample app](quickstart-sample-app-introduction.md)
+The following documents will help you migrate existing Spring Boot apps to Azure Spring Apps:
+
+* [Migrate Spring Boot applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-boot-to-azure-spring-cloud)
+* [Migrate Spring Cloud applications to Azure Spring Apps](/azure/developer/java/migration/migrate-spring-cloud-to-azure-spring-cloud?pivots=sc-standard-tier)
+ The following quickstarts apply to Basic/Standard tier only. For Enterprise tier quickstarts, see the next section. * [Provision an Azure Spring Apps service instance](quickstart-provision-service-instance.md)
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
The `platform` section controls platform specific settings, such as the API lang
### Selecting the API language runtime version
-To configure the API language runtime version, set the `apiRuntime` property in the `platform` section to one of the following supported values.
-
-| Language runtime version | Operating system | Azure Functions version | `apiRuntime` value |
-|--|--|--|--|
-| .NET Core 3.1 | Windows | 3.x | `dotnet:3.1` |
-| .NET 6.0 in-process | Windows | 4.x | `dotnet:6.0` |
-| .NET 6.0 isolated | Windows | 4.x | `dotnet-isolated:6.0` |
-| Node.js 12.x | Linux | 3.x | `node:12` |
-| Node.js 14.x | Linux | 4.x | `node:14` |
-| Node.js 16.x | Linux | 4.x | `node:16` |
-| Python 3.8 | Linux | 3.x | `python:3.8` |
-| Python 3.9 | Linux | 4.x | `python:3.9` |
-
-The following example configuration demonstrates how to use the `apiRuntime` property to select Node.js 16 as the API language runtime version.
-
-```json
-{
- "platform": {
- "apiRuntime": "node:16"
- }
-}
-```
## Networking
static-web-apps Enterprise Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/enterprise-edge.md
- Title: Enterprise-grade edge (preview) in Azure Static Web Apps
-description: Learn about Azure Static Web Apps enterprise-grade edge (Preview)
+ Title: Enterprise-grade edge in Azure Static Web Apps
+description: Learn about Azure Static Web Apps enterprise-grade edge
Last updated 01/11/2021
-# Enterprise-grade edge (Preview)
+# Enterprise-grade edge
-Use Azure Static Web Apps enterprise-grade edge (Preview) to enable faster page loads, enhance security, and optimize reliability for your global applications. Enterprise edge combines the capabilities of Azure Static Web Apps, Azure Front Door, and Azure Content Delivery Network (CDN) into a single secure cloud CDN platform.
+Use Azure Static Web Apps enterprise-grade edge to enable faster page loads, enhance security, and optimize reliability for your global applications. Enterprise edge combines the capabilities of Azure Static Web Apps, Azure Front Door, and Azure Content Delivery Network (CDN) into a single secure cloud CDN platform.
Key features of Azure Static Web Apps enterprise-grade edge include:
* Optimized file compression.
-> [!NOTE]
-> Static Web Apps enterprise-grade edge is currently in preview.
-
## Caching

When enterprise-grade edge is enabled for your static web app, you benefit from caching at various levels.
az staticwebapp enterprise-edge enable -n my-static-webapp -g my-resource-group
## Limitations

-- Private Endpoint cannot be used with enterprise-grade edge.
+- Private Endpoint can't be used with enterprise-grade edge.
## Next steps
static-web-apps Languages Runtimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/languages-runtimes.md
+
+ Title: Supported languages and runtimes in Azure Static Web Apps
+description: Supported languages and runtimes in Azure Static Web Apps
++++ Last updated : 08/30/2022+++
+# Supported languages and runtimes in Azure Static Web Apps
+
+Azure Static Web Apps has two places where runtime and language versions matter: the front end and the API.
+
+| Runtime type | Description |
+|--|--|
+| [Front end](#front-end) | The runtime version used to run the build steps that generate the front end application. |
+| [API](#api) | The version and runtime of Azure Functions used in your web application. |
+
+## Front end
+
+You can specify the version used to build the front end of your static web app. Configuring a non-default version is typically necessary only if you need to target an older version.
+
+Specify the runtime version used to build the front end of your static web app in the `engines` section of the _package.json_ file.
+
+```json
+{
+ ...
+ "engines": {
+ "node": ">=14.0.0"
+ }
+}
+```
+
+## API
+
+The APIs in Azure Static Web Apps are supported by Azure Functions. Refer to the [Azure Functions supported languages and runtimes](/azure/azure-functions/supported-languages) for details.
+
+The following versions are supported for managed functions in Static Web Apps. If your application requires a version not listed, consider [bringing your own functions](./functions-bring-your-own.md).
++
+## Deprecations
+
+The following runtimes are deprecated in Azure Static Web Apps. For more information about changing your runtime, see [Specify API language runtime version in Azure Static Web Apps](https://azure.microsoft.com/updates/generally-available-specify-api-language-runtime-version-in-azure-static-web-apps/) and [Azure Functions runtime versions overview](/azure/azure-functions/functions-versions?tabs=azure-powershell%2Cin-process%2Cv4&pivots=programming-language-csharp#upgrade-your-local-project).
+
+- .NET Core 3.1
+- Node.js 12.x
storage Geo Redundant Design Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/geo-redundant-design-legacy.md
+
+ Title: Use geo-redundancy to design highly available applications (.NET v11 SDK)
+
+description: Learn how to use geo-redundant storage to design a highly available application using the .NET v11 SDK that is flexible enough to handle outages.
+++++ Last updated : 08/23/2022++++++
+# Use geo-redundancy to design highly available applications (.NET v11 SDK)
+
+A common feature of cloud-based infrastructures like Azure Storage is that they provide a highly available and durable platform for hosting data and applications. Developers of cloud-based applications must consider carefully how to leverage this platform to maximize those advantages for their users. Azure Storage offers geo-redundant storage to ensure high availability even in the event of a regional outage. Storage accounts configured for geo-redundant replication are synchronously replicated in the primary region, and then asynchronously replicated to a secondary region that is hundreds of miles away.
+
+Azure Storage offers two options for geo-redundant replication. The only difference between these two options is how data is replicated in the primary region:
+
+- [Geo-zone-redundant storage (GZRS)](storage-redundancy.md): Data is replicated synchronously across three Azure availability zones in the primary region using *zone-redundant storage (ZRS)*, then replicated asynchronously to the secondary region. For read access to data in the secondary region, enable read-access geo-zone-redundant storage (RA-GZRS).
+
+ Microsoft recommends using GZRS/RA-GZRS for scenarios that require maximum availability and durability.
+
+- [Geo-redundant storage (GRS)](storage-redundancy.md): Data is replicated synchronously three times in the primary region using *locally redundant storage (LRS)*, then replicated asynchronously to the secondary region. For read access to data in the secondary region, enable read-access geo-redundant storage (RA-GRS).
+
+This article shows how to design your application to handle an outage in the primary region. If the primary region becomes unavailable, your application can adapt to perform read operations against the secondary region instead. Make sure that your storage account is configured for RA-GRS or RA-GZRS before you get started.
+
+## Application design considerations when reading from the secondary
+
+The purpose of this article is to show you how to design an application that will continue to function (albeit in a limited capacity) even in the event of a major disaster at the primary data center. You can design your application to handle transient or long-running issues by reading from the secondary region when there is a problem that interferes with reading from the primary region. When the primary region is available again, your application can return to reading from the primary region.
+
+Keep in mind these key points when designing your application for RA-GRS or RA-GZRS:
+
+- Azure Storage maintains a read-only copy of the data you store in your primary region in a secondary region. As noted above, the storage service determines the location of the secondary region.
+
+- The read-only copy is [eventually consistent](https://en.wikipedia.org/wiki/Eventual_consistency) with the data in the primary region.
+
+- For blobs, tables, and queues, you can query the secondary region for a *Last Sync Time* value that tells you when the last replication from the primary to the secondary region occurred. (This is not supported for Azure Files, which doesn't have RA-GRS redundancy at this time.)
+
+- You can use the Storage Client Library to read and write data in either the primary or secondary region. You can also redirect read requests automatically to the secondary region if a read request to the primary region times out.
+
+- If the primary region becomes unavailable, you can initiate an account failover. When you fail over to the secondary region, the DNS entries pointing to the primary region are changed to point to the secondary region. After the failover is complete, write access is restored for GRS and RA-GRS accounts. For more information, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
+
+### Using eventually consistent data
+
+The proposed solution assumes that it is acceptable to return potentially stale data to the calling application. Because data in the secondary region is eventually consistent, it is possible the primary region may become inaccessible before an update to the secondary region has finished replicating.
+
+For example, suppose your customer submits an update successfully, but the primary region fails before the update is propagated to the secondary region. When the customer asks to read the data back, they receive the stale data from the secondary region instead of the updated data. When designing your application, you must decide whether this is acceptable, and if so, how you will message the customer.
+
+Later in this article, we show how to check the Last Sync Time for the secondary data to check whether the secondary is up-to-date.
+
+### Handling services separately or all together
+
+While unlikely, it is possible for one service to become unavailable while the other services are still fully functional. You can handle the retries and read-only mode for each service separately (blobs, queues, tables), or you can handle retries generically for all the storage services together.
+
+For example, if you use queues and blobs in your application, you may decide to put in separate code to handle retryable errors for each of these. Then if you get a retry from the blob service, but the queue service is still working, only the part of your application that handles blobs will be impacted. If you decide to handle all storage service retries generically and a call to the blob service returns a retryable error, then requests to both the blob service and the queue service will be impacted.
+
+Ultimately, this depends on the complexity of your application. You may decide not to handle the failures by service, but instead to redirect read requests for all storage services to the secondary region and run the application in read-only mode when you detect a problem with any storage service in the primary region.
+
+### Other considerations
+
+These are the other considerations we will discuss in the rest of this article.
+
+- Handling retries of read requests using the Circuit Breaker pattern
+
+- Eventually consistent data and the Last Sync Time
+
+- Testing
+
+## Running your application in read-only mode
+
+To effectively prepare for an outage in the primary region, you must be able to handle both failed read requests and failed update requests (with update in this case meaning inserts, updates, and deletions). If the primary region fails, read requests can be redirected to the secondary region. However, update requests cannot be redirected to the secondary because the secondary is read-only. For this reason, you need to design your application to run in read-only mode.
+
+For example, you can set a flag that is checked before any update requests are submitted to Azure Storage. When one of the update requests comes through, you can skip it and return an appropriate response to the customer. You may even want to disable certain features altogether until the problem is resolved and notify users that those features are temporarily unavailable.
+
+If you decide to handle errors for each service separately, you will also need to handle the ability to run your application in read-only mode by service. For example, you may have read-only flags for each service that can be enabled and disabled. Then you can handle the flag in the appropriate places in your code.
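+
+The following is a minimal sketch of this approach; the flag and method names are hypothetical:
+
+```csharp
+using System;
+
+// Hypothetical per-service read-only flags, checked before any update request is issued.
+public static class ReadOnlyMode
+{
+    public static volatile bool BlobsReadOnly;
+    public static volatile bool QueuesReadOnly;
+
+    public static bool TrySubmitBlobUpdate(Action submitUpdate)
+    {
+        if (BlobsReadOnly)
+        {
+            // Skip the write; the caller can tell the user that updates are temporarily unavailable.
+            return false;
+        }
+
+        submitUpdate();
+        return true;
+    }
+}
+```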
+
+Being able to run your application in read-only mode has another side benefit: it gives you the ability to ensure limited functionality during a major application upgrade. You can trigger your application to run in read-only mode and point to the secondary data center, ensuring nobody is accessing the data in the primary region while you're making upgrades.
+
+## Handling updates when running in read-only mode
+
+There are many ways to handle update requests when running in read-only mode. We won't cover this comprehensively, but generally, there are a couple of patterns to consider.
+
+- You can respond to your user and tell them you are not currently accepting updates. For example, a contact management system could enable customers to access contact information but not make updates.
+
+- You can enqueue your updates in another region. In this case, you would write your pending update requests to a queue in a different region, and then have a way to process those requests after the primary data center comes online again. In this scenario, you should let the customer know that the requested update is queued for later processing. A sketch of this approach appears after this list.
+
+- You can write your updates to a storage account in another region. Then when the primary data center comes back online, you can have a way to merge those updates into the primary data, depending on the structure of the data. For example, if you are creating separate files with a date/time stamp in the name, you can copy those files back to the primary region. This works for some workloads such as logging and IoT data.
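+
+The following is a minimal sketch of the queueing approach with the .NET v11 client library; the connection string placeholder, queue name, and message payload are hypothetical:
+
+```csharp
+using Microsoft.Azure.Storage;
+using Microsoft.Azure.Storage.Queue;
+
+// Hypothetical storage account in another region that holds pending updates
+CloudStorageAccount pendingUpdatesAccount = CloudStorageAccount.Parse("<CONNECTIONSTRINGFOROTHERREGION>");
+CloudQueueClient queueClient = pendingUpdatesAccount.CreateCloudQueueClient();
+
+CloudQueue pendingUpdatesQueue = queueClient.GetQueueReference("pending-updates");
+await pendingUpdatesQueue.CreateIfNotExistsAsync();
+
+// Enqueue a serialized update so it can be replayed after the primary region comes back online
+await pendingUpdatesQueue.AddMessageAsync(new CloudQueueMessage("{\"entityId\":\"12345\",\"operation\":\"update\"}"));
+```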
+
+## Handling retries
+
+The Azure Storage client library helps you determine which errors can be retried. For example, a 404 error (resource not found) would not be retried because retrying it is not likely to result in success. On the other hand, a 500 error can be retried because it is a server error, and the problem may simply be a transient issue. For more details, check out the [open source code for the ExponentialRetry class](https://github.com/Azure/azure-storage-net/blob/87b84b3d5ee884c7adc10e494e2c7060956515d0/Lib/Common/RetryPolicies/ExponentialRetry.cs) in the .NET storage client library. (Look for the ShouldRetry method.)
+
+### Read requests
+
+Read requests can be redirected to secondary storage if there is a problem with primary storage. As noted above in [Using Eventually Consistent Data](#using-eventually-consistent-data), it must be acceptable for your application to potentially read stale data. If you are using the storage client library to access data from the secondary, you can specify the retry behavior of a read request by setting a value for the **LocationMode** property to one of the following:
+
+- **PrimaryOnly** (the default)
+
+- **PrimaryThenSecondary**
+
+- **SecondaryOnly**
+
+- **SecondaryThenPrimary**
+
+When you set the **LocationMode** to **PrimaryThenSecondary**, if the initial read request to the primary endpoint fails with an error that can be retried, the client automatically makes another read request to the secondary endpoint. If the error is a server timeout, then the client will have to wait for the timeout to expire before it receives a retryable error from the service.
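+
+The following is a minimal sketch of setting the default **LocationMode** with the .NET v11 client library; the connection string value is a placeholder:
+
+```csharp
+using Microsoft.Azure.Storage;
+using Microsoft.Azure.Storage.Blob;
+using Microsoft.Azure.Storage.RetryPolicies;
+
+CloudStorageAccount account = CloudStorageAccount.Parse("<YOURCONNECTIONSTRING>");
+CloudBlobClient blobClient = account.CreateCloudBlobClient();
+
+// Try the primary endpoint first, then retry eligible read requests against the secondary endpoint.
+blobClient.DefaultRequestOptions.LocationMode = LocationMode.PrimaryThenSecondary;
+```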
+
+There are basically two scenarios to consider when you are deciding how to respond to a retryable error:
+
+- This is an isolated problem and subsequent requests to the primary endpoint will not return a retryable error. An example of where this might happen is when there is a transient network error.
+
+ In this scenario, there is no significant performance penalty in having **LocationMode** set to **PrimaryThenSecondary** as this only happens infrequently.
+
+- This is a problem with at least one of the storage services in the primary region and all subsequent requests to that service in the primary region are likely to return retryable errors for a period of time. An example of this is if the primary region is completely inaccessible.
+
+ In this scenario, there is a performance penalty because all your read requests will try the primary endpoint first, wait for the timeout to expire, then switch to the secondary endpoint.
+
+For these scenarios, you should identify that there is an ongoing issue with the primary endpoint and send all read requests directly to the secondary endpoint by setting the **LocationMode** property to **SecondaryOnly**. At this time, you should also change the application to run in read-only mode. This approach is known as the [Circuit Breaker Pattern](/azure/architecture/patterns/circuit-breaker).
+
+### Update requests
+
+The Circuit Breaker pattern can also be applied to update requests. However, update requests cannot be redirected to secondary storage, which is read-only. For these requests, you should leave the **LocationMode** property set to **PrimaryOnly** (the default). To handle these errors, you can apply a metric to these requests, such as 10 failures in a row, and when your threshold is met, switch the application into read-only mode. You can use the same methods for returning to update mode as those described in the next section about the Circuit Breaker pattern.
+
+## Circuit Breaker pattern
+
+Using the Circuit Breaker pattern in your application can prevent it from retrying an operation that is likely to fail repeatedly. It allows the application to continue to run rather than taking up time while the operation is retried exponentially. It also detects when the fault has been fixed, at which time the application can try the operation again.
+
+### How to implement the Circuit Breaker pattern
+
+To identify that there is an ongoing problem with a primary endpoint, you can monitor how frequently the client encounters retryable errors. Because each case is different, you have to decide on the threshold you want to use for the decision to switch to the secondary endpoint and run the application in read-only mode. For example, you could decide to perform the switch if there are 10 failures in a row with no successes. Another example is to switch if 90% of the requests in a 2-minute period fail.
+
+For the first scenario, you can simply keep a count of the failures, and if there is a success before reaching the maximum, set the count back to zero. For the second scenario, one way to implement it is to use the MemoryCache object (in .NET). For each request, add a CacheItem to the cache, set the value to success (1) or fail (0), and set the expiration time to 2 minutes from now (or whatever your time constraint is). When an entry's expiration time is reached, the entry is automatically removed. This will give you a rolling 2-minute window. Each time you make a request to the storage service, you first use a LINQ query across the MemoryCache object to calculate the percent success by summing the values and dividing by the count. When the percent success drops below some threshold (such as 10%), set the **LocationMode** property for read requests to **SecondaryOnly** and switch the application into read-only mode before continuing.
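+
+The following is a rough sketch of the rolling-window approach; it assumes the System.Runtime.Caching package, and the class and threshold names are hypothetical:
+
+```csharp
+using System;
+using System.Linq;
+using System.Runtime.Caching;
+
+// Hypothetical helper that tracks request outcomes over a rolling two-minute window.
+public static class RequestHealthTracker
+{
+    private static readonly MemoryCache cache = MemoryCache.Default;
+
+    public static void RecordOutcome(bool success)
+    {
+        // Each entry expires two minutes after it is added, which produces the rolling window.
+        cache.Add(Guid.NewGuid().ToString(), success ? 1 : 0,
+            new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.UtcNow.AddMinutes(2) });
+    }
+
+    public static double PercentSuccess()
+    {
+        // Sum the successes (1s) and divide by the total number of entries in the window.
+        var outcomes = cache.Select(entry => (int)entry.Value).ToList();
+        return outcomes.Count == 0 ? 100.0 : (100.0 * outcomes.Sum()) / outcomes.Count;
+    }
+}
+```
+
+When the success percentage drops below your threshold (such as 10%), switch read requests to **SecondaryOnly** and put the application into read-only mode.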
+
+The threshold of errors used to determine when to make the switch may vary from service to service in your application, so you should consider making them configurable parameters. This is also where you decide to handle retryable errors from each service separately or as one, as discussed previously.
+
+Another consideration is how to handle multiple instances of an application, and what to do when you detect retryable errors in each instance. For example, you may have 20 VMs running with the same application loaded. Do you handle each instance separately? If one instance starts having problems, do you want to limit the response to just that one instance, or do you want to try to have all instances respond in the same way when one instance has a problem? Handling the instances separately is much simpler than trying to coordinate the response across them, but how you do this depends on your application's architecture.
+
+### Options for monitoring the error frequency
+
+You have three main options for monitoring the frequency of retries in the primary region in order to determine when to switch over to the secondary region and change the application to run in read-only mode.
+
+- Add a handler for the [**Retrying**](/dotnet/api/microsoft.azure.cosmos.table.operationcontext.retrying) event on the [**OperationContext**](/java/api/com.microsoft.azure.storage.operationcontext) object you pass to your storage requests. This is the method displayed in this article and used in the accompanying sample. These events fire whenever the client retries a request, enabling you to track how often the client encounters retryable errors on a primary endpoint.
+
+ ```csharp
+ operationContext.Retrying += (sender, arguments) =>
+ {
+ // Retrying in the primary region
+ if (arguments.Request.Host == primaryhostname)
+ ...
+ };
+ ```
+
+
+
+- In the [**Evaluate**](/dotnet/api/microsoft.azure.cosmos.table.iextendedretrypolicy.evaluate) method in a custom retry policy, you can run custom code whenever a retry takes place. In addition to recording when a retry happens, this also gives you the opportunity to modify your retry behavior.
+
+ ```csharp
+ public RetryInfo Evaluate(RetryContext retryContext,
+ OperationContext operationContext)
+ {
+ var statusCode = retryContext.LastRequestResult.HttpStatusCode;
+ if (retryContext.CurrentRetryCount >= this.maximumAttempts
+ || ((statusCode >= 300 && statusCode < 500 && statusCode != 408)
+ || statusCode == 501 // Not Implemented
+ || statusCode == 505 // Version Not Supported
+ ))
+ {
+ // Do not retry
+ return null;
+ }
+
+ // Monitor retries in the primary location
+ ...
+
+ // Determine RetryInterval and TargetLocation
+ RetryInfo info =
+ CreateRetryInfo(retryContext.CurrentRetryCount);
+
+ return info;
+ }
+ ```
+
+
+
+- The third approach is to implement a custom monitoring component in your application that continually pings your primary storage endpoint with dummy read requests (such as reading a small blob) to determine its health. This would take up some resources, but not a significant amount. When a problem is discovered that reaches your threshold, you would then perform the switch to **SecondaryOnly** and read-only mode.
+
+At some point, you will want to switch back to using the primary endpoint and allowing updates. If using one of the first two methods listed above, you could simply switch back to the primary endpoint and enable update mode after an arbitrarily selected amount of time or number of operations has been performed. You can then let it go through the retry logic again. If the problem has been fixed, it will continue to use the primary endpoint and allow updates. If there is still a problem, it will once more switch back to the secondary endpoint and read-only mode after failing the criteria you've set.
+
+For the third scenario, when pinging the primary storage endpoint becomes successful again, you can trigger the switch back to **PrimaryOnly** and continue allowing updates.
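+
+The following is a minimal sketch of such a probe with the .NET v11 client library; the container and blob names are hypothetical:
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.Azure.Storage;
+using Microsoft.Azure.Storage.Blob;
+
+// Hypothetical health probe: read a small, known blob from the primary endpoint.
+public static class PrimaryHealthProbe
+{
+    public static async Task<bool> PrimaryIsHealthyAsync(CloudBlobClient blobClient)
+    {
+        try
+        {
+            CloudBlockBlob probeBlob = blobClient
+                .GetContainerReference("health")     // hypothetical container
+                .GetBlockBlobReference("ping.txt");  // hypothetical small blob
+
+            await probeBlob.DownloadTextAsync();
+            return true;
+        }
+        catch (StorageException)
+        {
+            // A failed or timed-out read counts against your health threshold.
+            return false;
+        }
+    }
+}
+```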
+
+## Handling eventually consistent data
+
+Geo-redundant storage works by replicating transactions from the primary to the secondary region. This replication process guarantees that the data in the secondary region is *eventually consistent*. This means that all the transactions in the primary region will eventually appear in the secondary region, but that there may be a lag before they appear, and that there is no guarantee the transactions arrive in the secondary region in the same order as that in which they were originally applied in the primary region. If your transactions arrive in the secondary region out of order, you *may* consider your data in the secondary region to be in an inconsistent state until the service catches up.
+
+The following table shows an example of what might happen when you update the details of an employee to make them a member of the *administrators* role. For the sake of this example, this requires you to update the **employee** entity and update an **administrator role** entity with a count of the total number of administrators. Notice how the updates are applied out of order in the secondary region.
+
+| **Time** | **Transaction** | **Replication** | **Last Sync Time** | **Result** |
+|--|--|--|--|--|
+| T0 | Transaction A: <br> Insert employee <br> entity in primary | | | Transaction A inserted to primary,<br> not replicated yet. |
+| T1 | | Transaction A <br> replicated to<br> secondary | T1 | Transaction A replicated to secondary. <br>Last Sync Time updated. |
+| T2 | Transaction B:<br>Update<br> employee entity<br> in primary | | T1 | Transaction B written to primary,<br> not replicated yet. |
+| T3 | Transaction C:<br> Update <br>administrator<br>role entity in<br>primary | | T1 | Transaction C written to primary,<br> not replicated yet. |
+| *T4* | | Transaction C <br>replicated to<br> secondary | T1 | Transaction C replicated to secondary.<br>LastSyncTime not updated because <br>transaction B has not been replicated yet.|
+| *T5* | Read entities <br>from secondary | | T1 | You get the stale value for employee <br> entity because transaction B hasn't <br> replicated yet. You get the new value for<br> administrator role entity because C has<br> replicated. Last Sync Time still hasn't<br> been updated because transaction B<br> hasn't replicated. You can tell the<br>administrator role entity is inconsistent <br>because the entity date/time is after <br>the Last Sync Time. |
+| *T6* | | Transaction B<br> replicated to<br> secondary | T6 | *T6*: All transactions through C have <br>been replicated, Last Sync Time<br> is updated. |
+
+In this example, assume the client switches to reading from the secondary region at T5. It can successfully read the **administrator role** entity at this time, but the entity contains a value for the count of administrators that is not consistent with the number of **employee** entities that are marked as administrators in the secondary region at this time. Your client could simply display this value, with the risk that it is inconsistent information. Alternatively, the client could attempt to determine that the **administrator role** is in a potentially inconsistent state because the updates have happened out of order, and then inform the user of this fact.
+
+To recognize that it has potentially inconsistent data, the client can use the value of the *Last Sync Time* that you can get at any time by querying a storage service. This tells you the time when the data in the secondary region was last consistent and when the service had applied all the transactions prior to that point in time. In the example shown above, after the service inserts the **employee** entity in the secondary region, the last sync time is set to *T1*. It remains at *T1* until the service updates the **employee** entity in the secondary region, at which point it is set to *T6*. If the client retrieves the last sync time when it reads the entity at *T5*, it can compare it with the timestamp on the entity. If the timestamp on the entity is later than the last sync time, then the entity is in a potentially inconsistent state, and you can take the appropriate action for your application. Using this field requires that you know when the last update to the primary was completed.
+
+To learn how to check the last sync time, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md).
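+
+As a rough sketch, you can retrieve the Last Sync Time with the .NET v11 client library by requesting service stats from the secondary endpoint; the connection string value is a placeholder:
+
+```csharp
+using System;
+using Microsoft.Azure.Storage;
+using Microsoft.Azure.Storage.Blob;
+using Microsoft.Azure.Storage.RetryPolicies;
+using Microsoft.Azure.Storage.Shared.Protocol;
+
+CloudStorageAccount account = CloudStorageAccount.Parse("<YOURCONNECTIONSTRING>");
+CloudBlobClient blobClient = account.CreateCloudBlobClient();
+
+// Service stats, including the Last Sync Time, are only available from the secondary endpoint.
+ServiceStats stats = await blobClient.GetServiceStatsAsync(
+    new BlobRequestOptions { LocationMode = LocationMode.SecondaryOnly },
+    operationContext: null);
+
+DateTimeOffset? lastSyncTime = stats.GeoReplication?.LastSyncTime;
+```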
+
+## Testing
+
+It's important to test that your application behaves as expected when it encounters retryable errors. For example, you need to test that the application switches to the secondary and into read-only mode when it detects a problem, and switches back when the primary region becomes available again. To do this, you need a way to simulate retryable errors and control how often they occur.
+
+You can use [Fiddler](https://www.telerik.com/fiddler) to intercept and modify HTTP responses in a script. This script can identify responses that come from your primary endpoint and change the HTTP status code to one that the Storage Client Library recognizes as a retryable error. This code snippet shows a simple example of a Fiddler script that intercepts responses to read requests against the **employeedata** table to return a 502 status:
+
+```
+static function OnBeforeResponse(oSession: Session) {
+ ...
+    if ((oSession.hostname == "[yourstorageaccount].table.core.windows.net")
+ && (oSession.PathAndQuery.StartsWith("/employeedata?$filter"))) {
+ oSession.responseCode = 502;
+ }
+}
+```
+
+You could extend this example to intercept a wider range of requests and only change the **responseCode** on some of them to better simulate a real-world scenario. For more information about customizing Fiddler scripts, see [Modifying a Request or Response](https://docs.telerik.com/fiddler/KnowledgeBase/FiddlerScript/ModifyRequestOrResponse) in the Fiddler documentation.
+
+If you have made the thresholds for switching your application to read-only mode configurable, it will be easier to test the behavior with non-production transaction volumes.
+++
+## Next steps
+
+For a complete sample showing how to make the switch back and forth between the primary and secondary endpoints, see [Azure Samples - Using the Circuit Breaker Pattern with RA-GRS storage](https://github.com/Azure-Samples/storage-dotnet-circuit-breaker-pattern-ha-apps-using-ra-grs).
storage Geo Redundant Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/geo-redundant-design.md
Previously updated : 07/19/2022 Last updated : 08/23/2022
# Use geo-redundancy to design highly available applications
-A common feature of cloud-based infrastructures like Azure Storage is that they provide a highly available and durable platform for hosting data and applications. Developers of cloud-based applications must consider carefully how to leverage this platform to maximize those advantages for their users. Azure Storage offers geo-redundant storage to ensure high availability even in the event of a regional outage. Storage accounts configured for geo-redundant replication are synchronously replicated in the primary region, and then asynchronously replicated to a secondary region that is hundreds of miles away.
+Cloud-based infrastructures like Azure Storage provide a highly available and durable platform for hosting data and applications. Developers of cloud-based applications must consider carefully how to leverage this platform to maximize those advantages for their users. Azure Storage offers geo-redundancy options to ensure high availability even during a regional outage. Storage accounts configured for geo-redundant replication are synchronously replicated in the primary region, and asynchronously replicated to a secondary region that is hundreds of miles away.
-Azure Storage offers two options for geo-redundant replication. The only difference between these two options is how data is replicated in the primary region:
+Azure Storage offers two options for geo-redundant replication: [Geo-redundant storage (GRS)](storage-redundancy.md#geo-redundant-storage) and [Geo-zone-redundant storage (GZRS)](storage-redundancy.md#geo-zone-redundant-storage). To make use of the Azure Storage geo-redundancy options, make sure that your storage account is configured for read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS). If it's not, you can learn more about how to [change your storage account replication type](redundancy-migration.md).
-- [Geo-zone-redundant storage (GZRS)](storage-redundancy.md): Data is replicated synchronously across three Azure availability zones in the primary region using *zone-redundant storage (ZRS)*, then replicated asynchronously to the secondary region. For read access to data in the secondary region, enable read-access geo-zone-redundant storage (RA-GZRS).
+This article shows how to design an application that will continue to function, albeit in a limited capacity, even when there's a significant outage in the primary region. If the primary region becomes unavailable, your application can switch seamlessly to perform read operations against the secondary region until the primary region is responsive again.
- Microsoft recommends using GZRS/RA-GZRS for scenarios that require maximum availability and durability.
+## Application design considerations
-- [Geo-redundant storage (GRS)](storage-redundancy.md): Data is replicated synchronously three times in the primary region using *locally redundant storage (LRS)*, then replicated asynchronously to the secondary region. For read access to data in the secondary region, enable read-access geo-redundant storage (RA-GRS).
+You can design your application to handle transient faults or significant outages by reading from the secondary region when there's an issue that interferes with reading from the primary region. When the primary region is available again, your application can return to reading from the primary region.
-This article shows how to design your application to handle an outage in the primary region. If the primary region becomes unavailable, your application can adapt to perform read operations against the secondary region instead. Make sure that your storage account is configured for RA-GRS or RA-GZRS before you get started.
+Keep in mind these key considerations when designing your application for availability and resiliency using RA-GRS or RA-GZRS:
-## Application design considerations when reading from the secondary
+- A read-only copy of the data you store in the primary region is asynchronously replicated in a secondary region. This asynchronous replication means that the read-only copy in the secondary region is [eventually consistent](https://en.wikipedia.org/wiki/Eventual_consistency) with the data in the primary region. The storage service determines the location of the secondary region.
-The purpose of this article is to show you how to design an application that will continue to function (albeit in a limited capacity) even in the event of a major disaster at the primary data center. You can design your application to handle transient or long-running issues by reading from the secondary region when there is a problem that interferes with reading from the primary region. When the primary region is available again, your application can return to reading from the primary region.
-
-Keep in mind these key points when designing your application for RA-GRS or RA-GZRS:
--- Azure Storage maintains a read-only copy of the data you store in your primary region in a secondary region. As noted above, the storage service determines the location of the secondary region.--- The read-only copy is [eventually consistent](https://en.wikipedia.org/wiki/Eventual_consistency) with the data in the primary region.--- For blobs, tables, and queues, you can query the secondary region for a *Last Sync Time* value that tells you when the last replication from the primary to the secondary region occurred. (This is not supported for Azure Files, which doesn't have RA-GRS redundancy at this time.)--- You can use the Storage Client Library to read and write data in either the primary or secondary region. You can also redirect read requests automatically to the secondary region if a read request to the primary region times out.
+- You can use the Azure Storage client libraries to perform read and update requests against the primary region endpoint. If the primary region is unavailable, you can automatically redirect read requests to the secondary region. You can also configure your app to send read requests directly to the secondary region, if desired, even when the primary region is available.
- If the primary region becomes unavailable, you can initiate an account failover. When you fail over to the secondary region, the DNS entries pointing to the primary region are changed to point to the secondary region. After the failover is complete, write access is restored for GRS and RA-GRS accounts. For more information, see [Disaster recovery and storage account failover](storage-disaster-recovery-guidance.md).
-### Using eventually consistent data
+### Working with eventually consistent data
-The proposed solution assumes that it is acceptable to return potentially stale data to the calling application. Because data in the secondary region is eventually consistent, it is possible the primary region may become inaccessible before an update to the secondary region has finished replicating.
+The proposed solution assumes that it's acceptable to return potentially stale data to the calling application. Because data in the secondary region is eventually consistent, it's possible that the primary region may become inaccessible before an update to the secondary region has finished replicating.
-For example, suppose your customer submits an update successfully, but the primary region fails before the update is propagated to the secondary region. When the customer asks to read the data back, they receive the stale data from the secondary region instead of the updated data. When designing your application, you must decide whether this is acceptable, and if so, how you will message the customer.
+For example, suppose your customer submits an update successfully, but the primary region fails before the update is propagated to the secondary region. When the customer asks to read the data back, they receive the stale data from the secondary region instead of the updated data. When designing your application, you must decide whether or not this behavior is acceptable. If it is, you also need to consider how to notify the user.
-Later in this article, we show how to check the Last Sync Time for the secondary data to check whether the secondary is up-to-date.
+Later in this article, you'll learn more about [handling eventually consistent data](#handling-eventually-consistent-data) and how to check the **Last Sync Time** property to evaluate any discrepancies between data in the primary and secondary regions.
### Handling services separately or all together
-While unlikely, it is possible for one service to become unavailable while the other services are still fully functional. You can handle the retries and read-only mode for each service separately (blobs, queues, tables), or you can handle retries generically for all the storage services together.
-
-For example, if you use queues and blobs in your application, you may decide to put in separate code to handle retryable errors for each of these. Then if you get a retry from the blob service, but the queue service is still working, only the part of your application that handles blobs will be impacted. If you decide to handle all storage service retries generically and a call to the blob service returns a retryable error, then requests to both the blob service and the queue service will be impacted.
-
-Ultimately, this depends on the complexity of your application. You may decide not to handle the failures by service, but instead to redirect read requests for all storage services to the secondary region and run the application in read-only mode when you detect a problem with any storage service in the primary region.
-
-### Other considerations
-
-These are the other considerations we will discuss in the rest of this article.
--- Handling retries of read requests using the Circuit Breaker pattern
+While unlikely, it's possible for one service (blobs, queues, tables, or files) to become unavailable while the other services are still fully functional. You can handle the retries for each service separately, or you can handle retries generically for all the storage services together.
-- Eventually-consistent data and the Last Sync Time
+For example, if you use queues and blobs in your application, you may decide to put in separate code to handle retryable errors for each service. That way, a blob service error will only affect the part of your application that deals with blobs, leaving queues to continue running as normal. If, however, you decide to handle all storage service retries together, then requests to both blob and queue services will be affected if either service returns a retryable error.
-- Testing
+Ultimately, this decision depends on the complexity of your application. You may prefer to handle failures by service to limit the impact of retries. Or you may decide to redirect read requests for all storage services to the secondary region when you detect a problem with any storage service in the primary region.
## Running your application in read-only mode
-To effectively prepare for an outage in the primary region, you must be able to handle both failed read requests and failed update requests (with update in this case meaning inserts, updates, and deletions). If the primary region fails, read requests can be redirected to the secondary region. However, update requests cannot be redirected to the secondary because the secondary is read-only. For this reason, you need to design your application to run in read-only mode.
+To effectively prepare for an outage in the primary region, your application must be able to handle both failed read requests and failed update requests. If the primary region fails, read requests can be redirected to the secondary region. However, update requests can't be redirected because the replicated data in the secondary region is read-only. For this reason, you need to design your application to be able to run in read-only mode.
-For example, you can set a flag that is checked before any update requests are submitted to Azure Storage. When one of the update requests comes through, you can skip it and return an appropriate response to the customer. You may even want to disable certain features altogether until the problem is resolved and notify users that those features are temporarily unavailable.
+For example, you can set a flag that is checked before any update requests are submitted to Azure Storage. When an update request comes through, you can skip the request and return an appropriate response to the user. You might even choose to disable certain features altogether until the problem is resolved, and notify users that the features are temporarily unavailable.
-If you decide to handle errors for each service separately, you will also need to handle the ability to run your application in read-only mode by service. For example, you may have read-only flags for each service that can be enabled and disabled. Then you can handle the flag in the appropriate places in your code.
+If you decide to handle errors for each service separately, you'll also need to handle the ability to run your application in read-only mode by service. For example, you may set up read-only flags for each service. Then you can enable or disable the flags in the code, as needed.
-Being able to run your application in read-only mode has another side benefit ΓÇô it gives you the ability to ensure limited functionality during a major application upgrade. You can trigger your application to run in read-only mode and point to the secondary data center, ensuring nobody is accessing the data in the primary region while you're making upgrades.
+Being able to run your application in read-only mode also gives you the ability to ensure limited functionality during a major application upgrade. You can trigger your application to run in read-only mode and point to the secondary data center, ensuring nobody is accessing the data in the primary region while you're making upgrades.
-## Handling updates when running in read-only mode
+### Handling updates when running in read-only mode
-There are many ways to handle update requests when running in read-only mode. We won't cover this comprehensively, but generally, there are a couple of patterns that you consider.
+There are many ways to handle update requests when running in read-only mode. This section focuses on a few general patterns to consider.
-- You can respond to your user and tell them you are not currently accepting updates. For example, a contact management system could enable customers to access contact information but not make updates.
+- You can respond to the user and notify them that update requests are not currently being processed. For example, a contact management system could enable users to access contact information but not make updates.
-- You can enqueue your updates in another region. In this case, you would write your pending update requests to a queue in a different region, and then have a way to process those requests after the primary data center comes online again. In this scenario, you should let the customer know that the update requested is queued for later processing.
+- You can enqueue your updates in another region. In this case, you would write your pending update requests to a queue in a different region, and then process those requests after the primary data center comes online again. In this scenario, you should let the user know that the update request is queued for later processing. A sketch of this approach appears after this list.
-- You can write your updates to a storage account in another region. Then when the primary data center comes back online, you can have a way to merge those updates into the primary data, depending on the structure of the data. For example, if you are creating separate files with a date/time stamp in the name, you can copy those files back to the primary region. This works for some workloads such as logging and iOT data.
+- You can write your updates to a storage account in another region. When the primary region comes back online, you can merge those updates into the primary data, depending on the structure of the data. For example, if you're creating separate files with a date/time stamp in the name, you can copy those files back to the primary region. This solution can apply to workloads such as logging and IoT data.
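+
+The following is a minimal sketch of the queueing approach; the account name, queue name, and payload shape are hypothetical:
+
+```csharp
+using System;
+using System.Text.Json;
+using Azure.Identity;
+using Azure.Storage.Queues;
+
+// Hypothetical storage account in another region used to hold pending updates
+string pendingUpdatesAccountName = "<STORAGEACCOUNTINANOTHERREGION>";
+Uri queueUri = new Uri($"https://{pendingUpdatesAccountName}.queue.core.windows.net/pending-updates");
+
+QueueClient pendingUpdatesQueue = new QueueClient(queueUri, new DefaultAzureCredential());
+await pendingUpdatesQueue.CreateIfNotExistsAsync();
+
+// Serialize the update so it can be replayed after the primary region comes back online
+var update = new { EntityId = "12345", Operation = "Update", SubmittedOn = DateTimeOffset.UtcNow };
+await pendingUpdatesQueue.SendMessageAsync(JsonSerializer.Serialize(update));
+```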
## Handling retries
-The Azure Storage client library helps you determine which errors can be retried. For example, a 404 error (resource not found) would not be retried because retrying it is not likely to result in success. On the other hand, a 500 error can be retried because it is a server error, and the problem may simply be a transient issue. For more details, check out the [open source code for the ExponentialRetry class](https://github.com/Azure/azure-storage-net/blob/87b84b3d5ee884c7adc10e494e2c7060956515d0/Lib/Common/RetryPolicies/ExponentialRetry.cs) in the .NET storage client library. (Look for the ShouldRetry method.)
+Applications that communicate with services running in the cloud must be sensitive to unplanned events and faults that can occur. These faults can be transient or persistent, ranging from a momentary loss of connectivity to a significant outage due to a natural disaster. It's important to design cloud applications with appropriate retry handling to maximize availability and improve overall application stability.
### Read requests
-Read requests can be redirected to secondary storage if there is a problem with primary storage. As noted above in [Using Eventually Consistent Data](#using-eventually-consistent-data), it must be acceptable for your application to potentially read stale data. If you are using the storage client library to access data from the secondary, you can specify the retry behavior of a read request by setting a value for the **LocationMode** property to one of the following:
+If the primary region becomes unavailable, read requests can be redirected to secondary storage. As noted earlier, it must be acceptable for your application to potentially read stale data. The Azure Storage client library offers options for handling retries and redirecting read requests to a secondary region.
-- **PrimaryOnly** (the default)
+In this example, the retry handling for Blob storage is configured in the `BlobClientOptions` class and will apply to the `BlobServiceClient` object we create using these configuration options. This configuration is a **primary then secondary** approach, where read request retries from the primary region are redirected to the secondary region. This approach is best when failures in the primary region are expected to be temporary.
-- **PrimaryThenSecondary**
+```csharp
+string accountName = "<YOURSTORAGEACCOUNTNAME>";
+Uri primaryAccountUri = new Uri($"https://{accountName}.blob.core.windows.net/");
+Uri secondaryAccountUri = new Uri($"https://{accountName}-secondary.blob.core.windows.net/");
-- **SecondaryOnly**
+// Provide the client configuration options for connecting to Azure Blob storage
+BlobClientOptions blobClientOptions = new BlobClientOptions()
+{
+ Retry = {
+ // The delay between retry attempts for a fixed approach or the delay
+ // on which to base calculations for a backoff-based approach
+ Delay = TimeSpan.FromSeconds(2),
-- **SecondaryThenPrimary**-
-When you set the **LocationMode** to **PrimaryThenSecondary**, if the initial read request to the primary endpoint fails with an error that can be retried, the client automatically makes another read request to the secondary endpoint. If the error is a server timeout, then the client will have to wait for the timeout to expire before it receives a retryable error from the service.
-
-There are basically two scenarios to consider when you are deciding how to respond to a retryable error:
--- This is an isolated problem and subsequent requests to the primary endpoint will not return a retryable error. An example of where this might happen is when there is a transient network error.-
- In this scenario, there is no significant performance penalty in having **LocationMode** set to **PrimaryThenSecondary** as this only happens infrequently.
--- This is a problem with at least one of the storage services in the primary region and all subsequent requests to that service in the primary region are likely to return retryable errors for a period of time. An example of this is if the primary region is completely inaccessible.-
- In this scenario, there is a performance penalty because all your read requests will try the primary endpoint first, wait for the timeout to expire, then switch to the secondary endpoint.
-
-For these scenarios, you should identify that there is an ongoing issue with the primary endpoint and send all read requests directly to the secondary endpoint by setting the **LocationMode** property to **SecondaryOnly**. At this time, you should also change the application to run in read-only mode. This approach is known as the [Circuit Breaker Pattern](/azure/architecture/patterns/circuit-breaker).
-
-### Update requests
+ // The maximum number of retry attempts before giving up
+ MaxRetries = 5,
-The Circuit Breaker pattern can also be applied to update requests. However, update requests cannot be redirected to secondary storage, which is read-only. For these requests, you should leave the **LocationMode** property set to **PrimaryOnly** (the default). To handle these errors, you can apply a metric to these requests, such as 10 failures in a row, and when your threshold is met, switch the application into read-only mode. You can use the same methods for returning to update mode as those described below in the next section about the Circuit Breaker pattern.
+ // The approach to use for calculating retry delays
+ Mode = RetryMode.Exponential,
-## Circuit Breaker pattern
+ // The maximum permissible delay between retry attempts
+ MaxDelay = TimeSpan.FromSeconds(10)
+ },
-Using the Circuit Breaker pattern in your application can prevent it from retrying an operation that is likely to fail repeatedly. It allows the application to continue to run rather than taking up time while the operation is retried exponentially. It also detects when the fault has been fixed, at which time the application can try the operation again.
+ // If the GeoRedundantSecondaryUri property is set, the secondary Uri will be used for
+ // GET or HEAD requests during retries.
+ // If the status of the response from the secondary Uri is a 404, then subsequent retries
+ // for the request will not use the secondary Uri again, as this indicates that the resource
+ // may not have propagated there yet.
+ // Otherwise, subsequent retries will alternate back and forth between primary and secondary Uri.
+ GeoRedundantSecondaryUri = secondaryAccountUri
+};
-### How to implement the circuit breaker pattern
-
-To identify that there is an ongoing problem with a primary endpoint, you can monitor how frequently the client encounters retryable errors. Because each case is different, you have to decide on the threshold you want to use for the decision to switch to the secondary endpoint and run the application in read-only mode. For example, you could decide to perform the switch if there are 10 failures in a row with no successes. Another example is to switch if 90% of the requests in a 2-minute period fail.
-
-For the first scenario, you can simply keep a count of the failures, and if there is a success before reaching the maximum, set the count back to zero. For the second scenario, one way to implement it is to use the MemoryCache object (in .NET). For each request, add a CacheItem to the cache, set the value to success (1) or fail (0), and set the expiration time to 2 minutes from now (or whatever your time constraint is). When an entry's expiration time is reached, the entry is automatically removed. This will give you a rolling 2-minute window. Each time you make a request to the storage service, you first use a Linq query across the MemoryCache object to calculate the percent success by summing the values and dividing by the count. When the percent success drops below some threshold (such as 10%), set the **LocationMode** property for read requests to **SecondaryOnly** and switch the application into read-only mode before continuing.
-
-The threshold of errors used to determine when to make the switch may vary from service to service in your application, so you should consider making them configurable parameters. This is also where you decide to handle retryable errors from each service separately or as one, as discussed previously.
-
-Another consideration is how to handle multiple instances of an application, and what to do when you detect retryable errors in each instance. For example, you may have 20 VMs running with the same application loaded. Do you handle each instance separately? If one instance starts having problems, do you want to limit the response to just that one instance, or do you want to try to have all instances respond in the same way when one instance has a problem? Handling the instances separately is much simpler than trying to coordinate the response across them, but how you do this depends on your application's architecture.
-
-### Options for monitoring the error frequency
-
-You have three main options for monitoring the frequency of retries in the primary region in order to determine when to switch over to the secondary region and change the application to run in read-only mode.
--- Add a handler for the [**Retrying**](/dotnet/api/microsoft.azure.cosmos.table.operationcontext.retrying) event on the [**OperationContext**](/java/api/com.microsoft.azure.storage.operationcontext) object you pass to your storage requests. This is the method displayed in this article and used in the accompanying sample. These events fire whenever the client retries a request, enabling you to track how often the client encounters retryable errors on a primary endpoint.-
- # [.NET v12 SDK](#tab/current)
-
- We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-
- # [.NET v11 SDK](#tab/legacy)
+// Create a BlobServiceClient object using the configuration options above
+BlobServiceClient blobServiceClient = new BlobServiceClient(primaryAccountUri, new DefaultAzureCredential(), blobClientOptions);
+```
- ```csharp
- operationContext.Retrying += (sender, arguments) =>
- {
- // Retrying in the primary region
- if (arguments.Request.Host == primaryhostname)
- ...
- };
- ```
+If you determine that the primary region is likely to be unavailable for a long period of time, you can configure all read requests to point at the secondary region. This configuration is a **secondary only** approach. As discussed earlier, you'll need a strategy to handle update requests during this time, and a way to inform users that only read requests are being processed. In this example, we create a new instance of `BlobServiceClient` which uses the secondary region endpoint.
-
+```csharp
+string accountName = "<YOURSTORAGEACCOUNTNAME>";
+Uri primaryAccountUri = new Uri($"https://{accountName}.blob.core.windows.net/");
+Uri secondaryAccountUri = new Uri($"https://{accountName}-secondary.blob.core.windows.net/");
-- In the [**Evaluate**](/dotnet/api/microsoft.azure.cosmos.table.iextendedretrypolicy.evaluate) method in a custom retry policy, you can run custom code whenever a retry takes place. In addition to recording when a retry happens, this also gives you the opportunity to modify your retry behavior.
+// Create a BlobServiceClient object pointed at the secondary Uri
+// Use blobServiceClientSecondary only when issuing read requests, as secondary storage is read-only
+BlobServiceClient blobServiceClientSecondary = new BlobServiceClient(secondaryAccountUri, new DefaultAzureCredential(), blobClientOptions);
+```
- # [.NET v12 SDK](#tab/current)
+Knowing when to switch to read-only mode and **secondary only** requests is part of an architectural design pattern called the [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker), which will be discussed in a later section.
- We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
+### Update requests
- # [.NET v11 SDK](#tab/legacy)
+Update requests can't be redirected to secondary storage, which is read-only. As described earlier, your application needs to be able to [handle update requests](#handling-updates-when-running-in-read-only-mode) when the primary region is unavailable.
- ```csharp
- public RetryInfo Evaluate(RetryContext retryContext,
- OperationContext operationContext)
- {
- var statusCode = retryContext.LastRequestResult.HttpStatusCode;
- if (retryContext.CurrentRetryCount >= this.maximumAttempts
- || ((statusCode >= 300 && statusCode < 500 && statusCode != 408)
- || statusCode == 501 // Not Implemented
- || statusCode == 505 // Version Not Supported
- ))
- {
- // Do not retry
- return null;
- }
+The Circuit Breaker pattern can also be applied to update requests. To handle update request errors, you could set a threshold in the code, such as 10 consecutive failures, and track the number of failures for requests to the primary region. Once the threshold is met, you can switch the application to read-only mode so that update requests to the primary region are no longer issued.
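+As a rough illustration, a sketch of this idea is shown below. The class name, method names, threshold value, and the choice to reject updates outright (rather than queue them) are illustrative assumptions, not prescribed by this guidance.
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using Azure;
+
+// Minimal sketch: stop issuing update requests to the primary region after a run of
+// consecutive failures. The threshold and names are illustrative assumptions.
+public class UpdateCircuitBreaker
+{
+    private const int FailureThreshold = 10;   // assumed threshold
+    private int _consecutiveFailures;
+
+    // When true, the application should run in read-only mode and stop sending updates.
+    public bool IsReadOnly { get; private set; }
+
+    public async Task ExecuteUpdateAsync(Func<Task> updateOperation)
+    {
+        if (IsReadOnly)
+        {
+            // Defer, queue, or reject the update while the primary region is unavailable.
+            throw new InvalidOperationException("The application is running in read-only mode.");
+        }
+
+        try
+        {
+            await updateOperation();
+            _consecutiveFailures = 0;
+        }
+        catch (RequestFailedException)
+        {
+            if (++_consecutiveFailures >= FailureThreshold)
+            {
+                IsReadOnly = true;   // threshold met: switch the application to read-only mode
+            }
+            throw;
+        }
+    }
+}
+```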
- // Monitor retries in the primary location
- ...
+### How to implement the Circuit Breaker pattern
- // Determine RetryInterval and TargetLocation
- RetryInfo info =
- CreateRetryInfo(retryContext.CurrentRetryCount);
+Handling failures that may take a variable amount of time to recover from is part of an architectural design pattern called the [Circuit Breaker pattern](/azure/architecture/patterns/circuit-breaker). Proper implementation of this pattern can prevent an application from repeatedly trying to execute an operation that's likely to fail, thereby improving application stability and resiliency.
- return info;
- }
- ```
+One aspect of the Circuit Breaker pattern is identifying when there's an ongoing problem with a primary endpoint. To make this determination, you can monitor how frequently the client encounters retryable errors. Because each scenario is different, you need to determine an appropriate threshold to use for the decision to switch to the secondary endpoint and run the application in read-only mode.
-
+For example, you could decide to perform the switch if there are 10 consecutive failures in the primary region. You can track this by keeping a count of the failures in the code. If there's a success before reaching the threshold, set the count back to zero. If the count reaches the threshold, then switch the application to use the secondary region for read requests.
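+As a rough sketch, and reusing the primary and secondary `BlobServiceClient` instances created earlier, the counter could look like the following. The type name, member names, and threshold value are illustrative assumptions.
+
+```csharp
+using System.Threading;
+using Azure.Storage.Blobs;
+
+// Minimal sketch: count consecutive failures against the primary endpoint and switch
+// read requests to the secondary endpoint once an assumed threshold is reached.
+public class ReadEndpointSelector
+{
+    private const int FailureThreshold = 10;   // assumed threshold
+    private readonly BlobServiceClient _primary;
+    private readonly BlobServiceClient _secondary;
+    private int _consecutiveFailures;
+    private volatile bool _useSecondary;
+
+    public ReadEndpointSelector(BlobServiceClient primary, BlobServiceClient secondary)
+    {
+        _primary = primary;
+        _secondary = secondary;
+    }
+
+    // The client to use for the next read request.
+    public BlobServiceClient CurrentReadClient => _useSecondary ? _secondary : _primary;
+
+    // Call after a successful request to the primary endpoint.
+    public void RecordSuccess() => Interlocked.Exchange(ref _consecutiveFailures, 0);
+
+    // Call after a retryable failure against the primary endpoint.
+    public void RecordFailure()
+    {
+        if (Interlocked.Increment(ref _consecutiveFailures) >= FailureThreshold)
+        {
+            _useSecondary = true;   // switch reads to the secondary region (read-only mode)
+        }
+    }
+}
+```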
-- The third approach is to implement a custom monitoring component in your application that continually pings your primary storage endpoint with dummy read requests (such as reading a small blob) to determine its health. This would take up some resources, but not a significant amount. When a problem is discovered that reaches your threshold, you would then perform the switch to **SecondaryOnly** and read-only mode.
+As an alternative approach, you could decide to implement a custom monitoring component in your application. This component could continuously ping your primary storage endpoint with trivial read requests (such as reading a small blob) to determine its health. This approach would take up some resources, but not a significant amount. When a problem is discovered that reaches your threshold, you would switch to **secondary only** read requests and read-only mode. In this scenario, when the ping against the primary storage endpoint succeeds again, you can switch back to the primary region and continue allowing updates.
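+A minimal sketch of such a monitoring component is shown below, assuming a small probe blob exists in the primary account. The container name, blob name, probe interval, and the health callback are illustrative assumptions.
+
+```csharp
+using System;
+using System.Threading;
+using System.Threading.Tasks;
+using Azure;
+using Azure.Storage.Blobs;
+
+// Minimal sketch: periodically probe the primary endpoint with a trivial read and
+// report whether it's healthy, so the application can decide when to switch modes.
+public static class PrimaryEndpointProbe
+{
+    public static async Task RunAsync(
+        BlobServiceClient primaryClient,
+        Action<bool> reportPrimaryHealthy,
+        CancellationToken cancellationToken)
+    {
+        BlobClient probe = primaryClient
+            .GetBlobContainerClient("health")    // hypothetical container
+            .GetBlobClient("probe.txt");         // hypothetical small blob
+
+        while (!cancellationToken.IsCancellationRequested)
+        {
+            try
+            {
+                await probe.DownloadContentAsync(cancellationToken);
+                reportPrimaryHealthy(true);      // primary healthy: switch back and allow updates
+            }
+            catch (RequestFailedException)
+            {
+                reportPrimaryHealthy(false);     // counts toward the switch-over threshold
+            }
+
+            await Task.Delay(TimeSpan.FromSeconds(30), cancellationToken);
+        }
+    }
+}
+```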
-At some point, you will want to switch back to using the primary endpoint and allowing updates. If using one of the first two methods listed above, you could simply switch back to the primary endpoint and enable update mode after an arbitrarily selected amount of time or number of operations has been performed. You can then let it go through the retry logic again. If the problem has been fixed, it will continue to use the primary endpoint and allow updates. If there is still a problem, it will once more switch back to the secondary endpoint and read-only mode after failing the criteria you've set.
+The error threshold used to determine when to make the switch may vary from service to service within your application, so you should consider making each threshold a configurable parameter.
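+For example, a short sketch of reading per-service thresholds from application configuration (the configuration keys and default values shown are illustrative assumptions):
+
+```csharp
+using Microsoft.Extensions.Configuration;
+
+// Minimal sketch: read circuit-breaker thresholds from configuration instead of
+// hard-coding them, so they can be tuned per service without a redeployment.
+IConfiguration configuration = new ConfigurationBuilder()
+    .AddJsonFile("appsettings.json", optional: true)
+    .AddEnvironmentVariables()
+    .Build();
+
+int blobFailureThreshold = configuration.GetValue("CircuitBreaker:Blob:FailureThreshold", 10);
+int tableFailureThreshold = configuration.GetValue("CircuitBreaker:Table:FailureThreshold", 10);
+```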
-For the third scenario, when pinging the primary storage endpoint becomes successful again, you can trigger the switch back to **PrimaryOnly** and continue allowing updates.
+Another consideration is how to handle multiple instances of an application, and what to do when you detect retryable errors in each instance. For example, you may have 20 VMs running with the same application loaded. Do you handle each instance separately? If one instance starts having problems, do you want to limit the response to just that one instance? Or do you want all instances to respond in the same way when one instance has a problem? Handling the instances separately is much simpler than trying to coordinate the response across them, but your approach will depend on your application's architecture.
## Handling eventually consistent data
-Geo-redundant storage works by replicating transactions from the primary to the secondary region. This replication process guarantees that the data in the secondary region is *eventually consistent*. This means that all the transactions in the primary region will eventually appear in the secondary region, but that there may be a lag before they appear, and that there is no guarantee the transactions arrive in the secondary region in the same order as that in which they were originally applied in the primary region. If your transactions arrive in the secondary region out of order, you *may* consider your data in the secondary region to be in an inconsistent state until the service catches up.
+Geo-redundant storage works by replicating transactions from the primary to the secondary region. The replication process guarantees that the data in the secondary region is eventually consistent. This means that all the transactions in the primary region will eventually appear in the secondary region, but that there may be a lag before they appear. There's also no guarantee that transactions will arrive in the secondary region in the same order as they were originally applied in the primary region. If your transactions arrive in the secondary region out of order, you *may* consider your data in the secondary region to be in an inconsistent state until the service catches up.
-The following table shows an example of what might happen when you update the details of an employee to make them a member of the *administrators* role. For the sake of this example, this requires you update the **employee** entity and update an **administrator role** entity with a count of the total number of administrators. Notice how the updates are applied out of order in the secondary region.
+The following example for Azure Table storage shows what might happen when you update the details of an employee to make them a member of the **administrator role**. For the sake of this example, this requires you to update the **employee** entity and update an **administrator role** entity with a count of the total number of administrators. Notice how the updates are applied out of order in the secondary region.
| **Time** | **Transaction** | **Replication** | **Last Sync Time** | **Result** |
|---|---|---|---|---|
The following table shows an example of what might happen when you update the de
| T1 | | Transaction A <br> replicated to<br> secondary | T1 | Transaction A replicated to secondary. <br>Last Sync Time updated. |
| T2 | Transaction B:<br>Update<br> employee entity<br> in primary | | T1 | Transaction B written to primary,<br> not replicated yet. |
| T3 | Transaction C:<br> Update <br>administrator<br>role entity in<br>primary | | T1 | Transaction C written to primary,<br> not replicated yet. |
-| *T4* | | Transaction C <br>replicated to<br> secondary | T1 | Transaction C replicated to secondary.<br>LastSyncTime not updated because <br>transaction B has not been replicated yet.|
+| *T4* | | Transaction C <br>replicated to<br> secondary | T1 | Transaction C replicated to secondary.<br>LastSyncTime not updated because <br>transaction B hasn't been replicated yet.|
| *T5* | Read entities <br>from secondary | | T1 | You get the stale value for employee <br> entity because transaction B hasn't <br> replicated yet. You get the new value for<br> administrator role entity because C has<br> replicated. Last Sync Time still hasn't<br> been updated because transaction B<br> hasn't replicated. You can tell the<br>administrator role entity is inconsistent <br>because the entity date/time is after <br>the Last Sync Time. |
| *T6* | | Transaction B<br> replicated to<br> secondary | T6 | *T6* – All transactions through C have <br>been replicated, Last Sync Time<br> is updated. |
-In this example, assume the client switches to reading from the secondary region at T5. It can successfully read the **administrator role** entity at this time, but the entity contains a value for the count of administrators that is not consistent with the number of **employee** entities that are marked as administrators in the secondary region at this time. Your client could simply display this value, with the risk that it is inconsistent information. Alternatively, the client could attempt to determine that the **administrator role** is in a potentially inconsistent state because the updates have happened out of order, and then inform the user of this fact.
+In this example, assume the client switches to reading from the secondary region at T5. It can successfully read the **administrator role** entity at this time, but the entity contains a value for the count of administrators that isn't consistent with the number of **employee** entities that are marked as administrators in the secondary region at this time. Your client could display this value, with the risk that the information is inconsistent. Alternatively, the client could attempt to determine that the **administrator role** is in a potentially inconsistent state because the updates have happened out of order, and then inform the user of this fact.
-To recognize that it has potentially inconsistent data, the client can use the value of the *Last Sync Time* that you can get at any time by querying a storage service. This tells you the time when the data in the secondary region was last consistent and when the service had applied all the transactions prior to that point in time. In the example shown above, after the service inserts the **employee** entity in the secondary region, the last sync time is set to *T1*. It remains at *T1* until the service updates the **employee** entity in the secondary region when it is set to *T6*. If the client retrieves the last sync time when it reads the entity at *T5*, it can compare it with the timestamp on the entity. If the timestamp on the entity is later than the last sync time, then the entity is in a potentially inconsistent state, and you can take whatever is the appropriate action for your application. Using this field requires that you know when the last update to the primary was completed.
+To determine whether a storage account has potentially inconsistent data, the client can check the value of the **Last Sync Time** property. **Last Sync Time** tells you the time when the data in the secondary region was last consistent and when the service had applied all the transactions prior to that point in time. In the example shown above, after the service inserts the **employee** entity in the secondary region, the last sync time is set to *T1*. It remains at *T1* until the service updates the **employee** entity in the secondary region when it's set to *T6*. If the client retrieves the last sync time when it reads the entity at *T5*, it can compare it with the timestamp on the entity. If the timestamp on the entity is later than the last sync time, then the entity is in a potentially inconsistent state, and you can take the appropriate action. Using this field requires that you know when the last update to the primary was completed.
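+As a rough sketch using the Blob Storage client library, the check could look like the following. It assumes the secondary `BlobServiceClient` created earlier in this article; the container and blob names are illustrative, and the service statistics call is issued against the secondary endpoint.
+
+```csharp
+using System;
+using Azure.Storage.Blobs;
+using Azure.Storage.Blobs.Models;
+
+// Minimal sketch: compare an item's Last-Modified time with the account's Last Sync Time
+// to detect potentially inconsistent data read from the secondary region.
+public static class ConsistencyCheck
+{
+    public static bool IsPotentiallyInconsistent(BlobServiceClient blobServiceClientSecondary)
+    {
+        // The service statistics returned from the secondary endpoint include the
+        // geo-replication status and the Last Sync Time.
+        BlobServiceStatistics stats = blobServiceClientSecondary.GetStatistics().Value;
+        DateTimeOffset? lastSyncTime = stats.GeoReplication?.LastSyncedOn;
+
+        // Timestamp of the item that was just read from the secondary region.
+        BlobClient blob = blobServiceClientSecondary
+            .GetBlobContainerClient("employees")     // hypothetical container
+            .GetBlobClient("employee-42.json");      // hypothetical blob
+        DateTimeOffset lastModified = blob.GetProperties().Value.LastModified;
+
+        // If the item changed after the last successful sync, the copy read from the
+        // secondary region may be stale or out of order relative to related items.
+        return lastSyncTime.HasValue && lastModified > lastSyncTime.Value;
+    }
+}
+```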
To learn how to check the last sync time, see [Check the Last Sync Time property for a storage account](last-sync-time-get.md). ## Testing
-It's important to test that your application behaves as expected when it encounters retryable errors. For example, you need to test that the application switches to the secondary and into read-only mode when it detects a problem, and switches back when the primary region becomes available again. To do this, you need a way to simulate retryable errors and control how often they occur.
+It's important to test that your application behaves as expected when it encounters retryable errors. For example, you need to test that the application switches to the secondary region when it detects a problem, and then switches back when the primary region becomes available again. To properly test this behavior, you need a way to simulate retryable errors and control how often they occur.
-You can use [Fiddler](https://www.telerik.com/fiddler) to intercept and modify HTTP responses in a script. This script can identify responses that come from your primary endpoint and change the HTTP status code to one that the Storage Client Library recognizes as a retryable error. This code snippet shows a simple example of a Fiddler script that intercepts responses to read requests against the **employeedata** table to return a 502 status:
+One option is to use [Fiddler](https://www.telerik.com/fiddler) to intercept and modify HTTP responses in a script. This script can identify responses that come from your primary endpoint and change the HTTP status code to one that the Storage client library recognizes as a retryable error. This code snippet shows a simple example of a Fiddler script that intercepts responses to read requests against the **employeedata** table to return a 502 status:
-# [Java v12 SDK](#tab/current)
-
-We are currently working to create code snippets reflecting version 12.x of the Azure Storage client libraries. For more information, see [Announcing the Azure Storage v12 Client Libraries](https://techcommunity.microsoft.com/t5/azure-storage/announcing-the-azure-storage-v12-client-libraries/ba-p/1482394).
-
-# [Java v11 SDK](#tab/legacy)
-
-```java
+```
static function OnBeforeResponse(oSession: Session) { ...
- if ((oSession.hostname == "\[yourstorageaccount\].table.core.windows.net")
+ if ((oSession.hostname == "\[YOURSTORAGEACCOUNTNAME\].table.core.windows.net")
&& (oSession.PathAndQuery.StartsWith("/employeedata?$filter"))) { oSession.responseCode = 502; }
static function OnBeforeResponse(oSession: Session) {
You could extend this example to intercept a wider range of requests and only change the **responseCode** on some of them to better simulate a real-world scenario. For more information about customizing Fiddler scripts, see [Modifying a Request or Response](https://docs.telerik.com/fiddler/KnowledgeBase/FiddlerScript/ModifyRequestOrResponse) in the Fiddler documentation.
-If you have made the thresholds for switching your application to read-only mode configurable, it will be easier to test the behavior with non-production transaction volumes.
+If you have set up configurable thresholds for switching your application to read-only mode, it will be easier to test the behavior with non-production transaction volumes.
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
description: Learn how to enable identity-based authentication over Server Messa
Previously updated : 08/29/2022 Last updated : 08/31/2022
az storage account update -n <storage-account-name> -g <resource-group-name> --e
By default, Azure AD DS authentication uses Kerberos RC4 encryption. We recommend configuring it to use Kerberos AES-256 encryption instead by following these instructions.
-The action requires running an operation on the Active Directory domain that's managed by Azure AD DS to reach a domain controller to request a property change to the domain object. The cmdlets below are Windows Server Active Directory PowerShell cmdlets, not Azure PowerShell cmdlets. Because of this, these PowerShell commands must be run from a machine that's domain-joined to the Azure AD DS domain.
+The action requires running an operation on the Active Directory domain that's managed by Azure AD DS to reach a domain controller to request a property change to the domain object. The cmdlets below are Windows Server Active Directory PowerShell cmdlets, not Azure PowerShell cmdlets. Because of this, these PowerShell commands must be run from a machine that's domain-joined to the Azure AD DS domain.
> [!IMPORTANT]
-> Azure Cloud Shell won't work in this scenario.
+> The Windows Server Active Directory PowerShell cmdlets in this section must be run in Windows PowerShell 5.1. PowerShell 7.x and Azure Cloud Shell won't work in this scenario.
As an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions), execute the following PowerShell commands.
storage Storage Files Migration Nas Cloud Databox https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-cloud-databox.md
Consult your migration plan for the number of storage accounts you have decided
### DataBox options
-For a standard migration, one or a combination of these three DataBox options should be chosen:
+For a standard migration, one or a combination of these two DataBox options should be chosen:
-* DataBox Disks
- Microsoft will send you one and up to five SSD disks with a capacity of 8 TiB each, for a maximum total of 40 TiB. The usable capacity is about 20% less, due to encryption and file system overhead. For more information, see [DataBox Disks documentation](../../databox/data-box-disk-overview.md).
* DataBox This is the most common option. A ruggedized DataBox appliance, that works similar to a NAS, will be shipped to you. It has a usable capacity of 80 TiB. For more information, see [DataBox documentation](../../databox/data-box-overview.md). * DataBox Heavy This option features a ruggedized DataBox appliance on wheels, that works similar to a NAS, with a capacity of 1 PiB. The usable capacity is about 20% less, due to encryption and file system overhead. For more information, see [DataBox Heavy documentation](../../databox/data-box-heavy-overview.md).
+> [!WARNING]
+> Data Box Disks is not recommended for migrations into Azure file shares. Data Box Disks does not preserve file metadata, such as access permissions (ACLs) and other attributes.
+ ## Phase 4: Provision a temporary Windows Server While you wait for your Azure DataBox(es) to arrive, you can already deploy one or more Windows Servers you will need for running RoboCopy jobs.
synapse-analytics Synapse Machine Learning Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/synapse-machine-learning-library.md
+
+ Title: SynapseML and its use in Azure Synapse Analytics.
+description: Learn about the SynapseML library and how it simplifies the creation of massively scalable machine learning (ML) pipelines in Azure Synapse Analytics.
++++ Last updated : 08/31/2022+++
+# What is SynapseML?
+
+SynapseML (previously known as MMLSpark) is an open-source library that simplifies the creation of massively scalable machine learning (ML) pipelines. SynapseML provides simple, composable, and distributed APIs for a wide variety of different machine learning tasks such as text analytics, vision, anomaly detection, and many others. SynapseML is built on the [Apache Spark distributed computing framework](https://spark.apache.org/) and shares the same API as the [SparkML/MLLib library](https://spark.apache.org/mllib/), allowing you to seamlessly embed SynapseML models into existing Apache Spark workflows.
+
+With SynapseML, you can build scalable and intelligent systems to solve challenges in domains such as anomaly detection, computer vision, deep learning, text analytics, and others. SynapseML can train and evaluate models on single-node, multi-node, and elastically resizable clusters of computers. This lets you scale your work without wasting resources. SynapseML is usable across Python, R, Scala, Java, and .NET. Furthermore, its API abstracts over a wide variety of databases, file systems, and cloud data stores to simplify experiments no matter where data is located.
+
+SynapseML requires Scala 2.12, Spark 3.0+, and Python 3.6+.
+
+## Key features of SynapseML
+
+### A unified API for creating, training, and scoring models
+
+SynapseML offers a unified API that simplifies developing fault-tolerant distributed programs. In particular, SynapseML exposes many different machine learning frameworks under a single API that is scalable, data and language agnostic, and works for batch, streaming, and serving applications.
+
+A unified API standardizes many tools, frameworks, and algorithms, and streamlines the distributed machine learning experience. It enables developers to quickly compose disparate machine learning frameworks, keeps code clean, and enables workflows that require more than one framework. For example, workflows such as web-supervised learning or search-engine creation require multiple services and frameworks. SynapseML shields users from this extra complexity.
++
+### Use pre-built intelligent models
+
+Many tools in SynapseML don't require a large labeled training dataset. Instead, SynapseML provides simple APIs for pre-built intelligent services, such as Azure Cognitive Services, to quickly solve large-scale AI challenges related to both business and research. SynapseML enables developers to embed over 50 different state-of-the-art ML services directly into their systems and databases. These ready-to-use algorithms can parse a wide variety of documents, transcribe multi-speaker conversations in real time, and translate text to over 100 different languages. For more examples of how to use pre-built AI to solve tasks quickly, see [the SynapseML cognitive service examples](https://microsoft.github.io/SynapseML/docs/features/cognitive_services/CognitiveServices%20-%20Overview/).
+
+To make SynapseML's integration with Azure Cognitive Services fast and efficient, SynapseML introduces many optimizations for service-oriented workflows. In particular, SynapseML automatically parses common throttling responses to ensure that jobs don't overwhelm backend services. Additionally, it uses exponential back-offs to handle unreliable network connections and failed responses. Finally, Spark's worker machines stay busy with new asynchronous parallelism primitives for Spark. Asynchronous parallelism allows worker machines to send requests while waiting on a response from the server and can yield a tenfold increase in throughput.
+
+### Broad ecosystem compatibility with ONNX
+
+SynapseML enables developers to use models from many different ML ecosystems through the Open Neural Network Exchange (ONNX) framework. With this integration, you can execute a wide variety of classical and deep learning models at scale with only a few lines of code. SynapseML automatically handles distributing ONNX models to worker nodes, batching and buffering input data for high throughput, and scheduling work on hardware accelerators.
+
+Bringing ONNX to Spark not only helps developers scale deep learning models, it also enables distributed inference across a wide variety of ML ecosystems. In particular, ONNXMLTools converts models from TensorFlow, scikit-learn, Core ML, LightGBM, XGBoost, H2O, and PyTorch to ONNX for accelerated and distributed inference using SynapseML.
+
+### Build responsible AI systems
+
+After building a model, it's imperative that researchers and engineers understand its limitations and behavior before deployment. SynapseML helps developers and researchers build responsible AI systems by introducing new tools that reveal why models make certain predictions and how to improve the training dataset to eliminate biases. SynapseML dramatically speeds the process of understanding a user's trained model by enabling developers to distribute computation across hundreds of machines. More specifically, SynapseML includes distributed implementations of Shapley Additive Explanations (SHAP) and Locally Interpretable Model-Agnostic Explanations (LIME) to explain the predictions of vision, text, and tabular models. It also includes tools such as Individual Conditional Expectation (ICE) and partial dependence analysis to recognize biased datasets.
+
+## Enterprise support on Azure Synapse Analytics
+
+SynapseML is generally available on Azure Synapse Analytics with enterprise support. You can build large-scale machine learning pipelines using Azure Cognitive Services, LightGBM, ONNX, and other [selected SynapseML features](https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/streamline-collaboration-and-insights-with-simplified-machine/ba-p/2924707). It even includes templates to quickly prototype distributed machine learning systems, such as visual search engines, predictive maintenance pipelines, document translation, and more.
+
+## Next steps
+
+* To learn more about SynapseML, see the [blog post.](https://www.microsoft.com/en-us/research/blog/synapseml-a-simple-multilingual-and-massively-parallel-machine-learning-library/)
+
+* [Install SynapseML and get started with examples.](https://microsoft.github.io/SynapseML/docs/getting_started/installation/)
+
+* [SynapseML GitHub repository.](https://github.com/microsoft/SynapseML)
synapse-analytics What Is Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/machine-learning/what-is-machine-learning.md
Previously updated : 10/01/2021 Last updated : 08/31/2022
Models that have been trained either in Azure Synapse or outside Azure Synapse c
* Another option for batch scoring machine learning models in Azure Synapse is to leverage the Apache Spark Pools for Azure Synapse. Depending on the libraries used to train the models, you can use a code experience to run your batch scoring.
+## SynapseML
+
+SynapseML (previously known as MMLSpark) is an open-source library that simplifies the creation of massively scalable machine learning (ML) pipelines. It is an ecosystem of tools used to expand the Apache Spark framework in several new directions. SynapseML unifies several existing machine learning frameworks and new Microsoft algorithms into a single, scalable API that's usable across Python, R, Scala, .NET, and Java. To learn more, see the [key features of SynapseML](synapse-machine-learning-library.md).
+ ## Next steps * [Get started with Azure Synapse Analytics](../get-started.md)
synapse-analytics Overview Terminology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/overview-terminology.md
Previously updated : 01/13/2022 Last updated : 08/19/2022
This document guides you through the basic concepts of Azure Synapse Analytics.
-## Basics
+## Synapse workspace
A **Synapse workspace** is a securable collaboration boundary for doing cloud-based enterprise analytics in Azure. A workspace is deployed in a specific region and has an associated ADLS Gen2 account and file system (for storing temporary data). A workspace is under a resource group.
There are two ways within Synapse to use Spark:
* **Spark Notebooks** for doing data Data Science and Engineering use Scala, PySpark, C#, and SparkSQL * **Spark job definitions** for running batch Spark jobs using jar files.
+## SynapseML
+
+SynapseML (previously known as MMLSpark) is an open-source library that simplifies the creation of massively scalable machine learning (ML) pipelines. It is an ecosystem of tools used to expand the Apache Spark framework in several new directions. SynapseML unifies several existing machine learning frameworks and new Microsoft algorithms into a single, scalable API that's usable across Python, R, Scala, .NET, and Java. To learn more, see the [key features of SynapseML](machine-learning/synapse-machine-learning-library.md).
+ ## Pipelines Pipelines are how Azure Synapse provides Data Integration - allowing you to move data between services and orchestrate activities.
synapse-analytics Resources Self Help Sql On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/resources-self-help-sql-on-demand.md
The error "Invalid object name 'table name'" indicates that you're using an obje
- If you don't see the object, maybe you're trying to query a table from a lake or Spark database. The table might not be available in the serverless SQL pool because: - The table has some column types that can't be represented in serverless SQL pool.
- - The table has a format that isn't supported in serverless SQL pool. Examples are Delta or ORC.
+ - The table has a format that isn't supported in serverless SQL pool. Examples include Avro and ORC.
### Unclosed quotation mark after the character string
There are some limitations and known issues that you might see in Delta Lake sup
- Make sure that you're referencing the root Delta Lake folder in the [OPENROWSET](./develop-openrowset.md) function or external table location. - The root folder must have a subfolder named `_delta_log`. The query fails if there's no `_delta_log` folder. If you don't see that folder, you're referencing plain Parquet files that must be [converted to Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#convert-parquet-to-delta) by using Apache Spark pools. - Don't specify wildcards to describe the partition schema. The Delta Lake query automatically identifies the Delta Lake partitions.-- Delta Lake tables created in the Apache Spark pools aren't automatically available in serverless SQL pool. To query such Delta Lake tables by using the T-SQL language, run the [CREATE EXTERNAL TABLE](./create-use-external-tables.md#delta-lake-external-table) statement and specify Delta as the format.
+- Delta Lake tables created in the Apache Spark pools are automatically available in serverless SQL pool, but the schema is not updated. If you add columns to the Delta table using a Spark pool, the changes won't be shown in the serverless database.
- External tables don't support partitioning. Use [partitioned views](create-use-views.md#delta-lake-partitioned-views) on the Delta Lake folder to use the partition elimination. See known issues and workarounds later in the article. - Serverless SQL pools don't support time travel queries. Use Apache Spark pools in Synapse Analytics to [read historical data](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#read-older-versions-of-data-using-time-travel). - Serverless SQL pools don't support updating Delta Lake files. You can use serverless SQL pool to query the latest version of Delta Lake. Use Apache Spark pools in Synapse Analytics to [update Delta Lake](../spark/apache-spark-delta-lake-overview.md?pivots=programming-language-python#update-table-data).
There are two options available to circumvent this error:
Our engineering team is currently working on a full support for Spark 3.3.
+### Delta tables in Lake databases do not have identical schema in Spark and serverless pools
+
+Serverless SQL pools enable you to access Parquet, CSV, and Delta tables that are created in a Lake database using Spark or Synapse designer. Accessing the Delta tables is still in public preview, and currently serverless SQL pool will synchronize a Delta table with Spark at the time of creation but won't update the schema if columns are added later using the `ALTER TABLE` statement in Spark.
+
+This is a public preview limitation. To resolve this issue, drop and re-create the Delta table in Spark (if possible) instead of altering tables.
+ ## Performance Serverless SQL pool assigns the resources to the queries based on the size of the dataset and query complexity. You can't change or limit the resources that are provided to the queries. There are some cases where you might experience unexpected query performance degradations and you might have to identify the root causes.
virtual-desktop Set Up Scaling Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/set-up-scaling-script.md
First, you'll need an Azure Automation account to run the PowerShell runbook. Th
1. If you haven't created an automation account before, the cmdlet's output will include an encrypted webhook URI in the automation account variable. Make sure to keep a record of the URI because you'll use it as a parameter when you set up the execution schedule for the Azure Logic App. If you're updating an existing automation account, you can retrieve the webhook URI using [PowerShell to access variables](../automation/shared-resources/variables.md#powershell-cmdlets-to-access-variables).
-1. If you specified the parameter **WorkspaceName** for Log Analytics, the cmdlet's output will also include the Log Analytics Workspace ID and its Primary Key. Make sure to remember the URI because you'll need to use it again later as a parameter when you set up the execution schedule for the Azure Logic App.
+1. If you specified the parameter **WorkspaceName** for Log Analytics, the cmdlet's output will also include the Log Analytics Workspace ID and its Primary Key. Make a note of the Workspace ID and Primary Key because you'll need to use them again later as parameters when you set up the execution schedule for the Azure Logic App.
1. After you've set up your Azure Automation account, sign in to your Azure subscription and check to make sure your Azure Automation account and the relevant runbook have appeared in your specified resource group, as shown in the following image:
virtual-machine-scale-sets Use Spot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/use-spot.md
Title: Create a scale set that uses Azure Spot Virtual Machines description: Learn how to create Azure virtual machine scale sets that use Azure Spot Virtual Machines to save on costs.--++
Try & restore benefits:
- Attempts to restore Azure Spot Virtual Machines evicted due to capacity. - Restored Azure Spot Virtual Machines are expected to run for a longer duration with a lower probability of a capacity triggered eviction. - Improves the lifespan of an Azure Spot Virtual Machine, so workloads run for a longer duration.-- Helps Virtual Machine Scale Sets to maintain the target count for Azure Spot Virtual Machines, similar to maintain target count feature that already exist for Pay-As-You-Go VMs.
+- Helps Virtual Machine Scale Sets to maintain the target count for Azure Spot Virtual Machines, similar to the maintain target count feature that already exists for Pay-As-You-Go VMs.
Try & restore is disabled in scale sets that use [Autoscale](virtual-machine-scale-sets-autoscale-overview.md). The number of VMs in the scale set is driven by the autoscale rules.
To deploy Azure Spot Virtual Machines on scale sets, you can set the new *Priori
## Portal
-The process to create a scale set that uses Azure Spot Virtual Machines is the same as detailed in the [getting started article](quick-create-portal.md). When you are deploying a scale set, you can choose to set the Spot flag, eviction type, eviction policy and if you want to enable try to restore instances:
+The process to create a scale set that uses Azure Spot Virtual Machines is the same as detailed in the [getting started article](quick-create-portal.md). When you are deploying a scale set, you can choose to set the Spot flag, eviction type, eviction policy, and whether you want to try to restore instances:
![Create a scale set with Azure Spot Virtual Machines](media/virtual-machine-scale-sets-use-spot/vmss-spot-portal-1.png)
virtual-machine-scale-sets Virtual Machine Scale Sets Scale In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scale-in-policy.md
The scale-in policy feature provides users a way to configure the order in which
2. NewestVM 3. OldestVM
+> [!IMPORTANT]
+> Flexible orchestration for virtual machine scale sets does not currently support scale-in policy.
+ ### Default scale-in policy By default, virtual machine scale set applies this policy to determine which instance(s) will be scaled in. With the *Default* policy, VMs are selected for scale-in in the following order:
virtual-machines Error Codes Spot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/error-codes-spot.md
Title: Error codes for Azure Spot Virtual Machines and scale sets instances description: Learn about error codes that you could possibly see when using Azure Spot Virtual Machines and scale set instances.-+ Last updated 03/25/2020-+ #pmcontact: jagaveer
virtual-machines Spot Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/spot-cli.md
Title: Use CLI to deploy Azure Spot Virtual Machines description: Learn how to use the CLI to deploy Azure Spot Virtual Machines to save costs.-+ Last updated 03/22/2021--++ # Deploy Azure Spot Virtual Machines using the Azure CLI
virtual-machines Spot Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/spot-template.md
Title: Use a template to deploy Azure Azure Spot Virtual Machines
+ Title: Use a template to deploy Azure Spot Virtual Machines
description: Learn how to use a template to deploy Azure Spot Virtual Machines to save costs.-+ Last updated 03/25/2020--++ # Deploy Azure Spot Virtual Machines using a Resource Manager template
Using [Azure Spot Virtual Machines](../spot-vms.md) allows you to take advantage
Pricing for Azure Spot Virtual Machines is variable, based on region and SKU. For more information, see VM pricing for [Linux](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) and [Windows](https://azure.microsoft.com/pricing/details/virtual-machines/windows/).
-You have option to set a max price you are willing to pay, per hour, for the VM. The max price for a Azure Spot Virtual Machine can be set in US dollars (USD), using up to 5 decimal places. For example, the value `0.98765`would be a max price of $0.98765 USD per hour. If you set the max price to be `-1`, the VM won't be evicted based on price. The price for the VM will be the current price for Azure Spot Virtual Machines or the price for a standard VM, which ever is less, as long as there is capacity and quota available. For more information about setting the max price, see [Azure Spot Virtual Machines - Pricing](../spot-vms.md#pricing).
+You have the option to set a max price you are willing to pay, per hour, for the VM. The max price for an Azure Spot Virtual Machine can be set in US dollars (USD), using up to 5 decimal places. For example, the value `0.98765` would be a max price of $0.98765 USD per hour. If you set the max price to be `-1`, the VM won't be evicted based on price. The price for the VM will be the current price for Azure Spot Virtual Machines or the price for a standard VM, whichever is less, as long as there is capacity and quota available. For more information about setting the max price, see [Azure Spot Virtual Machines - Pricing](../spot-vms.md#pricing).
## Use a template
For Azure Spot Virtual Machine template deployments, use`"apiVersion": "2019-03-
} ```
-Here is a sample template with the added properties for a Azure Spot Virtual Machine. Replace the resource names with your own and `<password>` with a password for the local administrator account on the VM.
+Here is a sample template with the added properties for an Azure Spot Virtual Machine. Replace the resource names with your own and `<password>` with a password for the local administrator account on the VM.
```json {
Here is a sample template with the added properties for a Azure Spot Virtual Mac
## Simulate an eviction
-You can [simulate an eviction](/rest/api/compute/virtualmachines/simulateeviction) of a Azure Spot Virtual Machine, to testing how well your application will repond to a sudden eviction.
+You can [simulate an eviction](/rest/api/compute/virtualmachines/simulateeviction) of an Azure Spot Virtual Machine to test how well your application will respond to a sudden eviction.
Replace the following with your information:
POST https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/
## Next steps
-You can also create a Azure Spot Virtual Machine using [Azure PowerShell](../windows/spot-powershell.md) or the [Azure CLI](spot-cli.md).
+You can also create an Azure Spot Virtual Machine using [Azure PowerShell](../windows/spot-powershell.md) or the [Azure CLI](spot-cli.md).
Query current pricing information using the [Azure retail prices API](/rest/api/cost-management/retail-prices/azure-retail-prices) for information about Azure Spot Virtual Machine pricing. The `meterName` and `skuName` will both contain `Spot`.
virtual-machines Spot Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/spot-portal.md
Title: Use the portal to deploy Azure Spot Virtual Machines description: How to use the Portal to deploy Spot Virtual Machines -+ Last updated 09/14/2020--++ # Deploy Azure Spot Virtual Machines using the Azure portal
Using [Azure Spot Virtual Machines](spot-vms.md) allows you to take advantage of
Pricing for Azure Spot Virtual Machines is variable, based on region and SKU. For more information, see VM pricing for [Linux](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) and [Windows](https://azure.microsoft.com/pricing/details/virtual-machines/windows/). For more information about setting the max price, see [Azure Spot Virtual Machines - Pricing](spot-vms.md#pricing).
-You have option to set a max price you are willing to pay, per hour, for the VM. The max price for a Azure Spot Virtual Machine can be set in US dollars (USD), using up to 5 decimal places. For example, the value `0.05701`would be a max price of $0.05701 USD per hour. If you set the max price to be `-1`, the VM won't be evicted based on price. The price for the VM will be the current price for spot or the price for a standard VM, which ever is less, as long as there is capacity and quota available.
+You have the option to set a max price you are willing to pay, per hour, for the VM. The max price for an Azure Spot Virtual Machine can be set in US dollars (USD), using up to 5 decimal places. For example, the value `0.05701` would be a max price of $0.05701 USD per hour. If you set the max price to be `-1`, the VM won't be evicted based on price. The price for the VM will be the current price for spot or the price for a standard VM, whichever is less, as long as there is capacity and quota available.
When the VM is evicted, you have the option to either delete the VM and the underlying disk or deallocate the VM so that it can be restarted later.
You can change the region by selecting the choice that works the best for you an
## Simulate an eviction
-You can [simulate an eviction](/rest/api/compute/virtualmachines/simulateeviction) of a Azure Spot Virtual Machine, to testing how well your application will respond to a sudden eviction.
+You can [simulate an eviction](/rest/api/compute/virtualmachines/simulateeviction) of an Azure Spot Virtual Machine to test how well your application will respond to a sudden eviction.
Replace the following with your information:
virtual-machines Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/spot-vms.md
Title: Use Azure Spot Virtual Machines description: Learn how to use Azure Spot Virtual Machines to save on costs.--++
The *Deallocate* policy moves your VM to the stopped-deallocated state, allowing
If you would like your VM to be deleted when it is evicted, you can set the eviction policy to *delete*. The evicted VMs are deleted together with their underlying disks, so you will not continue to be charged for the storage.
-You can opt-in to receive in-VM notifications through [Azure Scheduled Events](./linux/scheduled-events.md). This will notify you if your VMs are being evicted and you will have 30 seconds to finish any jobs and perform shutdown tasks prior to the eviction.
+You can opt in to receive in-VM notifications through [Azure Scheduled Events](./linux/scheduled-events.md). This will notify you if your VMs are being evicted and you will have 30 seconds to finish any jobs and perform shutdown tasks prior to the eviction.
| Option | Outcome |
With variable pricing, you have option to set a max price, in US dollars (USD),
## Pricing and eviction history
+### Portal
+ You can see historical pricing and eviction rates per size in a region in the portal. Select **View pricing history and compare prices in nearby regions** to see a table or graph of pricing for a specific size. The pricing and eviction rates in the following images are only examples. **Chart**:
You can see historical pricing and eviction rates per size in a region in the po
:::image type="content" source="./media/spot-table.png" alt-text="Screenshot of the region options with the difference in pricing and eviction rates as a table.":::
+### Azure Resource Graph
+
+You can programmatically access relevant Spot VM SKU data through [Azure Resource Graph](/azure/governance/resource-graph/overview). Get pricing history for the last 90 days and eviction rates for the trailing 28 days to identify SKUs that better meet your specific needs.
+
+Key benefits:
+- Query Spot eviction rates and the last few months of Spot prices programmatically through ARM or the [ARG Explorer in Azure portal](/azure/governance/resource-graph/first-query-portal)
+- Create a custom query to extract the specific data relevant to your scenario with the ability to filter across a variety of parameters, such as SKU and region
+- Easily compare data across multiple regions and SKUs
+- Find a different Spot SKU or region with a lower price and/or eviction rate
+
+Try out the following sample queries for Spot pricing history and eviction rates using the [ARG Explorer in Azure portal](/azure/governance/resource-graph/first-query-portal). Spot pricing history and eviction rates data are available in the `SpotResources` table.
+
+**Spot pricing history sample query**:
+
+```kusto
+SpotResources
+| where type =~ 'microsoft.compute/skuspotpricehistory/ostype/location'
+| where sku.name in~ ('standard_d2s_v4', 'standard_d4s_v4')
+| where properties.osType =~ 'linux'
+| where location in~ ('eastus', 'southcentralus')
+| project skuName = tostring(sku.name), osType = tostring(properties.osType), location, latestSpotPriceUSD = todouble(properties.spotPrices[0].priceUSD)
+| order by latestSpotPriceUSD asc
+```
+
+**Spot eviction rates sample query**:
+
+```kusto
+SpotResources
+| where type =~ 'microsoft.compute/skuspotevictionrate/location'
+| where sku.name in~ ('standard_d2s_v4', 'standard_d4s_v4')
+| where location in~ ('eastus', 'southcentralus')
+| project skuName = tostring(sku.name), location, spotEvictionRate = tostring(properties.evictionRate)
+| order by skuName asc, location asc
+```
+Alternatively, try out the [ARG REST API](/rest/api/azure-resourcegraph/) to get the pricing history and eviction rate history data.
## Frequently asked questions
virtual-machines Virtual Machine Scale Sets Maintenance Control Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-cli.md
Last updated 06/01/2021
ms.devlang: azurecli
-#pmcontact: shants
+#pmcontact: PPHILLIPS
# Maintenance control for OS image upgrades on Azure virtual machine scale sets using Azure CLI
virtual-machines Virtual Machine Scale Sets Maintenance Control Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-portal.md
Last updated 06/01/2021
-#pmcontact: shants
+#pmcontact: PPHILLIPS
# Maintenance control for OS image upgrades on Azure virtual machine scale sets using Azure portal
virtual-machines Virtual Machine Scale Sets Maintenance Control Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-powershell.md
Last updated 09/11/2020
-#pmcontact: shants
+#pmcontact: PPHILLIPS
# Maintenance control for OS image upgrades on Azure virtual machine scale sets using PowerShell
virtual-machines Virtual Machine Scale Sets Maintenance Control Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control-template.md
+
+ Title: Maintenance control for OS image upgrades on Azure virtual machine scale sets using an Azure Resource Manager template
+description: Learn how to control when automatic OS image upgrades are rolled out to your Azure virtual machine scale sets using Maintenance control and an Azure Resource Manager (ARM) template.
++++ Last updated : 08/31/2022++
+#pmcontact: PPHILLIPS
++
+# Maintenance control for OS image upgrades on Azure virtual machine scale sets using an ARM template
+
+Maintenance control lets you decide when to apply automatic OS image upgrades to your virtual machine scale sets. For more information on using Maintenance control, see [Maintenance control for Azure virtual machine scale sets](virtual-machine-scale-sets-maintenance-control.md).
+
+This article explains how you can use an Azure Resource Manager (ARM) template to create a maintenance configuration. You will learn how to:
+
+- Create the configuration
+- Assign the configuration to a virtual machine
++
+## Create the configuration
+
+While creating the configuration, it's important to note that there are different maintenance scopes, and each scope has unique properties in its creation template. Make sure you are using the right one.
+
+For more information about this Maintenance Configuration template, see [maintenanceConfigurations](/azure/templates/microsoft.maintenance/maintenanceconfigurations?tabs=json#template-format).
+
+### Host and OS image
+
+```json
+{
+ "type": "Microsoft.Maintenance/maintenanceConfigurations",
+ "apiVersion": "2021-09-01-preview",
+ "name": "string",
+ "location": "string",
+ "tags": {
+ "tagName1": "tagValue1",
+ "tagName2": "tagValue2"
+ },
+ "properties": {
+ "extensionProperties": {},
+ "installPatches": {
+ "linuxParameters": {
+ "classificationsToInclude": [ "string" ],
+ "packageNameMasksToExclude": [ "string" ],
+ "packageNameMasksToInclude": [ "string" ]
+ },
+ "rebootSetting": "string",
+ "tasks": {
+ "postTasks": [
+ {
+ "parameters": {},
+ "source": "string",
+ "taskScope": "string"
+ }
+ ],
+ "preTasks": [
+ {
+ "parameters": {},
+ "source": "string",
+ "taskScope": "string"
+ }
+ ]
+ },
+ "windowsParameters": {
+ "classificationsToInclude": [ "string" ],
+ "excludeKbsRequiringReboot": "bool",
+ "kbNumbersToExclude": [ "string" ],
+ "kbNumbersToInclude": [ "string" ]
+ }
+ },
+ "maintenanceScope": "string",
+ "maintenanceWindow": {
+ "duration": "string",
+ "expirationDateTime": "string",
+ "recurEvery": "string",
+ "startDateTime": "string",
+ "timeZone": "string"
+ },
+ "namespace": "string",
+ "visibility": "string"
+ }
+}
+```
+
+## Assign the configuration
+
+Assign the configuration to a virtual machine.
+
+For more information, see [configurationAssignments](/azure/templates/microsoft.maintenance/configurationassignments?tabs=json#property-values).
+
+```json
+{
+ "type": "Microsoft.Maintenance/configurationAssignments",
+ "apiVersion": "2021-09-01-preview",
+ "name": "string",
+ "location": "string",
+ "properties": {
+ "maintenanceConfigurationId": "string",
+ "resourceId": "string"
+ }
+}
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about maintenance and updates for virtual machines running in Azure](maintenance-and-updates.md)
virtual-machines Virtual Machine Scale Sets Maintenance Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machine-scale-sets-maintenance-control.md
Last updated 09/11/2020
-#pmcontact: shants
+#pmcontact: PPHILLIPS
# Maintenance control for Azure virtual machine scale sets
virtual-machines Spot Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/spot-powershell.md
Title: Use PowerShell to deploy Azure Spot Virtual Machines description: Learn how to use Azure PowerShell to deploy Azure Spot Virtual Machines to save on costs.-+ Last updated 03/22/2021--++
virtual-machines Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
| Appliance Templates | Link | | -- | : |
-| **SAP Focused Run 3.0 FP03 (configured)** July 28 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=517c6359-6b26-458d-b816-ca25c3e5af7d&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|SAP Focused Run is designed specifically for businesses that need high-volume system and application monitoring, alerting, and analytics. It's a powerful solution for service providers, who want to host all their customers in one central, scalable, safe, and automated environment. It also addresses customers with advanced needs regarding system management, user monitoring, integration monitoring, and configuration and security analytics. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/517c6359-6b26-458d-b816-ca25c3e5af7d) |
+| **SAP S/4HANA 2021 FPS01, Fully-Activated Appliance** April 26 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+|This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) ||
+| **SAP S/4HANA 2021 FPS02, Fully-Activated Appliance** July 19 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+|This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) |
| **System Conversion for SAP S/4HANA – SAP S/4HANA 2021 FPS01 after technical conversion** July 27 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=93895065-7267-4d51-945b-9300836f6a80&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
|Third solution after performing a technical system conversion from SAP ERP to SAP S/4HANA before additional configuration. It has been tested and prepared as converted from SAP EHP7 for SAP ERP 6.0 to SAP S/4HANA 2020 FPS01. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/93895065-7267-4d51-945b-9300836f6a80) |
| **SAP Focused Run 3.0 FP03, unconfigured** July 21 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=4c38b6ff-d598-4dbc-8f39-fdcf96ae0beb&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
|SAP Focused Run is designed specifically for businesses that need high-volume system and application monitoring, alerting, and analytics. It's a powerful solution for service providers, who want to host all their customers in one central, scalable, safe, and automated environment. It also addresses customers with advanced needs regarding system management, user monitoring, integration monitoring, and configuration and security analytics. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/4c38b6ff-d598-4dbc-8f39-fdcf96ae0beb) |
-| **SAP S/4HANA 2021 FPS02, Fully-Activated Appliance** July 19 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=3f4931de-b15b-47f1-b93d-a4267296b8bc&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-|This appliance contains SAP S/4HANA 2021 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Details]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3f4931de-b15b-47f1-b93d-a4267296b8bc) |
- | **System Conversion for SAP S/4HANA – Source system SAP ERP6.0 before running SUM** July 05 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=b28b67f3-ebab-4b03-bee9-1cd57ddb41b6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
+| **System Conversion for SAP S/4HANA – Source system SAP ERP6.0 before running SUM** July 05 2022 | [Create Appliance](https://cal.sap.com/registration?sguid=b28b67f3-ebab-4b03-bee9-1cd57ddb41b6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
|Second solution for performing a system conversion from SAP ERP to SAP S/4HANA after preparation steps before running Software Update Manager. It has been tested and prepared to be converted from SAP EHP7 for SAP ERP 6.0 to SAP S/4HANA 2021 FPS01. | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/b28b67f3-ebab-4b03-bee9-1cd57ddb41b6) |
| **SAP NetWeaver 7.5 SP15 on SAP ASE** January 20 2020 | [Create Appliance](https://cal.sap.com/registration?sguid=69efd5d1-04de-42d8-a279-813b7a54c1f6&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
|SAP NetWeaver 7.5 SP15 on SAP ASE | [Details](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/solutions/69efd5d1-04de-42d8-a279-813b7a54c1f6) |
The following links highlight the Product stacks that you can quickly deploy on
| -- | :-- |
| **SAP S/4HANA 2021 FPS01 for Productive Deployments** | [Deploy System](https://cal.sap.com/catalog#/products) |
|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. |
-| **SAP S/4HANA 2021 FPS00 for Productive Deployments, Initial Shipment Stack** | [Deploy System](https://cal.sap.com/catalog#/products) |
+| **SAP S/4HANA 2021 FPS00 for Productive Deployments** | [Deploy System](https://cal.sap.com/catalog#/products) |
|This solution comes as a standard S/4HANA system installation including High Availability capabilities to ensure higher system uptime for productive usage. The system parameters can be customized during initial provisioning according to the requirements for the target system. |
-_Within a few hours, a healthy SAP S/4 appliance is deployed in Azure._
+_Within a few hours, a healthy SAP S/4HANA appliance or product is deployed in Azure._
If you bought an SAP CAL subscription, SAP fully supports deployments through SAP CAL on Azure. The support queue is BC-VCM-CAL.
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
For more information, see [deployment against membership types](concept-deployme
For more information, see [remove components checklist](concept-remove-components-checklist.md).
+### Does Azure Virtual Network Manager store customer data?
+No. Azure Virtual Network Manager doesn't store any customer data.
+
### How can I see what configurations are applied to help me troubleshoot?
You can view Azure Virtual Network Manager settings under **Network Manager** for a virtual network. You can see both the connectivity and security admin configurations that are applied. For more information, see [view applied configuration](how-to-view-applied-configurations.md).
Azure SQL Managed Instance has some network requirements. These are enforced thr
* Azure Virtual Network Manager policies don't support the standard policy compliance evaluation cycle. For more information, see [Evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).

## Next steps
-Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal.
+Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal.