Updates from: 02/05/2022 02:08:55
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/access-tokens.md
grant_type=authorization_code
&client_secret=2hMG2-_:y12n10vwH...
```
-If you're testing this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview.md) or [Postman](https://www.postman.com/).
+If you're testing this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview) or [Postman](https://www.postman.com/).
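As one hedged sketch of such a test, a short Node.js script can assemble the same form-encoded POST body; every identifier below (tenant, policy, client ID, secret, code) is a placeholder, not a value from this article:

```javascript
// Build the application/x-www-form-urlencoded body for the token request.
// All values here are hypothetical placeholders.
const tokenBody = new URLSearchParams({
  grant_type: "authorization_code",
  client_id: "<your-client-id>",
  scope: "openid offline_access",
  code: "<authorization-code-from-the-redirect>",
  redirect_uri: "https://localhost/redirect",
  client_secret: "<your-client-secret>",
}).toString();

console.log(tokenBody);

// To actually send it (fetch is built into Node.js 18+):
// fetch("https://<tenant>.b2clogin.com/<tenant>.onmicrosoft.com/<policy>/oauth2/v2.0/token", {
//   method: "POST",
//   headers: { "Content-Type": "application/x-www-form-urlencoded" },
//   body: tokenBody,
// }).then((r) => r.json()).then(console.log);
```

`URLSearchParams` handles the percent-encoding of special characters in the secret and code for you.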
A successful token response looks like this:
active-directory-b2c Add Password Change Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/add-password-change-policy.md
zone_pivot_groups: b2c-policy-type
# Set up password change by using custom policies in Azure Active Directory B2C
+
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] You can configure Azure Active Directory B2C (Azure AD B2C) so that a user who is signed in with a local account can change their password without using email verification to prove their identity.
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/authorization-code-flow.md
grant_type=authorization_code&client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6&sco
| redirect_uri | Required | The redirect URI of the application where you received the authorization code. |
| code_verifier | Recommended | The same code_verifier that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
-If you're testing this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview.md) or [Postman](https://www.postman.com/).
+If you're testing this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview) or [Postman](https://www.postman.com/).
A successful token response looks like this:
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/custom-domain.md
Azure Front Door passes the user's original IP address. It's the IP address that
### Can I use a third-party web application firewall (WAF) with B2C?
-To use your own web application firewall in front of Azure Front Door, you need to configure and validate that everything works correctly with your Azure AD B2C user flows, or custom polies.
+To use your own web application firewall in front of Azure Front Door, you need to configure and validate that everything works correctly with your Azure AD B2C user flows, or custom policies.
### Can my Azure Front Door instance be hosted in a different subscription than my Azure AD B2C tenant?
active-directory-b2c Supported Azure Ad Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/supported-azure-ad-features.md
Previously updated : 10/08/2021 Last updated : 02/04/2022
An Azure AD B2C tenant is different than an Azure Active Directory tenant, which
|Feature |Azure AD | Azure AD B2C |
|---|---|---|
-| [Groups](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md) | Groups can be used to manage administrative and user accounts.| Groups can be used to manage administrative accounts. [Consumer accounts](user-overview.md#consumer-user) can not be member of any group. |
+| [Groups](../active-directory/fundamentals/active-directory-groups-create-azure-portal.md) | Groups can be used to manage administrative and user accounts.| Groups can be used to manage administrative accounts. [Consumer accounts](user-overview.md#consumer-user) can't be members of any group, so you can't perform [group-based assignment of enterprise applications](../active-directory/manage-apps/assign-user-or-group-access-portal.md).|
| [Inviting External Identities guests](../active-directory/external-identities/add-users-administrator.md)| You can invite guest users and configure External Identities features such as federation and sign-in with Facebook and Google accounts. | You can invite only a Microsoft account or an Azure AD user as a guest to your Azure AD tenant for accessing applications or managing tenants. For [consumer accounts](user-overview.md#consumer-user), you use Azure AD B2C user flows and custom policies to manage users and sign-up or sign-in with external identity providers, such as Google or Facebook. |
| [Roles and administrators](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md)| Fully supported for administrative and user accounts. | Roles are not supported with [consumer accounts](user-overview.md#consumer-user). Consumer accounts don't have access to any Azure resources.|
| [Custom domain names](../active-directory/fundamentals/add-custom-domain.md) | You can use Azure AD custom domains for administrative accounts only. | [Consumer accounts](user-overview.md#consumer-user) can sign in with a username, phone number, or any email address. You can use [custom domains](custom-domain.md) in your redirect URLs.|
| [Conditional Access](../active-directory/conditional-access/overview.md) | Fully supported for administrative and user accounts. | A subset of Azure AD Conditional Access features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to configure Azure AD B2C [conditional access](conditional-access-user-flow.md).|
-| [Premium P1](https://azure.microsoft.com/pricing/details/active-directory) | Fully supported for Azure AD premium P1 features. For example, [Password Protection](../active-directory/authentication/concept-password-ban-bad.md), [Hybrid Identities](../active-directory/hybrid/whatis-hybrid-identity.md), [Conditional Access](../active-directory/roles/permissions-reference.md#), [Dynamic groups](../active-directory/enterprise-users/groups-create-rule.md), and more. | A subset of Azure AD Conditional Access features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md).|
-| [Premium P2](https://azure.microsoft.com/pricing/details/active-directory/) | Fully supported for Azure AD premium P2 features. For example, [Identity Protection](../active-directory/identity-protection/overview-identity-protection.md), and [Identity Governance](../active-directory/governance/identity-governance-overview.md). | A subset of Azure AD Identity Protection features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to [Investigate risk with Identity Protection](identity-protection-investigate-risk.md) and configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md). |
+| [Premium P1](https://azure.microsoft.com/pricing/details/active-directory) | Fully supported for Azure AD premium P1 features. For example, [Password Protection](../active-directory/authentication/concept-password-ban-bad.md), [Hybrid Identities](../active-directory/hybrid/whatis-hybrid-identity.md), [Conditional Access](../active-directory/roles/permissions-reference.md#), [Dynamic groups](../active-directory/enterprise-users/groups-create-rule.md), and more. | Azure AD B2C uses [Azure AD B2C Premium P1 license](https://azure.microsoft.com/pricing/details/active-directory/external-identities/), which is different from Azure AD premium P1. A subset of Azure AD Conditional Access features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md).|
+| [Premium P2](https://azure.microsoft.com/pricing/details/active-directory/) | Fully supported for Azure AD premium P2 features. For example, [Identity Protection](../active-directory/identity-protection/overview-identity-protection.md), and [Identity Governance](../active-directory/governance/identity-governance-overview.md). | Azure AD B2C uses [Azure AD B2C Premium P2 license](https://azure.microsoft.com/pricing/details/active-directory/external-identities/), which is different from Azure AD premium P2. A subset of Azure AD Identity Protection features is supported with [consumer accounts](user-overview.md#consumer-user). Learn how to [Investigate risk with Identity Protection](identity-protection-investigate-risk.md) and configure Azure AD B2C [Conditional Access](conditional-access-user-flow.md). |
> [!NOTE]
-> **Other Azure resources in your tenant:** <br>In an Azure AD B2C tenant, you can't provision other Azure resources such as virtual machines, Azure web apps, or Azure functions. You must create these resources in your Azure AD tenant.
+> **Other Azure resources in your tenant:** <br>In an Azure AD B2C tenant, you can't provision other Azure resources such as virtual machines, Azure web apps, or Azure functions. You must create these resources in your Azure AD tenant.
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/authentication/howto-sspr-deployment.md
For more information about pricing, see [Azure Active Directory pricing](https:/
| |[Azure AD password reset from the login screen for Windows 10](./howto-sspr-windows.md) |
| FAQ|[Password management frequently asked questions](./active-directory-passwords-faq.yml) |

### Solution architecture

The following example describes the password reset solution architecture for common hybrid environments.
You can help users register quickly by deploying SSPR alongside another popular
Before deploying SSPR, you may opt to determine the number and the average cost of each password reset call. You can use this data post deployment to show the value SSPR is bringing to the organization.
-#### Enable combined registration for SSPR and MFA
- ### Combined registration for SSPR and Azure AD Multi-Factor Authentication

We recommend that organizations use the [combined registration experience for Azure AD Multi-Factor Authentication and self-service password reset (SSPR)](howto-registration-mfa-sspr-combined.md). SSPR allows users to reset their password in a secure way using the same methods they use for Azure AD Multi-Factor Authentication. Combined registration is a single step for end users. To make sure you understand the functionality and end-user experience, see the [Combined security information registration concepts](concept-registration-mfa-sspr-combined.md).
active-directory Concept Conditional Access Cloud Apps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/concept-conditional-access-cloud-apps.md
Previously updated : 06/15/2021 Last updated : 02/03/2022
The Microsoft Azure Management application includes multiple services.
- Classic deployment model APIs - Azure PowerShell - Azure CLI
- - Visual Studio subscriptions administrator portal
- Azure DevOps - Azure Data Factory portal
+ - Azure Event Hubs
+ - Azure Service Bus
+ - [Azure SQL Database](../../azure-sql/database/conditional-access-configure.md)
+ - SQL Managed Instance
+ - Azure Synapse
+ - Visual Studio subscriptions administrator portal
> [!NOTE]
-> The Microsoft Azure Management application applies to Azure PowerShell, which calls the Azure Resource Manager API. It does not apply to Azure AD PowerShell, which calls Microsoft Graph.
+> The Microsoft Azure Management application applies to [Azure PowerShell](/powershell/azure/what-is-azure-powershell), which calls the [Azure Resource Manager API](../../azure-resource-manager/management/overview.md). It does not apply to [Azure AD PowerShell](/powershell/azure/active-directory/overview), which calls the [Microsoft Graph API](/graph/overview).
+
+For more information on how to set up a sample policy for Microsoft Azure Management, see [Conditional Access: Require MFA for Azure management](howto-conditional-access-policy-azure-management.md).
+
+For Azure Government, you should target the Azure Government Cloud Management API application.
### Other applications
active-directory Howto Conditional Access Policy Azure Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/conditional-access/howto-conditional-access-policy-azure-management.md
Previously updated : 11/05/2021 Last updated : 02/03/2022
Organizations can choose to deploy this policy using the steps outlined below or
The following steps will help create a Conditional Access policy that requires users who access the [Microsoft Azure Management](concept-conditional-access-cloud-apps.md#microsoft-azure-management) suite to perform multi-factor authentication.
+> [!CAUTION]
+> Make sure you understand how Conditional Access works before setting up a policy to manage access to Microsoft Azure Management. Make sure you don't create conditions that could block your own access to the portal.
+ 1. Sign in to the **Azure portal** as a global administrator, security administrator, or Conditional Access administrator.
+ 1. Browse to **Azure Active Directory** > **Security** > **Conditional Access**.
+ 1. Select **New policy**.
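As a hedged illustration (not part of the portal steps above), a policy like this can also be expressed as a Microsoft Graph `conditionalAccessPolicy` request body; the field names below follow the Graph conditional access resource, but verify them against the current Graph reference before relying on this sketch:

```javascript
// Sketch of a request body for
// POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
// (requires the Policy.ReadWrite.ConditionalAccess permission).
const policy = {
  displayName: "Require MFA for Azure management",
  // Start in report-only mode so a mistake can't lock you out of the portal.
  state: "enabledForReportingButNotEnforced",
  conditions: {
    users: { includeUsers: ["All"] },
    applications: {
      // 797f4846-ba00-4fd7-ba43-dac1f8f63013 is the well-known appId of the
      // Windows Azure Service Management API (Microsoft Azure Management).
      includeApplications: ["797f4846-ba00-4fd7-ba43-dac1f8f63013"],
    },
  },
  grantControls: { operator: "OR", builtInControls: ["mfa"] },
};

console.log(JSON.stringify(policy, null, 2));
```

Switching `state` to `"enabled"` enforces the policy once report-only results look correct.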
active-directory Active Directory Signing Key Rollover https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/active-directory-signing-key-rollover.md
This guidance is **not** applicable for:
* On-premises applications published via application proxy don't have to worry about signing keys. ### <a name="nativeclient"></a>Native client applications accessing resources
-Applications that are only accessing resources (i.e Microsoft Graph, KeyVault, Outlook API, and other Microsoft APIs) generally only obtain a token and pass it along to the resource owner. Given that they are not protecting any resources, they do not inspect the token and therefore do not need to ensure it is properly signed.
+Applications that are only accessing resources (for example, Microsoft Graph, KeyVault, Outlook API, and other Microsoft APIs) generally only obtain a token and pass it along to the resource owner. Given that they are not protecting any resources, they do not inspect the token and therefore do not need to ensure it is properly signed.
Native client applications, whether desktop or mobile, fall into this category and are thus not impacted by the rollover. ### <a name="webclient"></a>Web applications / APIs accessing resources
-Applications that are only accessing resources (i.e Microsoft Graph, KeyVault, Outlook API, and other Microsoft APIs) generally only obtain a token and pass it along to the resource owner. Given that they are not protecting any resources, they do not inspect the token and therefore do not need to ensure it is properly signed.
+Applications that are only accessing resources (such as Microsoft Graph, KeyVault, Outlook API, and other Microsoft APIs) generally only obtain a token and pass it along to the resource owner. Given that they are not protecting any resources, they do not inspect the token and therefore do not need to ensure it is properly signed.
Web applications and web APIs that are using the app-only flow (client credentials / client certificate) to request tokens fall into this category and are thus not impacted by the rollover.
active-directory Msal Node Extensions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/msal-node-extensions.md
+
+ Title: "Learn about Microsoft Authentication Extensions for Node | Azure"
+
+description: The Microsoft Authentication Extensions for Node enables application developers to perform cross-platform token cache serialization and persistence. It gives extra support to the Microsoft Authentication Library for Node (MSAL Node).
+ Last updated : 02/04/2022
+#Customer intent: As an application developer, I want to learn how to use the Microsoft Authentication Extensions for Node to perform cross-platform token cache serialization and persistence.
++
+# Microsoft Authentication Extensions for Node
+
+The Microsoft Authentication Extensions for Node enables developers to perform cross-platform token cache serialization and persistence to disk. It gives extra support to the Microsoft Authentication Library (MSAL) for Node.
+
+The [MSAL for Node](tutorial-v2-nodejs-webapp-msal.md) supports an in-memory cache by default and provides the ICachePlugin interface to perform cache serialization, but doesn't provide a default way of storing the token cache to disk. The Microsoft Authentication Extensions for Node is the default implementation for persisting cache to disk across different platforms.
+
+The Microsoft Authentication Extensions for Node support the following platforms:
+
+- Windows - Data protection API (DPAPI) is used for protection.
+- Mac - The Mac Keychain is used.
+- Linux - LibSecret is used for storing secrets in the "Secret Service".
+
+## Installation
+
+The `msal-node-extensions` package is available on Node Package Manager (NPM).
+
+```bash
+npm i @azure/msal-node-extensions --save
+```
+
+## Configure the token cache
+
+Here's an example of code that uses Microsoft Authentication Extensions for Node to configure the token cache.
+
+```javascript
+const {
+ DataProtectionScope,
+ Environment,
+ PersistenceCreator,
+ PersistenceCachePlugin,
+} = require("@azure/msal-node-extensions");
+const path = require("path");
+const msal = require("@azure/msal-node");
+
+// You can use the helper functions provided through the Environment class to construct your cache path
+// The helper functions provide consistent implementations across Windows, Mac and Linux.
+const cachePath = path.join(Environment.getUserRootDirectory(), "./cache.json");
+
+const persistenceConfiguration = {
+ cachePath,
+ dataProtectionScope: DataProtectionScope.CurrentUser,
+ serviceName: "<SERVICE-NAME>",
+ accountName: "<ACCOUNT-NAME>",
+ usePlaintextFileOnLinux: false,
+};
+
+// The PersistenceCreator abstracts away much of the complexity by doing the following for you:
+// 1. Detects the environment the application is running on and initializes the right persistence instance for the environment.
+// 2. Performs persistence validation for you.
+// 3. Performs any fallbacks if necessary.
+PersistenceCreator.createPersistence(persistenceConfiguration).then(
+ async (persistence) => {
+ const publicClientConfig = {
+ auth: {
+ clientId: "<CLIENT-ID>",
+ authority: "<AUTHORITY>",
+ },
+
+ // This hooks up the cross-platform cache into MSAL
+ cache: {
+ cachePlugin: new PersistenceCachePlugin(persistence),
+ },
+ };
+
+ const pca = new msal.PublicClientApplication(publicClientConfig);
+
+ // Use the public client application as required...
+ }
+);
+```
+
+The following table provides an explanation for all the arguments for the persistence configuration.
+
+| Field Name | Description | Required For |
+| -- | | - |
+| cachePath | The path to the lock file the library uses to synchronize the reads and the writes | Windows, Mac, and Linux |
| dataProtectionScope | Specifies the scope of the data protection on Windows: either the current user or the local machine. | Windows |
+| serviceName | Specifies the service name to be used on Mac and/or Linux | Mac and Linux |
+| accountName | Specifies the account name to be used on Mac and/or Linux | Mac and Linux |
| usePlaintextFileOnLinux | The flag to fall back to plain text on Linux if LibSecret fails. Defaults to `false`. | Linux |
+
+## Next steps
+
+For more information about Microsoft Authentication Extensions for Node and MSAL Node, see:
+
+- [Microsoft Authentication Extensions for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/extensions/msal-node-extensions)
+- [Microsoft Authentication Library for Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node)
active-directory Quickstart V2 Android https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-android.md
# Quickstart: Sign in users and call the Microsoft Graph API from an Android app
-In this quickstart, you download and run a code sample that demonstrates how an Android application can sign in users and get an access token to call the Microsoft Graph API.
-
-See [How the sample works](#how-the-sample-works) for an illustration.
-
-Applications must be represented by an app object in Azure Active Directory so that the Microsoft identity platform can provide tokens to your application.
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Android Studio
-* Android 16+
-
-### Step 1: Configure your application in the Azure portal
-For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
-> [!div class="nextstepaction"]
-> [Make these changes for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
-
-### Step 2: Download the project
-
-Run the project using Android Studio.
-> [!div class="nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip)
--
-### Step 3: Your app is configured and ready to run
-
-We have configured your project with values of your app's properties and it's ready to run.
-The sample app starts on the **Single Account Mode** screen. A default scope, **user.read**, is provided, which is used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is provided by default. You can change both of these if you wish.
-
-![MSAL sample app showing single and multiple account usage](./media/quickstart-v2-android/quickstart-sample-app.png)
-
-Use the app menu to change between single and multiple account modes.
-
-In single account mode, sign in using a work or home account:
-
-1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
-2. Once signed in, select **Get graph data silently** to make a call to the Microsoft Graph API without prompting the user for credentials again. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
-
-In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
-
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-## How the sample works
-![Screenshot of the sample app](media/quickstart-v2-android/android-intro.svg)
--
-The code is organized into fragments that show how to write a single and multiple accounts MSAL app. The code files are organized as follows:
-
-| File | Demonstrates |
-|||
-| MainActivity | Manages the UI |
-| MSGraphRequestWrapper | Calls the Microsoft Graph API using the token provided by MSAL |
-| MultipleAccountModeFragment | Initializes a multi-account application, loads a user account, and gets a token to call the Microsoft Graph API |
-| SingleAccountModeFragment | Initializes a single-account application, loads a user account, and gets a token to call the Microsoft Graph API |
-| res/auth_config_multiple_account.json | The multiple account configuration file |
-| res/auth_config_single_account.json | The single account configuration file |
-| Gradle Scripts/build.gradle (Module:app) | The MSAL library dependencies are added here |
-
-We'll now look at these files in more detail and call out the MSAL-specific code in each.
-
-### Adding MSAL to the app
-
-MSAL ([com.microsoft.identity.client](https://javadoc.io/doc/com.microsoft.identity.client/msal)) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. Gradle 3.0+ installs the library when you add the following to **Gradle Scripts** > **build.gradle (Module: app)** under **Dependencies**:
-
-```gradle
-dependencies {
- ...
- implementation 'com.microsoft.identity.client:msal:2.+'
- ...
-}
-```
-
-This instructs Gradle to download and build MSAL from maven central.
-
-You must also add references to maven to the **allprojects** > **repositories** portion of the **build.gradle (Module: app)** like so:
-
-```gradle
-allprojects {
- repositories {
- mavenCentral()
- google()
- mavenLocal()
- maven {
- url 'https://pkgs.dev.azure.com/MicrosoftDeviceSDK/DuoSDK-Public/_packaging/Duo-SDK-Feed/maven/v1'
- }
- maven {
- name "vsts-maven-adal-android"
- url "https://identitydivision.pkgs.visualstudio.com/_packaging/AndroidADAL/maven/v1"
- credentials {
- username System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") : project.findProperty("vstsUsername")
- password System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") : project.findProperty("vstsMavenAccessToken")
- }
- }
- jcenter()
- }
-}
-```
-
-### MSAL imports
-
-The imports that are relevant to the MSAL library are `com.microsoft.identity.client.*`. For example, you'll see `import com.microsoft.identity.client.PublicClientApplication;`, which imports the `PublicClientApplication` class that represents your public client application.
-
-### SingleAccountModeFragment.java
-
-This file demonstrates how to create a single account MSAL app and call a Microsoft Graph API.
-
-Single account apps are only used by a single user. For example, you might just have one account that you sign into your mapping app with.
-
-#### Single account MSAL initialization
-
-In `SingleAccountModeFragment.java`, in `onCreateView()`, a single account `PublicClientApplication` is created using the config information stored in the `auth_config_single_account.json` file. This is how you initialize the MSAL library for use in a single-account MSAL app:
-
-```java
-...
-// Creates a PublicClientApplication object with res/raw/auth_config_single_account.json
-PublicClientApplication.createSingleAccountPublicClientApplication(getContext(),
- R.raw.auth_config_single_account,
- new IPublicClientApplication.ISingleAccountApplicationCreatedListener() {
- @Override
- public void onCreated(ISingleAccountPublicClientApplication application) {
- /**
- * This test app assumes that the app is only going to support one account.
- * This requires "account_mode" : "SINGLE" in the config json file.
- **/
- mSingleAccountApp = application;
- loadAccount();
- }
-
- @Override
- public void onError(MsalException exception) {
- displayError(exception);
- }
- });
-```
-
-#### Sign in a user
-
-In `SingleAccountModeFragment.java`, the code to sign in a user is in `initializeUI()`, in the `signInButton` click handler.
-
-Call `signIn()` before trying to acquire tokens. `signIn()` behaves as though `acquireToken()` is called, resulting in an interactive prompt for the user to sign in.
-
-Signing in a user is an asynchronous operation. A callback is passed that calls the Microsoft Graph API and updates the UI once the user signs in:
-
-```java
-mSingleAccountApp.signIn(getActivity(), null, getScopes(), getAuthInteractiveCallback());
-```
-
-#### Sign out a user
-
-In `SingleAccountModeFragment.java`, the code to sign out a user is in `initializeUI()`, in the `signOutButton` click handler. Signing a user out is an asynchronous operation. Signing the user out also clears the token cache for that account. A callback is created to update the UI once the user account is signed out:
-
-```java
-mSingleAccountApp.signOut(new ISingleAccountPublicClientApplication.SignOutCallback() {
- @Override
- public void onSignOut() {
- updateUI(null);
- performOperationOnSignOut();
- }
-
- @Override
- public void onError(@NonNull MsalException exception) {
- displayError(exception);
- }
-});
-```
-
-#### Get a token interactively or silently
-
-To present the fewest prompts to the user, you'll typically get a token silently. Then, if there's an error, attempt to get the token interactively. The first time the app calls `signIn()`, it effectively acts as a call to `acquireToken()`, which will prompt the user for credentials.
-
-Some situations when the user may be prompted to select their account, enter their credentials, or consent to the permissions your app has requested are:
-
-* The first time the user signs in to the application
-* If a user resets their password, they'll need to enter their credentials
-* If consent is revoked
-* If your app explicitly requires consent
-* When your application is requesting access to a resource for the first time
-* When MFA or other Conditional Access policies are required
-
-The code to get a token interactively, that is with UI that will involve the user, is in `SingleAccountModeFragment.java`, in `initializeUI()`, in the `callGraphApiInteractiveButton` click handler:
-
-```java
-/**
- * If acquireTokenSilent() returns an error that requires an interaction (MsalUiRequiredException),
- * invoke acquireToken() to have the user resolve the interrupt interactively.
- *
- * Some example scenarios are
- * - password change
- * - the resource you're acquiring a token for has a stricter set of requirements than your Single Sign-On refresh token.
- * - you're introducing a new scope which the user has never consented to.
- **/
-mSingleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
-```
-
-If the user has already signed in, `acquireTokenSilentAsync()` allows apps to request tokens silently as shown in `initializeUI()`, in the `callGraphApiSilentButton` click handler:
-
-```java
-/**
- * Once you've signed the user in,
- * you can perform acquireTokenSilent to obtain resources without interrupting the user.
- **/
- mSingleAccountApp.acquireTokenSilentAsync(getScopes(), AUTHORITY, getAuthSilentCallback());
-```
-
-#### Load an account
-
-The code to load an account is in `SingleAccountModeFragment.java` in `loadAccount()`. Loading the user's account is an asynchronous operation, so callbacks to handle when the account loads, changes, or an error occurs are passed to MSAL. The following code also handles `onAccountChanged()`, which occurs when an account is removed, the user changes to another account, and so on.
-
-```java
-private void loadAccount() {
- ...
-
- mSingleAccountApp.getCurrentAccountAsync(new ISingleAccountPublicClientApplication.CurrentAccountCallback() {
- @Override
- public void onAccountLoaded(@Nullable IAccount activeAccount) {
- // You can use the account data to update your UI or your app database.
- updateUI(activeAccount);
- }
-
- @Override
- public void onAccountChanged(@Nullable IAccount priorAccount, @Nullable IAccount currentAccount) {
- if (currentAccount == null) {
- // Perform a cleanup task as the signed-in account changed.
- performOperationOnSignOut();
- }
- }
-
- @Override
- public void onError(@NonNull MsalException exception) {
- displayError(exception);
- }
- });
-```
-
-#### Call Microsoft Graph
-
-When a user is signed in, the call to Microsoft Graph is made via an HTTP request by `callGraphAPI()`, which is defined in `SingleAccountModeFragment.java`. This function is a wrapper that simplifies the sample by performing tasks such as getting the access token from the `authenticationResult`, packaging the call to the MSGraphRequestWrapper, and displaying the results of the call.
-
-```java
-private void callGraphAPI(final IAuthenticationResult authenticationResult) {
- MSGraphRequestWrapper.callGraphAPIUsingVolley(
- getContext(),
- graphResourceTextView.getText().toString(),
- authenticationResult.getAccessToken(),
- new Response.Listener<JSONObject>() {
- @Override
- public void onResponse(JSONObject response) {
- /* Successfully called graph, process data and send to UI */
- ...
- }
- },
- new Response.ErrorListener() {
- @Override
- public void onErrorResponse(VolleyError error) {
- ...
- }
- });
-}
-```
-
-### auth_config_single_account.json
-
-This is the configuration file for an MSAL app that uses a single account.
-
-See [Understand the Android MSAL configuration file](msal-configuration.md) for an explanation of these fields.
-
-Note the presence of `"account_mode" : "SINGLE"`, which configures this app to use a single account.
-
-`"client_id"` is preconfigured to use an app object registration that Microsoft maintains.
-`"redirect_uri"` is preconfigured to use the signing key provided with the code sample.
-
-```json
-{
- "client_id" : "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
- "authorization_user_agent" : "DEFAULT",
- "redirect_uri" : "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
- "account_mode" : "SINGLE",
- "broker_redirect_uri_registered": true,
- "authorities" : [
- {
- "type": "AAD",
- "audience": {
- "type": "AzureADandPersonalMicrosoftAccount",
- "tenant_id": "common"
- }
- }
- ]
-}
-```
-
-### MultipleAccountModeFragment.java
-
-This file demonstrates how to create a multiple account MSAL app and call a Microsoft Graph API.
-
-An example of a multiple account app is a mail app that allows you to work with multiple user accounts such as a work account and a personal account.
-
-#### Multiple account MSAL initialization
-
-In the `MultipleAccountModeFragment.java` file, in `onCreateView()`, a multiple account app object (`IMultipleAccountPublicClientApplication`) is created using the config information stored in the `auth_config_multiple_account.json file`:
-
-```java
-// Creates a PublicClientApplication object with res/raw/auth_config_multiple_account.json
-PublicClientApplication.createMultipleAccountPublicClientApplication(getContext(),
- R.raw.auth_config_multiple_account,
- new IPublicClientApplication.IMultipleAccountApplicationCreatedListener() {
- @Override
- public void onCreated(IMultipleAccountPublicClientApplication application) {
- mMultipleAccountApp = application;
- loadAccounts();
- }
-
- @Override
- public void onError(MsalException exception) {
- ...
- }
- });
-```
-
-The created `MultipleAccountPublicClientApplication` object is stored in a class member variable so that it can be used to interact with the MSAL library to acquire tokens and load and remove the user account.
-
-#### Load an account
-
-Multiple account apps usually call `getAccounts()` to select the account to use for MSAL operations. The code to load an account is in the `MultipleAccountModeFragment.java` file, in `loadAccounts()`. Loading the user's account is an asynchronous operation, so a callback handles the cases where the account is loaded, the account changes, or an error occurs.
-
-```java
-/**
- * Load currently signed-in accounts, if there's any.
- **/
-private void loadAccounts() {
- if (mMultipleAccountApp == null) {
- return;
- }
-
- mMultipleAccountApp.getAccounts(new IPublicClientApplication.LoadAccountsCallback() {
- @Override
- public void onTaskCompleted(final List<IAccount> result) {
- // You can use the account data to update your UI or your app database.
- accountList = result;
- updateUI(accountList);
- }
-
- @Override
- public void onError(MsalException exception) {
- displayError(exception);
- }
- });
-}
-```
-
-#### Get a token interactively or silently
-
-Some situations when the user may be prompted to select their account, enter their credentials, or consent to the permissions your app has requested are:
-
-* The first time users sign in to the application
-* If a user resets their password, they'll need to enter their credentials
-* If consent is revoked
-* If your app explicitly requires consent
-* When your application is requesting access to a resource for the first time
-* When MFA or other Conditional Access policies are required
-
-Multiple account apps should typically acquire tokens interactively, that is with UI that involves the user, with a call to `acquireToken()`. The code to get a token interactively is in the `MultipleAccountModeFragment.java` file in `initializeUI()`, in the `callGraphApiInteractiveButton` click handler:
-
-```java
-/**
- * Acquire token interactively. It will also create an account object for the silent call as a result (to be obtained by getAccount()).
- *
- * If acquireTokenSilent() returns an error that requires an interaction,
- * invoke acquireToken() to have the user resolve the interrupt interactively.
- *
- * Some example scenarios are
- * - password change
- * - the resource you're acquiring a token for has a stricter set of requirements than your SSO refresh token.
- * - you're introducing a new scope which the user has never consented for.
- **/
-mMultipleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
-```
-
-Apps shouldn't require the user to sign in every time they request a token. If the user has already signed in, `acquireTokenSilentAsync()` allows apps to request tokens without prompting the user, as shown in the `MultipleAccountModeFragment.java` file, in `initializeUI()` in the `callGraphApiSilentButton` click handler:
-
-```java
-/**
- * Performs acquireToken without interrupting the user.
- *
- * This requires an account object of the account you're obtaining a token for.
- * (can be obtained via getAccount()).
- */
-mMultipleAccountApp.acquireTokenSilentAsync(getScopes(),
- accountList.get(accountListSpinner.getSelectedItemPosition()),
- AUTHORITY,
- getAuthSilentCallback());
-```
-
-#### Remove an account
-
-The code to remove an account, and any cached tokens for the account, is in the `MultipleAccountModeFragment.java` file in `initializeUI()` in the handler for the remove account button. Before you can remove an account, you need an account object, which you obtain from MSAL methods like `getAccounts()` and `acquireToken()`. Because removing an account is an asynchronous operation, the `onRemoved` callback is supplied to update the UI.
-
-```java
-/**
- * Removes the selected account and cached tokens from this app (or device, if the device is in shared mode).
- **/
-mMultipleAccountApp.removeAccount(accountList.get(accountListSpinner.getSelectedItemPosition()),
- new IMultipleAccountPublicClientApplication.RemoveAccountCallback() {
- @Override
- public void onRemoved() {
- ...
- /* Reload account asynchronously to get the up-to-date list. */
- loadAccounts();
- }
-
- @Override
- public void onError(@NonNull MsalException exception) {
- displayError(exception);
- }
- });
-```
-
-### auth_config_multiple_account.json
-
-This is the configuration file for an MSAL app that uses multiple accounts.
-
-See [Understand the Android MSAL configuration file](msal-configuration.md) for an explanation of the various fields.
-
-Unlike the [auth_config_single_account.json](#auth_config_single_accountjson) configuration file, this config file has `"account_mode" : "MULTIPLE"` instead of `"account_mode" : "SINGLE"` because this is a multiple account app.
-
-`"client_id"` is preconfigured to use an app object registration that Microsoft maintains.
-`"redirect_uri"` is preconfigured to use the signing key provided with the code sample.
-
-```json
-{
- "client_id" : "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
- "authorization_user_agent" : "DEFAULT",
- "redirect_uri" : "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
- "account_mode" : "MULTIPLE",
- "broker_redirect_uri_registered": true,
- "authorities" : [
- {
- "type": "AAD",
- "audience": {
- "type": "AzureADandPersonalMicrosoftAccount",
- "tenant_id": "common"
- }
- }
- ]
-}
-```
--
-## Next steps
-
-Move on to the Android tutorial in which you build an Android app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API.
-
-> [!div class="nextstepaction"]
-> [Tutorial: Sign in users and call the Microsoft Graph from an Android application](tutorial-v2-android.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. We're currently working on a fix, but for now, please use the link below - it should take you to the right article:
+>
+> > [Quickstart: Android app with user sign-in](mobile-app-quickstart.md?pivots=devlang-android)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how an Android application can sign in users and get an access token to call the Microsoft Graph API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> Applications must be represented by an app object in Azure Active Directory so that the Microsoft identity platform can provide tokens to your application.
+>
+> ## Prerequisites
+>
+> * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> * Android Studio
+> * Android 16+
+>
+> ### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+> > [!div class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
+>
+> ### Step 2: Download the project
+>
+> Run the project using Android Studio.
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip)
+>
+>
+> ### Step 3: Your app is configured and ready to run
+>
+> We have configured your project with values of your app's properties and it's ready to run.
+> The sample app starts on the **Single Account Mode** screen. The **user.read** scope is provided by default; it's used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is also provided by default. You can change both of these if you wish.
+>
+> ![MSAL sample app showing single and multiple account usage](./media/quickstart-v2-android/quickstart-sample-app.png)
+>
+> Use the app menu to change between single and multiple account modes.
+>
+> In single account mode, sign in using a work or home account:
+>
+> 1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API at the bottom of the screen.
+> 2. Once signed in, select **Get graph data silently** to call the Microsoft Graph API without prompting the user for credentials again. You'll see the output from the call at the bottom of the screen.
+>
+> In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> ## How the sample works
+> ![Screenshot of the sample app](media/quickstart-v2-android/android-intro.svg)
+>
+>
+> The code is organized into fragments that show how to write single and multiple account MSAL apps. The code files are organized as follows:
+>
+> | File | Demonstrates |
+> |---|---|
+> | MainActivity | Manages the UI |
+> | MSGraphRequestWrapper | Calls the Microsoft Graph API using the token provided by MSAL |
+> | MultipleAccountModeFragment | Initializes a multi-account application, loads a user account, and gets a token to call the Microsoft Graph API |
+> | SingleAccountModeFragment | Initializes a single-account application, loads a user account, and gets a token to call the Microsoft Graph API |
+> | res/auth_config_multiple_account.json | The multiple account configuration file |
+> | res/auth_config_single_account.json | The single account configuration file |
+> | Gradle Scripts/build.gradle (Module:app) | The MSAL library dependencies are added here |
+>
+> We'll now look at these files in more detail and call out the MSAL-specific code in each.
+>
+> ### Adding MSAL to the app
+>
+> MSAL ([com.microsoft.identity.client](https://javadoc.io/doc/com.microsoft.identity.client/msal)) is the library used to sign in users and request tokens used to access an API protected by the Microsoft identity platform. Gradle 3.0+ installs the library when you add the following to **Gradle Scripts** > **build.gradle (Module: app)** under **Dependencies**:
+>
+> ```java
+> dependencies {
+> ...
+> implementation 'com.microsoft.identity.client:msal:2.+'
+> ...
+> }
+> ```
+>
+> This instructs Gradle to download and build MSAL from Maven Central.
+>
+> You must also add references to Maven to the **allprojects** > **repositories** portion of the **build.gradle (Module: app)** file, like so:
+>
+> ```java
+> allprojects {
+> repositories {
+> mavenCentral()
+> google()
+> mavenLocal()
+> maven {
+> url 'https://pkgs.dev.azure.com/MicrosoftDeviceSDK/DuoSDK-Public/_packaging/Duo-SDK-Feed/maven/v1'
+> }
+> maven {
+> name "vsts-maven-adal-android"
+> url "https://identitydivision.pkgs.visualstudio.com/_packaging/AndroidADAL/maven/v1"
+> credentials {
+> username System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") : project.findProperty("vstsUsername")
+> password System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") : project.findProperty("vstsMavenAccessToken")
+> }
+> }
+> jcenter()
+> }
+> }
+> ```
+>
+> ### MSAL imports
+>
+> The imports that are relevant to the MSAL library are `com.microsoft.identity.client.*`. For example, you'll see `import com.microsoft.identity.client.PublicClientApplication;`, which imports the `PublicClientApplication` class that represents your public client application.
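+>
+> As an illustrative sketch (the exact import list varies with the sample version, so treat this as an assumption rather than the sample's verbatim code), the MSAL-related imports at the top of a fragment such as `SingleAccountModeFragment.java` look like this:
+>
+> ```java
+> // MSAL classes referenced by the single account fragment (illustrative selection)
+> import com.microsoft.identity.client.AuthenticationCallback;
+> import com.microsoft.identity.client.IAccount;
+> import com.microsoft.identity.client.IAuthenticationResult;
+> import com.microsoft.identity.client.IPublicClientApplication;
+> import com.microsoft.identity.client.ISingleAccountPublicClientApplication;
+> import com.microsoft.identity.client.PublicClientApplication;
+> import com.microsoft.identity.client.SilentAuthenticationCallback;
+> import com.microsoft.identity.client.exception.MsalException;
+> ```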
+>
+> ### SingleAccountModeFragment.java
+>
+> This file demonstrates how to create a single account MSAL app and call a Microsoft Graph API.
+>
+> Single account apps are only used by a single user. For example, you might just have one account that you sign into your mapping app with.
+>
+> #### Single account MSAL initialization
+>
+> In `SingleAccountModeFragment.java`, in `onCreateView()`, a single account `PublicClientApplication` is created using the config information stored in the `auth_config_single_account.json` file. This is how you initialize the MSAL library for use in a single-account MSAL app:
+>
+> ```java
+> ...
+> // Creates a PublicClientApplication object with res/raw/auth_config_single_account.json
+> PublicClientApplication.createSingleAccountPublicClientApplication(getContext(),
+> R.raw.auth_config_single_account,
+> new IPublicClientApplication.ISingleAccountApplicationCreatedListener() {
+> @Override
+> public void onCreated(ISingleAccountPublicClientApplication application) {
+> /**
+> * This test app assumes that the app is only going to support one account.
+> * This requires "account_mode" : "SINGLE" in the config json file.
+> **/
+> mSingleAccountApp = application;
+> loadAccount();
+> }
+>
+> @Override
+> public void onError(MsalException exception) {
+> displayError(exception);
+> }
+> });
+> ```
+>
+> #### Sign in a user
+>
+> In `SingleAccountModeFragment.java`, the code to sign in a user is in `initializeUI()`, in the `signInButton` click handler.
+>
+> Call `signIn()` before trying to acquire tokens. `signIn()` behaves as though `acquireToken()` is called, resulting in an interactive prompt for the user to sign in.
+>
+> Signing in a user is an asynchronous operation. A callback is passed that calls the Microsoft Graph API and updates the UI once the user signs in:
+>
+> ```java
+> mSingleAccountApp.signIn(getActivity(), null, getScopes(), getAuthInteractiveCallback());
+> ```
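+>
+> The sample's `getAuthInteractiveCallback()` helper isn't shown in this article. As a hedged sketch of what such a helper might return (the MSAL `AuthenticationCallback` interface defines `onSuccess`, `onError`, and `onCancel`; the `updateUI`, `callGraphAPI`, and `displayError` helpers are the sample's own, and the exact body here is an illustration, not the sample's verbatim code):
+>
+> ```java
+> private AuthenticationCallback getAuthInteractiveCallback() {
+>     return new AuthenticationCallback() {
+>         @Override
+>         public void onSuccess(IAuthenticationResult authenticationResult) {
+>             // Sign-in or interactive token acquisition succeeded:
+>             // update the UI and call Microsoft Graph with the result.
+>             updateUI(authenticationResult.getAccount());
+>             callGraphAPI(authenticationResult);
+>         }
+>
+>         @Override
+>         public void onError(MsalException exception) {
+>             displayError(exception);
+>         }
+>
+>         @Override
+>         public void onCancel() {
+>             // The user canceled the sign-in flow.
+>         }
+>     };
+> }
+> ```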
+>
+> #### Sign out a user
+>
+> In `SingleAccountModeFragment.java`, the code to sign out a user is in `initializeUI()`, in the `signOutButton` click handler. Signing a user out is an asynchronous operation. Signing the user out also clears the token cache for that account. A callback is created to update the UI once the user account is signed out:
+>
+> ```java
+> mSingleAccountApp.signOut(new ISingleAccountPublicClientApplication.SignOutCallback() {
+> @Override
+> public void onSignOut() {
+> updateUI(null);
+> performOperationOnSignOut();
+> }
+>
+> @Override
+> public void onError(@NonNull MsalException exception) {
+> displayError(exception);
+> }
+> });
+> ```
+>
+> #### Get a token interactively or silently
+>
+> To present the fewest number of prompts to the user, you'll typically get a token silently. Then, if there's an error, attempt to get a token interactively. The first time the app calls `signIn()`, it effectively acts as a call to `acquireToken()`, which will prompt the user for credentials.
+>
+> Some situations when the user may be prompted to select their account, enter their credentials, or consent to the permissions your app has requested are:
+>
+> * The first time the user signs in to the application
+> * If a user resets their password, they'll need to enter their credentials
+> * If consent is revoked
+> * If your app explicitly requires consent
+> * When your application is requesting access to a resource for the first time
+> * When MFA or other Conditional Access policies are required
+>
+> The code to get a token interactively, that is with UI that will involve the user, is in `SingleAccountModeFragment.java`, in `initializeUI()`, in the `callGraphApiInteractiveButton` click handler:
+>
+> ```java
+> /**
+> * If acquireTokenSilent() returns an error that requires an interaction (MsalUiRequiredException),
+> * invoke acquireToken() to have the user resolve the interrupt interactively.
+> *
+> * Some example scenarios are
+> * - password change
+> * - the resource you're acquiring a token for has a stricter set of requirements than your Single Sign-On refresh token.
+> * - you're introducing a new scope which the user has never consented for.
+> **/
+> mSingleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+> ```
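+>
+> Both the interactive and silent calls pass `getScopes()`. In the sample this helper reads the scope text field in the UI; a minimal stand-in (an assumption for illustration, not the sample's exact code) could be:
+>
+> ```java
+> private String[] getScopes() {
+>     // "user.read" is the default scope used to read the signed-in
+>     // user's profile from Microsoft Graph.
+>     return new String[]{"user.read"};
+> }
+> ```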
+>
+> If the user has already signed in, `acquireTokenSilentAsync()` allows apps to request tokens silently as shown in `initializeUI()`, in the `callGraphApiSilentButton` click handler:
+>
+> ```java
+> /**
+> * Once you've signed the user in,
+> * you can perform acquireTokenSilent to obtain resources without interrupting the user.
+> **/
+> mSingleAccountApp.acquireTokenSilentAsync(getScopes(), AUTHORITY, getAuthSilentCallback());
+> ```
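+>
+> The `getAuthSilentCallback()` helper isn't shown in the article either. A sketch of it, assuming the MSAL `SilentAuthenticationCallback` interface (`onSuccess`/`onError`) and the sample's `callGraphAPI` and `displayError` helpers (illustrative only, not the sample's verbatim code):
+>
+> ```java
+> private SilentAuthenticationCallback getAuthSilentCallback() {
+>     return new SilentAuthenticationCallback() {
+>         @Override
+>         public void onSuccess(IAuthenticationResult authenticationResult) {
+>             // Token acquired without user interaction; call Microsoft Graph.
+>             callGraphAPI(authenticationResult);
+>         }
+>
+>         @Override
+>         public void onError(MsalException exception) {
+>             // An MsalUiRequiredException here means the app should fall back
+>             // to acquireToken() and prompt the user interactively.
+>             displayError(exception);
+>         }
+>     };
+> }
+> ```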
+>
+> #### Load an account
+>
+> The code to load an account is in `SingleAccountModeFragment.java` in `loadAccount()`. Loading the user's account is an asynchronous operation, so a callback that handles account load, account change, and error events is passed to MSAL. The following code also handles `onAccountChanged()`, which occurs when an account is removed, the user changes to another account, and so on.
+>
+> ```java
+> private void loadAccount() {
+> ...
+>
+> mSingleAccountApp.getCurrentAccountAsync(new ISingleAccountPublicClientApplication.CurrentAccountCallback() {
+> @Override
+> public void onAccountLoaded(@Nullable IAccount activeAccount) {
+> // You can use the account data to update your UI or your app database.
+> updateUI(activeAccount);
+> }
+>
+> @Override
+> public void onAccountChanged(@Nullable IAccount priorAccount, @Nullable IAccount currentAccount) {
+> if (currentAccount == null) {
+> // Perform a cleanup task as the signed-in account changed.
+> performOperationOnSignOut();
+> }
+> }
+>
+> @Override
+> public void onError(@NonNull MsalException exception) {
+> displayError(exception);
+> }
+> });
+> }
+> ```
+>
+> #### Call Microsoft Graph
+>
+> When a user is signed in, the call to Microsoft Graph is made via an HTTP request by `callGraphAPI()`, which is defined in `SingleAccountModeFragment.java`. This function is a wrapper that simplifies the sample by handling tasks such as getting the access token from `authenticationResult`, packaging the call to `MSGraphRequestWrapper`, and displaying the results of the call.
+>
+> ```java
+> private void callGraphAPI(final IAuthenticationResult authenticationResult) {
+> MSGraphRequestWrapper.callGraphAPIUsingVolley(
+> getContext(),
+> graphResourceTextView.getText().toString(),
+> authenticationResult.getAccessToken(),
+> new Response.Listener<JSONObject>() {
+> @Override
+> public void onResponse(JSONObject response) {
+> /* Successfully called graph, process data and send to UI */
+> ...
+> }
+> },
+> new Response.ErrorListener() {
+> @Override
+> public void onErrorResponse(VolleyError error) {
+> ...
+> }
+> });
+> }
+> ```
+>
+> ### auth_config_single_account.json
+>
+> This is the configuration file for an MSAL app that uses a single account.
+>
+> See [Understand the Android MSAL configuration file](msal-configuration.md) for an explanation of these fields.
+>
+> Note the presence of `"account_mode" : "SINGLE"`, which configures this app to use a single account.
+>
+> `"client_id"` is preconfigured to use an app object registration that Microsoft maintains.
+> `"redirect_uri"` is preconfigured to use the signing key provided with the code sample.
+>
+> ```json
+> {
+> "client_id" : "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
+> "authorization_user_agent" : "DEFAULT",
+> "redirect_uri" : "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
+> "account_mode" : "SINGLE",
+> "broker_redirect_uri_registered": true,
+> "authorities" : [
+> {
+> "type": "AAD",
+> "audience": {
+> "type": "AzureADandPersonalMicrosoftAccount",
+> "tenant_id": "common"
+> }
+> }
+> ]
+> }
+> ```
+>
+> ### MultipleAccountModeFragment.java
+>
+> This file demonstrates how to create a multiple account MSAL app and call a Microsoft Graph API.
+>
+> An example of a multiple account app is a mail app that allows you to work with multiple user accounts such as a work account and a personal account.
+>
+> #### Multiple account MSAL initialization
+>
+> In the `MultipleAccountModeFragment.java` file, in `onCreateView()`, a multiple account app object (`IMultipleAccountPublicClientApplication`) is created using the config information stored in the `auth_config_multiple_account.json file`:
+>
+> ```java
+> // Creates a PublicClientApplication object with res/raw/auth_config_multiple_account.json
+> PublicClientApplication.createMultipleAccountPublicClientApplication(getContext(),
+> R.raw.auth_config_multiple_account,
+> new IPublicClientApplication.IMultipleAccountApplicationCreatedListener() {
+> @Override
+> public void onCreated(IMultipleAccountPublicClientApplication application) {
+> mMultipleAccountApp = application;
+> loadAccounts();
+> }
+>
+> @Override
+> public void onError(MsalException exception) {
+> ...
+> }
+> });
+> ```
+>
+> The created `MultipleAccountPublicClientApplication` object is stored in a class member variable so that it can be used to interact with the MSAL library to acquire tokens and load and remove the user account.
+>
+> #### Load an account
+>
+> Multiple account apps usually call `getAccounts()` to select the account to use for MSAL operations. The code to load an account is in the `MultipleAccountModeFragment.java` file, in `loadAccounts()`. Loading the user's account is an asynchronous operation, so a callback handles the cases where the account is loaded, the account changes, or an error occurs.
+>
+> ```java
+> /**
+> * Load currently signed-in accounts, if there's any.
+> **/
+> private void loadAccounts() {
+> if (mMultipleAccountApp == null) {
+> return;
+> }
+>
+> mMultipleAccountApp.getAccounts(new IPublicClientApplication.LoadAccountsCallback() {
+> @Override
+> public void onTaskCompleted(final List<IAccount> result) {
+> // You can use the account data to update your UI or your app database.
+> accountList = result;
+> updateUI(accountList);
+> }
+>
+> @Override
+> public void onError(MsalException exception) {
+> displayError(exception);
+> }
+> });
+> }
+> ```
+>
+> #### Get a token interactively or silently
+>
+> Some situations when the user may be prompted to select their account, enter their credentials, or consent to the permissions your app has requested are:
+>
+> * The first time users sign in to the application
+> * If a user resets their password, they'll need to enter their credentials
+> * If consent is revoked
+> * If your app explicitly requires consent
+> * When your application is requesting access to a resource for the first time
+> * When MFA or other Conditional Access policies are required
+>
+> Multiple account apps should typically acquire tokens interactively, that is with UI that involves the user, with a call to `acquireToken()`. The code to get a token interactively is in the `MultipleAccountModeFragment.java` file in `initializeUI()`, in the `callGraphApiInteractiveButton` click handler:
+>
+> ```java
+> /**
+> * Acquire token interactively. It will also create an account object for the silent call as a result (to be obtained by getAccount()).
+> *
+> * If acquireTokenSilent() returns an error that requires an interaction,
+> * invoke acquireToken() to have the user resolve the interrupt interactively.
+> *
+> * Some example scenarios are
+> * - password change
+> * - the resource you're acquiring a token for has a stricter set of requirements than your SSO refresh token.
+> * - you're introducing a new scope which the user has never consented for.
+> **/
+> mMultipleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+> ```
+>
+> Apps shouldn't require the user to sign in every time they request a token. If the user has already signed in, `acquireTokenSilentAsync()` allows apps to request tokens without prompting the user, as shown in the `MultipleAccountModeFragment.java` file, in `initializeUI()` in the `callGraphApiSilentButton` click handler:
+>
+> ```java
+> /**
+> * Performs acquireToken without interrupting the user.
+> *
+> * This requires an account object of the account you're obtaining a token for.
+> * (can be obtained via getAccount()).
+> */
+> mMultipleAccountApp.acquireTokenSilentAsync(getScopes(),
+> accountList.get(accountListSpinner.getSelectedItemPosition()),
+> AUTHORITY,
+> getAuthSilentCallback());
+> ```
+>
+> #### Remove an account
+>
+> The code to remove an account, and any cached tokens for the account, is in the `MultipleAccountModeFragment.java` file in `initializeUI()` in the handler for the remove account button. Before you can remove an account, you need an account object, which you obtain from MSAL methods like `getAccounts()` and `acquireToken()`. Because removing an account is an asynchronous operation, the `onRemoved` callback is supplied to update the UI.
+>
+> ```java
+> /**
+> * Removes the selected account and cached tokens from this app (or device, if the device is in shared mode).
+> **/
+> mMultipleAccountApp.removeAccount(accountList.get(accountListSpinner.getSelectedItemPosition()),
+> new IMultipleAccountPublicClientApplication.RemoveAccountCallback() {
+> @Override
+> public void onRemoved() {
+> ...
+> /* Reload account asynchronously to get the up-to-date list. */
+> loadAccounts();
+> }
+>
+> @Override
+> public void onError(@NonNull MsalException exception) {
+> displayError(exception);
+> }
+> });
+> ```
+>
+> ### auth_config_multiple_account.json
+>
+> This is the configuration file for an MSAL app that uses multiple accounts.
+>
+> See [Understand the Android MSAL configuration file](msal-configuration.md) for an explanation of the various fields.
+>
+> Unlike the [auth_config_single_account.json](#auth_config_single_accountjson) configuration file, this config file has `"account_mode" : "MULTIPLE"` instead of `"account_mode" : "SINGLE"` because this is a multiple account app.
+>
+> `"client_id"` is preconfigured to use an app object registration that Microsoft maintains.
+> `"redirect_uri"` is preconfigured to use the signing key provided with the code sample.
+>
+> ```json
+> {
+> "client_id" : "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
+> "authorization_user_agent" : "DEFAULT",
+> "redirect_uri" : "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
+> "account_mode" : "MULTIPLE",
+> "broker_redirect_uri_registered": true,
+> "authorities" : [
+> {
+> "type": "AAD",
+> "audience": {
+> "type": "AzureADandPersonalMicrosoftAccount",
+> "tenant_id": "common"
+> }
+> }
+> ]
+> }
+> ```
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> Move on to the Android tutorial in which you build an Android app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API.
+>
+> > [!div class="nextstepaction"]
+> > [Tutorial: Sign in users and call the Microsoft Graph from an Android application](tutorial-v2-android.md)
active-directory Quickstart V2 Aspnet Core Web Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-web-api.md
# Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform
-In this quickstart, you download an ASP.NET Core web API code sample and review the way it restricts resource access to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
--
-## Prerequisites
-
-- Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- [Azure Active Directory tenant](quickstart-create-new-tenant.md)
-- [.NET Core SDK 3.1+](https://dotnet.microsoft.com/)
-- [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
-
-## Step 1: Register the application
-
-First, register the web API in your Azure AD tenant and add a scope by following these steps:
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
-1. For **Name**, enter a name for your application. For example, enter **AspNetCoreWebApi-Quickstart**. Users of your app will see this name, and you can change it later.
-1. Select **Register**.
-1. Under **Manage**, select **Expose an API** > **Add a scope**. For **Application ID URI**, accept the default by selecting **Save and continue**, and then enter the following details:
- - **Scope name**: `access_as_user`
- - **Who can consent?**: **Admins and users**
- - **Admin consent display name**: `Access AspNetCoreWebApi-Quickstart`
- - **Admin consent description**: `Allows the app to access AspNetCoreWebApi-Quickstart as the signed-in user.`
- - **User consent display name**: `Access AspNetCoreWebApi-Quickstart`
- - **User consent description**: `Allow the application to access AspNetCoreWebApi-Quickstart on your behalf.`
- - **State**: **Enabled**
-1. Select **Add scope** to complete the scope addition.
-
-## Step 2: Download the ASP.NET Core project
-
-[Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/archive/aspnetcore3-1.zip) from GitHub.
---
-## Step 3: Configure the ASP.NET Core project
-
-In this step, configure the sample code to work with the app registration that you created earlier.
-
-1. Extract the .zip archive into a folder near the root of your drive. For example, extract into *C:\Azure-Samples*.
-
- We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
-
-1. Open the solution in the *webapi* folder in your code editor.
-1. Open the *appsettings.json* file and modify the following code:
-
- ```json
- "ClientId": "Enter_the_Application_Id_here",
- "TenantId": "Enter_the_Tenant_Info_Here"
- ```
-
- - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the application (client) ID on the app's **Overview** page.
- - Replace `Enter_the_Tenant_Info_Here` with one of the following:
- - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). You can find the directory (tenant) ID on the app's **Overview** page.
- - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
- - If your application supports **All Microsoft account users**, leave this value as `common`.
-
-For this quickstart, don't change any other values in the *appsettings.json* file.
-
-## How the sample works
-
-The web API receives a token from a client application, and the code in the web API validates the token. This scenario is explained in more detail in [Scenario: Protected web API](scenario-protected-web-api-overview.md).
-
-### Startup class
-
-The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process starts. In its `ConfigureServices` method, the `AddMicrosoftIdentityWebApi` extension method provided by *Microsoft.Identity.Web* is called.
-
-```csharp
- public void ConfigureServices(IServiceCollection services)
- {
- services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApi(Configuration, "AzureAd");
- }
-```
-
-The `AddAuthentication()` method configures the service to add JwtBearer-based authentication.
-
-The line that contains `.AddMicrosoftIdentityWebApi` adds the Microsoft identity platform authorization to your web API. It's then configured to validate access tokens issued by the Microsoft identity platform based on the information in the `AzureAD` section of the *appsettings.json* configuration file:
-
-| *appsettings.json* key | Description |
-||-|
-| `ClientId` | Application (client) ID of the application registered in the Azure portal. |
-| `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
-| `TenantId` | Name of your tenant or its tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
-
-The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality:
-
-```csharp
-// The runtime calls this method. Use this method to configure the HTTP request pipeline.
-public void Configure(IApplicationBuilder app, IHostingEnvironment env)
-{
- // more code
- app.UseAuthentication();
- app.UseAuthorization();
- // more code
-}
-```
-
-### Protecting a controller, a controller's method, or a Razor page
-
-You can protect a controller or controller methods by using the `[Authorize]` attribute. This attribute restricts access to the controller or methods by allowing only authenticated users. An authentication challenge can be started to access the controller if the user isn't authenticated.
-
-```csharp
-namespace webapi.Controllers
-{
- [Authorize]
- [ApiController]
- [Route("[controller]")]
- public class WeatherForecastController : ControllerBase
-```
-
-### Validation of scope in the controller
-
-The code in the API verifies that the required scopes are in the token by using `HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);`:
-
-```csharp
-namespace webapi.Controllers
-{
- [Authorize]
- [ApiController]
- [Route("[controller]")]
- public class WeatherForecastController : ControllerBase
- {
- // The web API will only accept tokens 1) for users, and 2) having the "access_as_user" scope for this API
- static readonly string[] scopeRequiredByApi = new string[] { "access_as_user" };
-
- [HttpGet]
- public IEnumerable<WeatherForecast> Get()
- {
- HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);
-
- // some code here
- }
- }
-}
-```
--
-## Next steps
-
-The GitHub repository that contains this ASP.NET Core web API code sample includes instructions and more code samples that show you how to:
-
-- Add authentication to a new ASP.NET Core web API.
-- Call the web API from a desktop application.
-- Call downstream APIs like Microsoft Graph and other Microsoft APIs.
-
-> [!div class="nextstepaction"]
-> [ASP.NET Core web API tutorials on GitHub](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Protect an ASP.NET Core web API](web-api-quickstart.md?pivots=devlang-aspnet-core)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download an ASP.NET Core web API code sample and review the way it restricts resource access to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
+>
+>
+> ## Prerequisites
+>
+> - Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> - [Azure Active Directory tenant](quickstart-create-new-tenant.md)
+> - [.NET Core SDK 3.1+](https://dotnet.microsoft.com/)
+> - [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
+>
+> ## Step 1: Register the application
+>
+> First, register the web API in your Azure AD tenant and add a scope by following these steps:
+>
+> 1. Sign in to the [Azure portal](https://portal.azure.com/).
+> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+> 1. Search for and select **Azure Active Directory**.
+> 1. Under **Manage**, select **App registrations** > **New registration**.
+> 1. For **Name**, enter a name for your application. For example, enter **AspNetCoreWebApi-Quickstart**. Users of your app will see this name, and you can change it later.
+> 1. Select **Register**.
+> 1. Under **Manage**, select **Expose an API** > **Add a scope**. For **Application ID URI**, accept the default by selecting **Save and continue**, and then enter the following details:
+> - **Scope name**: `access_as_user`
+> - **Who can consent?**: **Admins and users**
+> - **Admin consent display name**: `Access AspNetCoreWebApi-Quickstart`
+> - **Admin consent description**: `Allows the app to access AspNetCoreWebApi-Quickstart as the signed-in user.`
+> - **User consent display name**: `Access AspNetCoreWebApi-Quickstart`
+> - **User consent description**: `Allow the application to access AspNetCoreWebApi-Quickstart on your behalf.`
+> - **State**: **Enabled**
+> 1. Select **Add scope** to complete the scope addition.
+>
+> ## Step 2: Download the ASP.NET Core project
+>
+> [Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/archive/aspnetcore3-1.zip) from GitHub.
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+>
+> ## Step 3: Configure the ASP.NET Core project
+>
+> In this step, configure the sample code to work with the app registration that you created earlier.
+>
+> 1. Extract the .zip archive into a folder near the root of your drive. For example, extract into *C:\Azure-Samples*.
+>
+> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
+>
+> 1. Open the solution in the *webapi* folder in your code editor.
+> 1. Open the *appsettings.json* file and modify the following code:
+>
+> ```json
+> "ClientId": "Enter_the_Application_Id_here",
+> "TenantId": "Enter_the_Tenant_Info_Here"
+> ```
+>
+> - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the application (client) ID on the app's **Overview** page.
+> - Replace `Enter_the_Tenant_Info_Here` with one of the following:
+> - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). You can find the directory (tenant) ID on the app's **Overview** page.
+> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+> - If your application supports **All Microsoft account users**, leave this value as `common`.
+>
+> For this quickstart, don't change any other values in the *appsettings.json* file.
+>
+> ## How the sample works
+>
+> The web API receives a token from a client application, and the code in the web API validates the token. This scenario is explained in more detail in [Scenario: Protected web API](scenario-protected-web-api-overview.md).
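+>
+> As a rough illustration of what the middleware works with, the sketch below decodes the claims segment of a bearer token in Python. This is illustrative only: it does *not* verify anything, whereas the real *Microsoft.AspNetCore.Authentication* middleware also validates the signature, issuer, audience, and expiry.
+>
+> ```python
+> import base64
+> import json
+>
+> def decode_jwt_claims(token: str) -> dict:
+>     """Decode (without verifying!) the claims segment of a JWT."""
+>     _header, payload, _signature = token.split(".")
+>     # JWT segments are URL-safe base64 without padding; restore it first.
+>     padded = payload + "=" * (-len(payload) % 4)
+>     return json.loads(base64.urlsafe_b64decode(padded))
+> ```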
+>
+> ### Startup class
+>
+> The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process starts. In its `ConfigureServices` method, the `AddMicrosoftIdentityWebApi` extension method provided by *Microsoft.Identity.Web* is called.
+>
+> ```csharp
+> public void ConfigureServices(IServiceCollection services)
+> {
+> services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+> .AddMicrosoftIdentityWebApi(Configuration, "AzureAd");
+> }
+> ```
+>
+> The `AddAuthentication()` method configures the service to add JwtBearer-based authentication.
+>
+> The line that contains `.AddMicrosoftIdentityWebApi` adds the Microsoft identity platform authorization to your web API. It's then configured to validate access tokens issued by the Microsoft identity platform based on the information in the `AzureAD` section of the *appsettings.json* configuration file:
+>
+> | *appsettings.json* key | Description |
+> ||-|
+> | `ClientId` | Application (client) ID of the application registered in the Azure portal. |
+> | `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
+> | `TenantId` | Name of your tenant or its tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
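+>
+> Put together, a filled-in `AzureAd` section might look like the following sketch (the GUID is an invented placeholder, not a real client ID):
+>
+> ```json
+> "AzureAd": {
+>   "Instance": "https://login.microsoftonline.com/",
+>   "ClientId": "00000000-0000-0000-0000-000000000000",
+>   "TenantId": "common"
+> }
+> ```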
+>
+> The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality:
+>
+> ```csharp
+> // The runtime calls this method. Use this method to configure the HTTP request pipeline.
+> public void Configure(IApplicationBuilder app, IHostingEnvironment env)
+> {
+> // more code
+> app.UseAuthentication();
+> app.UseAuthorization();
+> // more code
+> }
+> ```
+>
+> ### Protecting a controller, a controller's method, or a Razor page
+>
+> You can protect a controller or controller methods by using the `[Authorize]` attribute. This attribute restricts access to the controller or methods by allowing only authenticated users. An authentication challenge can be started to access the controller if the user isn't authenticated.
+>
+> ```csharp
+> namespace webapi.Controllers
+> {
+> [Authorize]
+> [ApiController]
+> [Route("[controller]")]
+> public class WeatherForecastController : ControllerBase
+> ```
+>
+> ### Validation of scope in the controller
+>
+> The code in the API verifies that the required scopes are in the token by using `HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);`:
+>
+> ```csharp
+> namespace webapi.Controllers
+> {
+> [Authorize]
+> [ApiController]
+> [Route("[controller]")]
+> public class WeatherForecastController : ControllerBase
+> {
+> // The web API will only accept tokens 1) for users, and 2) having the "access_as_user" scope for this API
+> static readonly string[] scopeRequiredByApi = new string[] { "access_as_user" };
+>
+> [HttpGet]
+> public IEnumerable<WeatherForecast> Get()
+> {
+> HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);
+>
+> // some code here
+> }
+> }
+> }
+> ```
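+>
+> Conceptually, `VerifyUserHasAnyAcceptedScope` checks whether the space-separated `scp` claim in the token shares at least one entry with the accepted scopes. A minimal Python sketch of that check (not the actual *Microsoft.Identity.Web* implementation, which also produces an HTTP 403 response on failure):
+>
+> ```python
+> def has_any_accepted_scope(scp_claim: str, accepted_scopes: list) -> bool:
+>     """Return True if any granted scope matches an accepted one."""
+>     granted = set(scp_claim.split())  # the scp claim is space-separated
+>     return not granted.isdisjoint(accepted_scopes)
+> ```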
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> The GitHub repository that contains this ASP.NET Core web API code sample includes instructions and more code samples that show you how to:
+>
+> - Add authentication to a new ASP.NET Core web API.
+> - Call the web API from a desktop application.
+> - Call downstream APIs like Microsoft Graph and other Microsoft APIs.
+>
+> > [!div class="nextstepaction"]
+> > [ASP.NET Core web API tutorials on GitHub](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2)
active-directory Quickstart V2 Aspnet Core Webapp Calls Graph https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp-calls-graph.md
# Quickstart: ASP.NET Core web app that signs in users and calls Microsoft Graph on their behalf
-In this quickstart, you download and run a code sample that demonstrates how an ASP.NET Core web app can sign in users from any Azure Active Directory (Azure AD) organization and calls Microsoft Graph.
-
-See [How the sample works](#how-the-sample-works) for an illustration.
-
-## Step 1: Configure your application in the Azure portal
-
-For the code sample in this quickstart to work, add a **Redirect URI** of `https://localhost:44321/signin-oidc` and **Front-channel logout URL** of `https://localhost:44321/signout-oidc` in the app registration.
-> [!div class="nextstepaction"]
-> [Make this change for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
-
-## Step 2: Download the ASP.NET Core project
-
-Run the project.
-
-> [!div class="nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1-callsgraph.zip)
---
-## Step 3: Your app is configured and ready to run
-
-We have configured your project with values of your app's properties and it's ready to run.
-
-> [!NOTE]
-> `Enter_the_Supported_Account_Info_Here`
-
-## About the code
-
-This section gives an overview of the code required to sign in users and call the Microsoft Graph API on their behalf. This overview can be useful to understand how the code works, main arguments, and also if you want to add sign-in to an existing ASP.NET Core application and call Microsoft Graph. It uses [Microsoft.Identity.Web](microsoft-identity-web.md), which is a wrapper around [MSAL.NET](msal-overview.md).
-
-### How the sample works
-
-![Shows how the sample app generated by this quickstart works](media/quickstart-v2-aspnet-core-webapp-calls-graph/aspnetcorewebapp-intro.svg)
-
-### Startup class
-
-The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process initializes:
-
-```csharp
-
- public void ConfigureServices(IServiceCollection services)
- {
- // Get the scopes from the configuration (appsettings.json)
- var initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');
-
- // Add sign-in with Microsoft
- services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
-
- // Add the possibility of acquiring a token to call a protected web API
- .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
-
- // Enables controllers and pages to get GraphServiceClient by dependency injection
- // And use an in memory token cache
- .AddMicrosoftGraph(Configuration.GetSection("DownstreamApi"))
- .AddInMemoryTokenCaches();
-
- services.AddControllersWithViews(options =>
- {
- var policy = new AuthorizationPolicyBuilder()
- .RequireAuthenticatedUser()
- .Build();
- options.Filters.Add(new AuthorizeFilter(policy));
- });
-
- // Enables a UI and controller for sign in and sign out.
- services.AddRazorPages()
- .AddMicrosoftIdentityUI();
- }
-```
-
-The `AddAuthentication()` method configures the service to add cookie-based authentication, which is used in browser scenarios and to set the challenge to OpenID Connect.
-
-The line containing `.AddMicrosoftIdentityWebApp` adds the Microsoft identity platform authentication to your application. This is provided by [Microsoft.Identity.Web](microsoft-identity-web.md). It's then configured to sign in using the Microsoft identity platform based on the information in the `AzureAD` section of the *appsettings.json* configuration file:
-
-| *appsettings.json* key | Description |
-||-|
-| `ClientId` | **Application (client) ID** of the application registered in the Azure portal. |
-| `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
-| `TenantId` | Name of your tenant or its tenant ID (a GUID), or *common* to sign in users with work or school accounts or Microsoft personal accounts. |
-
-The `EnableTokenAcquisitionToCallDownstreamApi` method enables your application to acquire a token to call protected web APIs. `AddMicrosoftGraph` enables your controllers or Razor pages to benefit directly the `GraphServiceClient` (by dependency injection) and the `AddInMemoryTokenCaches` methods enables your app to benefit from a token cache.
-
-The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web's routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`.
-
-```csharp
-app.UseAuthentication();
-app.UseAuthorization();
-
-app.UseEndpoints(endpoints =>
-{
-
- endpoints.MapControllerRoute(
- name: "default",
- pattern: "{controller=Home}/{action=Index}/{id?}");
- endpoints.MapRazorPages();
-});
-
-// endpoints.MapControllers(); // REQUIRED if MapControllerRoute() isn't called.
-```
-
-### Protect a controller or a controller's method
-
-You can protect a controller or its methods by applying the `[Authorize]` attribute to the controller's class or one or more of its methods. This `[Authorize]` attribute restricts access by allowing only authenticated users. If the user isn't already authenticated, an authentication challenge can be started to access the controller. In this quickstart, the scopes are read from the configuration file:
-
-```csharp
-[AuthorizeForScopes(ScopeKeySection = "DownstreamApi:Scopes")]
-public async Task<IActionResult> Index()
-{
- var user = await _graphServiceClient.Me.Request().GetAsync();
- ViewData["ApiResult"] = user.DisplayName;
-
- return View();
-}
-```
--
-## Next steps
-
-The GitHub repo that contains the ASP.NET Core code sample referenced in this quickstart includes instructions and more code samples that show you how to:
-
-- Add authentication to a new ASP.NET Core Web application
-- Call Microsoft Graph, other Microsoft APIs, or your own web APIs
-- Add authorization
-- Sign in users in national clouds or with social identities
-
-> [!div class="nextstepaction"]
-> [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: ASP.NET Core web app that signs in users and calls a web API](web-app-quickstart.md?pivots=devlang-aspnet-core)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how an ASP.NET Core web app can sign in users from any Azure Active Directory (Azure AD) organization and call Microsoft Graph.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> ## Step 1: Configure your application in the Azure portal
+>
+> For the code sample in this quickstart to work, add a **Redirect URI** of `https://localhost:44321/signin-oidc` and **Front-channel logout URL** of `https://localhost:44321/signout-oidc` in the app registration.
+> > [!div class="nextstepaction"]
+> > [Make this change for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
+>
+> ## Step 2: Download the ASP.NET Core project
+>
+> Run the project.
+>
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1-callsgraph.zip)
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+>
+> ## Step 3: Your app is configured and ready to run
+>
+> We've configured your project with the values of your app's properties, and it's ready to run.
+>
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+>
+> ## About the code
+>
+> This section gives an overview of the code required to sign in users and call the Microsoft Graph API on their behalf. This overview can be useful to understand how the code works, main arguments, and also if you want to add sign-in to an existing ASP.NET Core application and call Microsoft Graph. It uses [Microsoft.Identity.Web](microsoft-identity-web.md), which is a wrapper around [MSAL.NET](msal-overview.md).
+>
+> ### How the sample works
+>
+> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-aspnet-core-webapp-calls-graph/aspnetcorewebapp-intro.svg)
+>
+> ### Startup class
+>
+> The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process initializes:
+>
+> ```csharp
+>
+> public void ConfigureServices(IServiceCollection services)
+> {
+> // Get the scopes from the configuration (appsettings.json)
+> var initialScopes = Configuration.GetValue<string>("DownstreamApi:Scopes")?.Split(' ');
+>
+> // Add sign-in with Microsoft
+> services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
+> .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"))
+>
+> // Add the possibility of acquiring a token to call a protected web API
+> .EnableTokenAcquisitionToCallDownstreamApi(initialScopes)
+>
+> // Enables controllers and pages to get GraphServiceClient by dependency injection
+> // And use an in memory token cache
+> .AddMicrosoftGraph(Configuration.GetSection("DownstreamApi"))
+> .AddInMemoryTokenCaches();
+>
+> services.AddControllersWithViews(options =>
+> {
+> var policy = new AuthorizationPolicyBuilder()
+> .RequireAuthenticatedUser()
+> .Build();
+> options.Filters.Add(new AuthorizeFilter(policy));
+> });
+>
+> // Enables a UI and controller for sign in and sign out.
+> services.AddRazorPages()
+> .AddMicrosoftIdentityUI();
+> }
+> ```
+>
+> The `AddAuthentication()` method configures the service to add cookie-based authentication, which is used in browser scenarios and to set the challenge to OpenID Connect.
+>
+> The line containing `.AddMicrosoftIdentityWebApp` adds the Microsoft identity platform authentication to your application. This is provided by [Microsoft.Identity.Web](microsoft-identity-web.md). It's then configured to sign in using the Microsoft identity platform based on the information in the `AzureAD` section of the *appsettings.json* configuration file:
+>
+> | *appsettings.json* key | Description |
+> ||-|
+> | `ClientId` | **Application (client) ID** of the application registered in the Azure portal. |
+> | `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
+> | `TenantId` | Name of your tenant or its tenant ID (a GUID), or *common* to sign in users with work or school accounts or Microsoft personal accounts. |
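+>
+> As a hypothetical illustration, the two configuration sections the sample reads might look like this. The GUID and secret are invented placeholders, and the `DownstreamApi` key names are assumptions inferred from the `Configuration.GetSection("DownstreamApi")` call in the `Startup` code above:
+>
+> ```json
+> "AzureAd": {
+>   "Instance": "https://login.microsoftonline.com/",
+>   "ClientId": "00000000-0000-0000-0000-000000000000",
+>   "TenantId": "common",
+>   "ClientSecret": "placeholder-secret"
+> },
+> "DownstreamApi": {
+>   "BaseUrl": "https://graph.microsoft.com/v1.0",
+>   "Scopes": "user.read"
+> }
+> ```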
+>
+> The `EnableTokenAcquisitionToCallDownstreamApi` method enables your application to acquire a token to call protected web APIs. `AddMicrosoftGraph` enables your controllers or Razor pages to use `GraphServiceClient` directly (by dependency injection), and the `AddInMemoryTokenCaches` method enables your app to benefit from a token cache.
+>
+> The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web's routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`.
+>
+> ```csharp
+> app.UseAuthentication();
+> app.UseAuthorization();
+>
+> app.UseEndpoints(endpoints =>
+> {
+>
+> endpoints.MapControllerRoute(
+> name: "default",
+> pattern: "{controller=Home}/{action=Index}/{id?}");
+> endpoints.MapRazorPages();
+> });
+>
+> // endpoints.MapControllers(); // REQUIRED if MapControllerRoute() isn't called.
+> ```
+>
+> ### Protect a controller or a controller's method
+>
+> You can protect a controller or its methods by applying the `[Authorize]` attribute to the controller's class or one or more of its methods. This `[Authorize]` attribute restricts access by allowing only authenticated users. If the user isn't already authenticated, an authentication challenge can be started to access the controller. In this quickstart, the scopes are read from the configuration file:
+>
+> ```csharp
+> [AuthorizeForScopes(ScopeKeySection = "DownstreamApi:Scopes")]
+> public async Task<IActionResult> Index()
+> {
+> var user = await _graphServiceClient.Me.Request().GetAsync();
+> ViewData["ApiResult"] = user.DisplayName;
+>
+> return View();
+> }
+> ```
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> The GitHub repo that contains the ASP.NET Core code sample referenced in this quickstart includes instructions and more code samples that show you how to:
+>
+> - Add authentication to a new ASP.NET Core Web application
+> - Call Microsoft Graph, other Microsoft APIs, or your own web APIs
+> - Add authorization
+> - Sign in users in national clouds or with social identities
+>
+> > [!div class="nextstepaction"]
+> > [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/)
active-directory Quickstart V2 Aspnet Core Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-core-webapp.md
# Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app
-In this quickstart, you download and run a code sample that demonstrates how an ASP.NET Core web app can sign in users from any Azure Active Directory (Azure AD) organization.
-#### Step 1: Configure your application in the Azure portal
-For the code sample in this quickstart to work:
-- For **Redirect URI**, enter **https://localhost:44321/** and **https://localhost:44321/signin-oidc**.
-- For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
-
-The authorization endpoint will issue request ID tokens.
-> [!div class="nextstepaction"]
-> [Make this change for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download the ASP.NET Core project
-
-Run the project.
-
-> [!div class="nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1.zip)
---
-#### Step 3: Your app is configured and ready to run
-We've configured your project with values of your app's properties, and it's ready to run.
-
-> [!NOTE]
-> `Enter_the_Supported_Account_Info_Here`
-
-## More information
-
-This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing ASP.NET Core application.
-
-> [!div class="sxs-lookup" renderon="portal"]
-> ### How the sample works
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
>
-> ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-core-webapp/aspnetcorewebapp-intro.svg)
-
-### Startup class
-
-The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's run when the hosting process starts:
-
-```csharp
-public void ConfigureServices(IServiceCollection services)
-{
- services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
- .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"));
-
- services.AddControllersWithViews(options =>
- {
- var policy = new AuthorizationPolicyBuilder()
- .RequireAuthenticatedUser()
- .Build();
- options.Filters.Add(new AuthorizeFilter(policy));
- });
- services.AddRazorPages()
- .AddMicrosoftIdentityUI();
-}
-```
-
-The `AddAuthentication()` method configures the service to add cookie-based authentication. This authentication is used in browser scenarios and to set the challenge to OpenID Connect.
-
-The line that contains `.AddMicrosoftIdentityWebApp` adds Microsoft identity platform authentication to your application. The application is then configured to sign in users based on the following information in the `AzureAD` section of the *appsettings.json* configuration file:
-
-| *appsettings.json* key | Description |
-||-|
-| `ClientId` | Application (client) ID of the application registered in the Azure portal. |
-| `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
-| `TenantId` | Name of your tenant or the tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
-
-The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`:
-
-```csharp
-app.UseAuthentication();
-app.UseAuthorization();
-
-app.UseEndpoints(endpoints =>
-{
- endpoints.MapControllerRoute(
- name: "default",
- pattern: "{controller=Home}/{action=Index}/{id?}");
- endpoints.MapRazorPages();
-});
-```
-
-### Attribute for protecting a controller or methods
-
-You can protect a controller or controller methods by using the `[Authorize]` attribute. This attribute restricts access to the controller or methods by allowing only authenticated users. An authentication challenge can then be started to access the controller if the user isn't authenticated.
--
-## Next steps
-
-The GitHub repo that contains this ASP.NET Core tutorial includes instructions and more code samples that show you how to:
-
-- Add authentication to a new ASP.NET Core web application.
-- Call Microsoft Graph, other Microsoft APIs, or your own web APIs.
-- Add authorization.
-- Sign in users in national clouds or with social identities.
-
-> [!div class="nextstepaction"]
-> [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/)
+> > [Quickstart: ASP.NET Core web app with user sign-in](web-app-quickstart.md?pivots=devlang-aspnet-core)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how an ASP.NET Core web app can sign in users from any Azure Active Directory (Azure AD) organization.
+>
+> #### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work:
+> - For **Redirect URI**, enter **https://localhost:44321/** and **https://localhost:44321/signin-oidc**.
+> - For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
+>
+> The authorization endpoint will issue the requested ID tokens.
+> > [!div class="nextstepaction"]
+> > [Make this change for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the ASP.NET Core project
+>
+> Run the project.
+>
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/archive/aspnetcore3-1.zip)
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+>
+> #### Step 3: Your app is configured and ready to run
+> We've configured your project with values of your app's properties, and it's ready to run.
+>
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+>
+> ## More information
+>
+> This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing ASP.NET Core application.
+>
+> > [!div class="sxs-lookup" renderon="portal"]
+> > ### How the sample works
+> >
+> > ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-core-webapp/aspnetcorewebapp-intro.svg)
+>
+> ### Startup class
+>
+> The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's run when the hosting process starts:
+>
+> ```csharp
+> public void ConfigureServices(IServiceCollection services)
+> {
+> services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
+> .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"));
+>
+> services.AddControllersWithViews(options =>
+> {
+> var policy = new AuthorizationPolicyBuilder()
+> .RequireAuthenticatedUser()
+> .Build();
+> options.Filters.Add(new AuthorizeFilter(policy));
+> });
+> services.AddRazorPages()
+> .AddMicrosoftIdentityUI();
+> }
+> ```
+>
+> The `AddAuthentication()` method configures the service to add cookie-based authentication. This authentication is used in browser scenarios and to set the challenge to OpenID Connect.
+>
+> The line that contains `.AddMicrosoftIdentityWebApp` adds Microsoft identity platform authentication to your application. The application is then configured to sign in users based on the following information in the `AzureAD` section of the *appsettings.json* configuration file:
+>
+> | *appsettings.json* key | Description |
+> ||-|
+> | `ClientId` | Application (client) ID of the application registered in the Azure portal. |
+> | `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
+> | `TenantId` | Name of your tenant or the tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
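+>
+> As a rough sketch, the `AzureAd` section these keys come from might look like the following in *appsettings.json*. The GUID shown is a placeholder for illustration, and `CallbackPath` is an assumption about the template defaults rather than a value taken from this quickstart:
+>
+> ```json
+> {
+>   "AzureAd": {
+>     "Instance": "https://login.microsoftonline.com/",
+>     "TenantId": "common",
+>     "ClientId": "11111111-1111-1111-1111-111111111111",
+>     "CallbackPath": "/signin-oidc"
+>   }
+> }
+> ```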
+>
+> The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`:
+>
+> ```csharp
+> app.UseAuthentication();
+> app.UseAuthorization();
+>
+> app.UseEndpoints(endpoints =>
+> {
+> endpoints.MapControllerRoute(
+> name: "default",
+> pattern: "{controller=Home}/{action=Index}/{id?}");
+> endpoints.MapRazorPages();
+> });
+> ```
+>
+> ### Attribute for protecting a controller or methods
+>
+> You can protect a controller or controller methods by using the `[Authorize]` attribute. This attribute restricts access to the controller or methods by allowing only authenticated users. An authentication challenge can then be started to access the controller if the user isn't authenticated.
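+>
+> A minimal sketch of this pattern follows. The controller and action names are illustrative, not taken from the sample:
+>
+> ```csharp
+> [Authorize] // Only authenticated users can reach any action on this controller
+> public class ProfileController : Controller
+> {
+>     // An unauthenticated request triggers the OpenID Connect sign-in challenge
+>     public IActionResult Index()
+>     {
+>         return View();
+>     }
+> }
+> ```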
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> The GitHub repo that contains this ASP.NET Core tutorial includes instructions and more code samples that show you how to:
+>
+> - Add authentication to a new ASP.NET Core web application.
+> - Call Microsoft Graph, other Microsoft APIs, or your own web APIs.
+> - Add authorization.
+> - Sign in users in national clouds or with social identities.
+>
+> > [!div class="nextstepaction"]
+> > [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/)
active-directory Quickstart V2 Aspnet Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-aspnet-webapp.md
# Quickstart: ASP.NET web app that signs in Azure AD users
-In this quickstart, you download and run a code sample that demonstrates an ASP.NET web application that can sign in users with Azure Active Directory (Azure AD) accounts.
-
-#### Step 1: Configure your application in the Azure portal
-For the code sample in this quickstart to work, enter **https://localhost:44368/** for **Redirect URI**.
-
-> [!div class="nextstepaction"]
-> [Make this change for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute.
-
-#### Step 2: Download the project
-
-Run the project by using Visual Studio 2019.
-> [!div class="sxs-lookup nextstepaction"]
-> [Download the code sample](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-DotNet/archive/master.zip)
---
-#### Step 3: Your app is configured and ready to run
-We've configured your project with values of your app's properties.
-
-1. Extract the .zip file to a local folder that's close to the root folder. For example, extract to *C:\Azure-Samples*.
-
- We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
-2. Open the solution in Visual Studio (*AppModelv2-WebApp-OpenIDConnect-DotNet.sln*).
-3. Depending on the version of Visual Studio, you might need to right-click the project **AppModelv2-WebApp-OpenIDConnect-DotNet** and then select **Restore NuGet packages**.
-4. Open the Package Manager Console by selecting **View** > **Other Windows** > **Package Manager Console**. Then run `Update-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform -r`.
-
-> [!NOTE]
-> `Enter_the_Supported_Account_Info_Here`
-
-## More information
-
-This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing ASP.NET application.
--
-### How the sample works
-
-![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-webapp/aspnetwebapp-intro.svg)
-
-### OWIN middleware NuGet packages
-
-You can set up the authentication pipeline with cookie-based authentication by using OpenID Connect in ASP.NET with OWIN middleware packages. You can install these packages by running the following commands in Package Manager Console within Visual Studio:
-
-```powershell
-Install-Package Microsoft.Owin.Security.OpenIdConnect
-Install-Package Microsoft.Owin.Security.Cookies
-Install-Package Microsoft.Owin.Host.SystemWeb
-```
-
-### OWIN startup class
-
-The OWIN middleware uses a *startup class* that runs when the hosting process starts. In this quickstart, the *startup.cs* file is in the root folder. The following code shows the parameters that this quickstart uses:
-
-```csharp
-public void Configuration(IAppBuilder app)
-{
- app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
-
- app.UseCookieAuthentication(new CookieAuthenticationOptions());
- app.UseOpenIdConnectAuthentication(
- new OpenIdConnectAuthenticationOptions
- {
- // Sets the client ID, authority, and redirect URI as obtained from Web.config
- ClientId = clientId,
- Authority = authority,
- RedirectUri = redirectUri,
- // PostLogoutRedirectUri is the page that users will be redirected to after sign-out. In this case, it's using the home page
- PostLogoutRedirectUri = redirectUri,
- Scope = OpenIdConnectScope.OpenIdProfile,
- // ResponseType is set to request the code id_token, which contains basic information about the signed-in user
- ResponseType = OpenIdConnectResponseType.CodeIdToken,
- // ValidateIssuer set to false to allow personal and work accounts from any organization to sign in to your application
- // To only allow users from a single organization, set ValidateIssuer to true and the 'tenant' setting in Web.config to the tenant name
- // To allow users from only a list of specific organizations, set ValidateIssuer to true and use the ValidIssuers parameter
- TokenValidationParameters = new TokenValidationParameters()
- {
- ValidateIssuer = false // Simplification (see note below)
- },
- // OpenIdConnectAuthenticationNotifications configures OWIN to send notification of failed authentications to the OnAuthenticationFailed method
- Notifications = new OpenIdConnectAuthenticationNotifications
- {
- AuthenticationFailed = OnAuthenticationFailed
- }
- }
- );
-}
-```
-
-> |Where | Description |
-> |||
-> | `ClientId` | The application ID from the application registered in the Azure portal. |
-> | `Authority` | The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}/v2.0` for the public cloud. In that URL, *{tenant}* is the name of your tenant, your tenant ID, or `common` for a reference to the common endpoint. (The common endpoint is used for multitenant applications.) |
-> | `RedirectUri` | The URL where users are sent after authentication against the Microsoft identity platform. |
-> | `PostLogoutRedirectUri` | The URL where users are sent after signing off. |
-> | `Scope` | The list of scopes being requested, separated by spaces. |
-> | `ResponseType` | The request that the response from authentication contains an authorization code and an ID token. |
-> | `TokenValidationParameters` | A list of parameters for token validation. In this case, `ValidateIssuer` is set to `false` to indicate that it can accept sign-ins from any personal, work, or school account type. |
-> | `Notifications` | A list of delegates that can be run on `OpenIdConnect` messages. |
--
-> [!NOTE]
-> Setting `ValidateIssuer = false` is a simplification for this quickstart. In real applications, validate the issuer. See the samples to understand how to do that.
-
-### Authentication challenge
-
-You can force a user to sign in by requesting an authentication challenge in your controller:
-
-```csharp
-public void SignIn()
-{
- if (!Request.IsAuthenticated)
- {
- HttpContext.GetOwinContext().Authentication.Challenge(
- new AuthenticationProperties{ RedirectUri = "/" },
- OpenIdConnectAuthenticationDefaults.AuthenticationType);
- }
-}
-```
-
-> [!TIP]
-> Requesting an authentication challenge by using this method is optional. You'd normally use it when you want a view to be accessible from both authenticated and unauthenticated users. Alternatively, you can protect controllers by using the method described in the next section.
-
-### Attribute for protecting a controller or a controller actions
-
-You can protect a controller or controller actions by using the `[Authorize]` attribute. This attribute restricts access to the controller or actions by allowing only authenticated users to access the actions in the controller. An authentication challenge will then happen automatically when an unauthenticated user tries to access one of the actions or controllers decorated by the `[Authorize]` attribute.
--
-## Next steps
-
-For a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart, try out the ASP.NET tutorial.
-
-> [!div class="nextstepaction"]
-> [Add sign-in to an ASP.NET web app](tutorial-v2-asp-webapp.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: ASP.NET web app that signs in users](web-app-quickstart.md?pivots=devlang-aspnet)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates an ASP.NET web application that can sign in users with Azure Active Directory (Azure AD) accounts.
+>
+> #### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, enter **https://localhost:44368/** for **Redirect URI**.
+>
+> > [!div class="nextstepaction"]
+> > [Make this change for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute.
+>
+> #### Step 2: Download the project
+>
+> Run the project by using Visual Studio 2019.
+> > [!div class="sxs-lookup nextstepaction"]
+> > [Download the code sample](https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-DotNet/archive/master.zip)
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+>
+> #### Step 3: Your app is configured and ready to run
+> We've configured your project with values of your app's properties.
+>
+> 1. Extract the .zip file to a local folder that's close to the root folder. For example, extract to *C:\Azure-Samples*.
+>
+> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
+> 2. Open the solution in Visual Studio (*AppModelv2-WebApp-OpenIDConnect-DotNet.sln*).
+> 3. Depending on the version of Visual Studio, you might need to right-click the project **AppModelv2-WebApp-OpenIDConnect-DotNet** and then select **Restore NuGet packages**.
+> 4. Open the Package Manager Console by selecting **View** > **Other Windows** > **Package Manager Console**. Then run `Update-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform -r`.
+>
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+>
+> ## More information
+>
+> This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing ASP.NET application.
+>
+>
+> ### How the sample works
+>
+> ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-webapp/aspnetwebapp-intro.svg)
+>
+> ### OWIN middleware NuGet packages
+>
+> You can set up the authentication pipeline with cookie-based authentication by using OpenID Connect in ASP.NET with OWIN middleware packages. You can install these packages by running the following commands in Package Manager Console within Visual Studio:
+>
+> ```powershell
+> Install-Package Microsoft.Owin.Security.OpenIdConnect
+> Install-Package Microsoft.Owin.Security.Cookies
+> Install-Package Microsoft.Owin.Host.SystemWeb
+> ```
+>
+> ### OWIN startup class
+>
+> The OWIN middleware uses a *startup class* that runs when the hosting process starts. In this quickstart, the *startup.cs* file is in the root folder. The following code shows the parameters that this quickstart uses:
+>
+> ```csharp
+> public void Configuration(IAppBuilder app)
+> {
+> app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
+>
+> app.UseCookieAuthentication(new CookieAuthenticationOptions());
+> app.UseOpenIdConnectAuthentication(
+> new OpenIdConnectAuthenticationOptions
+> {
+> // Sets the client ID, authority, and redirect URI as obtained from Web.config
+> ClientId = clientId,
+> Authority = authority,
+> RedirectUri = redirectUri,
+> // PostLogoutRedirectUri is the page that users will be redirected to after sign-out. In this case, it's using the home page
+> PostLogoutRedirectUri = redirectUri,
+> Scope = OpenIdConnectScope.OpenIdProfile,
+> // ResponseType is set to request the code id_token, which contains basic information about the signed-in user
+> ResponseType = OpenIdConnectResponseType.CodeIdToken,
+> // ValidateIssuer set to false to allow personal and work accounts from any organization to sign in to your application
+>             // To only allow users from a single organization, set ValidateIssuer to true and the 'tenant' setting in Web.config to the tenant name
+> // To allow users from only a list of specific organizations, set ValidateIssuer to true and use the ValidIssuers parameter
+> TokenValidationParameters = new TokenValidationParameters()
+> {
+> ValidateIssuer = false // Simplification (see note below)
+> },
+>             // OpenIdConnectAuthenticationNotifications configures OWIN to send notification of failed authentications to the OnAuthenticationFailed method
+> Notifications = new OpenIdConnectAuthenticationNotifications
+> {
+> AuthenticationFailed = OnAuthenticationFailed
+> }
+> }
+> );
+> }
+> ```
+>
+> > |Where | Description |
+> > |||
+> > | `ClientId` | The application ID from the application registered in the Azure portal. |
+> > | `Authority` | The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}/v2.0` for the public cloud. In that URL, *{tenant}* is the name of your tenant, your tenant ID, or `common` for a reference to the common endpoint. (The common endpoint is used for multitenant applications.) |
+> > | `RedirectUri` | The URL where users are sent after authentication against the Microsoft identity platform. |
+> > | `PostLogoutRedirectUri` | The URL where users are sent after signing off. |
+> > | `Scope` | The list of scopes being requested, separated by spaces. |
+> > | `ResponseType` | Requests that the authentication response contain an authorization code and an ID token. |
+> > | `TokenValidationParameters` | A list of parameters for token validation. In this case, `ValidateIssuer` is set to `false` to indicate that it can accept sign-ins from any personal, work, or school account type. |
+> > | `Notifications` | A list of delegates that can be run on `OpenIdConnect` messages. |
+>
+>
+> > [!NOTE]
+> > Setting `ValidateIssuer = false` is a simplification for this quickstart. In real applications, validate the issuer. See the samples to understand how to do that.
+>
+> ### Authentication challenge
+>
+> You can force a user to sign in by requesting an authentication challenge in your controller:
+>
+> ```csharp
+> public void SignIn()
+> {
+> if (!Request.IsAuthenticated)
+> {
+> HttpContext.GetOwinContext().Authentication.Challenge(
+> new AuthenticationProperties{ RedirectUri = "/" },
+> OpenIdConnectAuthenticationDefaults.AuthenticationType);
+> }
+> }
+> ```
+>
+> > [!TIP]
+> > Requesting an authentication challenge by using this method is optional. You'd normally use it when you want a view to be accessible from both authenticated and unauthenticated users. Alternatively, you can protect controllers by using the method described in the next section.
+>
+> ### Attribute for protecting a controller or controller actions
+>
+> You can protect a controller or controller actions by using the `[Authorize]` attribute. This attribute restricts access to the controller or actions by allowing only authenticated users to access the actions in the controller. An authentication challenge will then happen automatically when an unauthenticated user tries to access one of the actions or controllers decorated by the `[Authorize]` attribute.
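+>
+> A minimal sketch of this pattern follows. The controller name is illustrative, not taken from the sample:
+>
+> ```csharp
+> [Authorize] // Every action on this controller requires a signed-in user
+> public class TodoListController : Controller
+> {
+>     [AllowAnonymous] // Opt a single action out of the authorization requirement
+>     public ActionResult About()
+>     {
+>         return View();
+>     }
+> }
+> ```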
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> For a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart, try out the ASP.NET tutorial.
+>
+> > [!div class="nextstepaction"]
+> > [Add sign-in to an ASP.NET web app](tutorial-v2-asp-webapp.md)
active-directory Quickstart V2 Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md
# Quickstart: Call an ASP.NET web API that's protected by the Microsoft identity platform
-In this quickstart, you download and run a code sample that demonstrates how to protect an ASP.NET web API by restricting access to its resources to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
-
-The article also uses a Windows Presentation Foundation (WPF) app to demonstrate how you can request an access token to access a web API.
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* Visual Studio 2017 or 2019. Download [Visual Studio for free](https://www.visualstudio.com/downloads/).
-
-## Clone or download the sample
-
-You can obtain the sample in either of two ways:
-
-* Clone it from your shell or command line:
-
- ```console
- git clone https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet.git
- ```
-
-* [Download it as a ZIP file](https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet/archive/complete.zip).
--
-## Register the web API (TodoListService)
-
-Register your web API in **App registrations** in the Azure portal.
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
-1. Find and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
-1. Enter a **Name** for your application, for example `AppModelv2-NativeClient-DotNet-TodoListService`. Users of your app might see this name, and you can change it later.
-1. For **Supported account types**, select **Accounts in any organizational directory**.
-1. Select **Register** to create the application.
-1. On the app **Overview** page, look for the **Application (client) ID** value, and then record it for later use. You'll need it to configure the Visual Studio configuration file for this project (that is, `ClientId` in the *TodoListService\Web.config* file).
-1. Under **Manage**, select **Expose an API** > **Add a scope**. Accept the proposed Application ID URI (`api://{clientId}`) by selecting **Save and continue**, and then enter the following information:
-
- 1. For **Scope name**, enter `access_as_user`.
- 1. For **Who can consent**, ensure that the **Admins and users** option is selected.
- 1. In the **Admin consent display name** box, enter `Access TodoListService as a user`.
- 1. In the **Admin consent description** box, enter `Accesses the TodoListService web API as a user`.
- 1. In the **User consent display name** box, enter `Access TodoListService as a user`.
- 1. In the **User consent description** box, enter `Accesses the TodoListService web API as a user`.
- 1. For **State**, keep **Enabled**.
-1. Select **Add scope**.
-
-### Configure the service project
-
-Configure the service project to match the registered web API.
-
-1. Open the solution in Visual Studio, and then open the *Web.config* file under the root of the TodoListService project.
-
-1. Replace the value of the `ida:ClientId` parameter with the Client ID (Application ID) value from the application you registered in the **App registrations** portal.
-
-### Add the new scope to the app.config file
-
-To add the new scope to the TodoListClient *app.config* file, follow these steps:
-
-1. In the TodoListClient project root folder, open the *app.config* file.
-
-1. Paste the Application ID from the application that you registered for your TodoListService project in the `TodoListServiceScope` parameter, replacing the `{Enter the Application ID of your TodoListService from the app registration portal}` string.
-
- > [!NOTE]
- > Make sure that the Application ID uses the following format: `api://{TodoListService-Application-ID}/access_as_user` (where `{TodoListService-Application-ID}` is the GUID representing the Application ID for your TodoListService app).
-
-## Register the web app (TodoListClient)
-
-Register your TodoListClient app in **App registrations** in the Azure portal, and then configure the code in the TodoListClient project. If the client and server are considered the same application, you can reuse the application that's registered in step 2. Use the same application if you want users to sign in with a personal Microsoft account.
-
-### Register the app
-
-To register the TodoListClient app, follow these steps:
-
-1. Go to the Microsoft identity platform for developers [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) portal.
-1. Select **New registration**.
-1. When the **Register an application page** opens, enter your application's registration information:
-
- 1. In the **Name** section, enter a meaningful application name that will be displayed to users of the app (for example, **NativeClient-DotNet-TodoListClient**).
- 1. For **Supported account types**, select **Accounts in any organizational directory**.
- 1. Select **Register** to create the application.
-
- > [!NOTE]
- > In the TodoListClient project *app.config* file, the default value of `ida:Tenant` is set to `common`. The possible values are:
- >
- > - `common`: You can sign in by using a work or school account or a personal Microsoft account (because you selected **Accounts in any organizational directory** in a previous step).
- > - `organizations`: You can sign in by using a work or school account.
- > - `consumers`: You can sign in only by using a Microsoft personal account.
-
-1. On the app **Overview** page, select **Authentication**, and then complete these steps to add a platform:
-
- 1. Under **Platform configurations**, select the **Add a platform** button.
- 1. For **Mobile and desktop applications**, select **Mobile and desktop applications**.
- 1. For **Redirect URIs**, select the `https://login.microsoftonline.com/common/oauth2/nativeclient` check box.
- 1. Select **Configure**.
-
-1. Select **API permissions**, and then complete these steps to add permissions:
-
- 1. Select the **Add a permission** button.
- 1. Select the **My APIs** tab.
- 1. In the list of APIs, select **AppModelv2-NativeClient-DotNet-TodoListService API** or the name you entered for the web API.
- 1. Select the **access_as_user** permission check box if it's not already selected. Use the Search box if necessary.
- 1. Select the **Add permissions** button.
-
-### Configure your project
-
-Configure your TodoListClient project by adding the Application ID to the *app.config* file.
-
-1. In the **App registrations** portal, on the **Overview** page, copy the value of the **Application (client) ID**.
-
-1. From the TodoListClient project root folder, open the *app.config* file, and then paste the Application ID value in the `ida:ClientId` parameter.
-
-## Run your projects
-
-Start both projects. If you are using Visual Studio:
-
-1. Right-click the Visual Studio solution and select **Properties**.
-
-1. Under **Common Properties**, select **Startup Project**, and then select **Multiple startup projects**.
-
-1. For both projects, choose **Start** as the action.
-
-1. Ensure that the TodoListService service starts first by moving it to the first position in the list, using the up arrow.
-
-Sign in to run your TodoListClient project.
-
-1. Press F5 to start the projects. The service page opens, as well as the desktop application.
-
-1. In the TodoListClient, at the upper right, select **Sign in**, and then sign in with the same credentials you used to register your application, or sign in as a user in the same directory.
-
- If you're signing in for the first time, you might be prompted to consent to the TodoListService web API.
-
- To help you access the TodoListService web API and manipulate the *To-Do* list, the sign-in also requests an access token to the *access_as_user* scope.
-
-## Pre-authorize your client application
-
-You can allow users from other directories to access your web API by pre-authorizing the client application to access your web API. You do this by adding the Application ID from the client app to the list of pre-authorized applications for your web API. By adding a pre-authorized client, you're allowing users to access your web API without having to provide consent.
-
-1. In the **App registrations** portal, open the properties of your TodoListService app.
-1. In the **Expose an API** section, under **Authorized client applications**, select **Add a client application**.
-1. In the **Client ID** box, paste the Application ID of the TodoListClient app.
-1. In the **Authorized scopes** section, select the scope for the `api://<Application ID>/access_as_user` web API.
-1. Select **Add application**.
-
-### Run your project
-
-1. Press <kbd>F5</kbd> to run your project. Your TodoListClient app opens.
-1. At the upper right, select **Sign in**, and then sign in by using a personal Microsoft account, such as a *live.com* or *hotmail.com* account, or a work or school account.
-
-## Optional: Limit sign-in access to certain users
-
-By default, any personal accounts, such as *outlook.com* or *live.com* accounts, or work or school accounts from organizations that are integrated with Azure AD can request tokens and access your web API.
-
-To specify who can sign in to your application, use one of the following options:
-
-### Option 1: Limit access to a single organization (single tenant)
-
-You can limit sign-in access to your application to user accounts that are in a single Azure AD tenant, including guest accounts of that tenant. This scenario is common for line-of-business applications.
-
-1. Open the *App_Start\Startup.Auth* file, and then change the value of the metadata endpoint that's passed into the `OpenIdConnectSecurityTokenProvider` to `https://login.microsoftonline.com/{Tenant ID}/v2.0/.well-known/openid-configuration`. You can also use the tenant name, such as `contoso.onmicrosoft.com`.
-1. In the same file, set the `ValidIssuer` property on the `TokenValidationParameters` to `https://sts.windows.net/{Tenant ID}/`, and set the `ValidateIssuer` argument to `true`.
-
-### Option 2: Use a custom method to validate issuers
-
-You can implement a custom method to validate issuers by using the `IssuerValidator` parameter. For more information about this parameter, see [TokenValidationParameters class](/dotnet/api/microsoft.identitymodel.tokens.tokenvalidationparameters).
--
-## Next steps
-
-Learn more about the protected web API scenario that the Microsoft identity platform supports.
-> [!div class="nextstepaction"]
-> [Protected web API scenario](scenario-protected-web-api-overview.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Call a protected ASP.NET web API](web-api-quickstart.md?pivots=devlang-aspnet)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how to protect an ASP.NET web API by restricting access to its resources to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
+>
+> The article also uses a Windows Presentation Foundation (WPF) app to demonstrate how you can request an access token to access a web API.
+>
+> ## Prerequisites
+>
+> * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> * Visual Studio 2017 or 2019. Download [Visual Studio for free](https://www.visualstudio.com/downloads/).
+>
+> ## Clone or download the sample
+>
+> You can obtain the sample in either of two ways:
+>
+> * Clone it from your shell or command line:
+>
+> ```console
+> git clone https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet.git
+> ```
+>
+> * [Download it as a ZIP file](https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet/archive/complete.zip).
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+> ## Register the web API (TodoListService)
+>
+> Register your web API in **App registrations** in the Azure portal.
+>
+> 1. Sign in to the [Azure portal](https://portal.azure.com/).
+> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+> 1. Find and select **Azure Active Directory**.
+> 1. Under **Manage**, select **App registrations** > **New registration**.
+> 1. Enter a **Name** for your application, for example `AppModelv2-NativeClient-DotNet-TodoListService`. Users of your app might see this name, and you can change it later.
+> 1. For **Supported account types**, select **Accounts in any organizational directory**.
+> 1. Select **Register** to create the application.
+> 1. On the app **Overview** page, look for the **Application (client) ID** value, and then record it for later use. You'll need it to configure the Visual Studio configuration file for this project (that is, `ClientId` in the *TodoListService\Web.config* file).
+> 1. Under **Manage**, select **Expose an API** > **Add a scope**. Accept the proposed Application ID URI (`api://{clientId}`) by selecting **Save and continue**, and then enter the following information:
+>
+> 1. For **Scope name**, enter `access_as_user`.
+> 1. For **Who can consent**, ensure that the **Admins and users** option is selected.
+> 1. In the **Admin consent display name** box, enter `Access TodoListService as a user`.
+> 1. In the **Admin consent description** box, enter `Accesses the TodoListService web API as a user`.
+> 1. In the **User consent display name** box, enter `Access TodoListService as a user`.
+> 1. In the **User consent description** box, enter `Accesses the TodoListService web API as a user`.
+> 1. For **State**, keep **Enabled**.
+> 1. Select **Add scope**.
+>
+> ### Configure the service project
+>
+> Configure the service project to match the registered web API.
+>
+> 1. Open the solution in Visual Studio, and then open the *Web.config* file under the root of the TodoListService project.
+>
+> 1. Replace the value of the `ida:ClientId` parameter with the Client ID (Application ID) value from the application you registered in the **App registrations** portal.
+>
+> ### Add the new scope to the app.config file
+>
+> To add the new scope to the TodoListClient *app.config* file, follow these steps:
+>
+> 1. In the TodoListClient project root folder, open the *app.config* file.
+>
+> 1. Paste the Application ID from the application that you registered for your TodoListService project in the `TodoListServiceScope` parameter, replacing the `{Enter the Application ID of your TodoListService from the app registration portal}` string.
+>
+> > [!NOTE]
+> > Make sure that the Application ID uses the following format: `api://{TodoListService-Application-ID}/access_as_user` (where `{TodoListService-Application-ID}` is the GUID representing the Application ID for your TodoListService app).
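+>
+> As a hypothetical sketch (the exact key name in the sample's *app.config* may differ), the resulting entry would look something like this, with a placeholder GUID standing in for your TodoListService Application ID:
+>
+> ```xml
+> <!-- Placeholder GUID shown for illustration only; use your own Application ID. -->
+> <add key="TodoListServiceScope" value="api://00001111-aaaa-2222-bbbb-3333cccc4444/access_as_user" />
+> ```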
+>
+> ## Register the web app (TodoListClient)
+>
+> Register your TodoListClient app in **App registrations** in the Azure portal, and then configure the code in the TodoListClient project. If the client and server are considered the same application, you can reuse the application that's registered in step 2. Use the same application if you want users to sign in with a personal Microsoft account.
+>
+> ### Register the app
+>
+> To register the TodoListClient app, follow these steps:
+>
+> 1. Go to the Microsoft identity platform for developers [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) portal.
+> 1. Select **New registration**.
+> 1. When the **Register an application page** opens, enter your application's registration information:
+>
+> 1. In the **Name** section, enter a meaningful application name that will be displayed to users of the app (for example, **NativeClient-DotNet-TodoListClient**).
+> 1. For **Supported account types**, select **Accounts in any organizational directory**.
+> 1. Select **Register** to create the application.
+>
+> > [!NOTE]
+> > In the TodoListClient project *app.config* file, the default value of `ida:Tenant` is set to `common`. The possible values are:
+> >
+> > - `common`: You can sign in by using a work or school account or a personal Microsoft account (because you selected **Accounts in any organizational directory** in a previous step).
+> > - `organizations`: You can sign in by using a work or school account.
+> > - `consumers`: You can sign in only by using a Microsoft personal account.
+>
+> 1. On the app **Overview** page, select **Authentication**, and then complete these steps to add a platform:
+>
+> 1. Under **Platform configurations**, select the **Add a platform** button.
+> 1. For **Mobile and desktop applications**, select **Mobile and desktop applications**.
+> 1. For **Redirect URIs**, select the `https://login.microsoftonline.com/common/oauth2/nativeclient` check box.
+> 1. Select **Configure**.
+>
+> 1. Select **API permissions**, and then complete these steps to add permissions:
+>
+> 1. Select the **Add a permission** button.
+> 1. Select the **My APIs** tab.
+> 1. In the list of APIs, select **AppModelv2-NativeClient-DotNet-TodoListService API** or the name you entered for the web API.
+> 1. Select the **access_as_user** permission check box if it's not already selected. Use the Search box if necessary.
+> 1. Select the **Add permissions** button.
+>
+> ### Configure your project
+>
+> Configure your TodoListClient project by adding the Application ID to the *app.config* file.
+>
+> 1. In the **App registrations** portal, on the **Overview** page, copy the value of the **Application (client) ID**.
+>
+> 1. From the TodoListClient project root folder, open the *app.config* file, and then paste the Application ID value in the `ida:ClientId` parameter.
+>
+> ## Run your projects
+>
+> Start both projects. If you're using Visual Studio:
+>
+> 1. Right-click the Visual Studio solution and select **Properties**.
+>
+> 1. Under **Common Properties**, select **Startup Project**, and then select **Multiple startup projects**.
+>
+> 1. For both projects, choose **Start** as the action.
+>
+> 1. Ensure that the TodoListService service starts first by moving it to the first position in the list, using the up arrow.
+>
+> Sign in to run your TodoListClient project.
+>
+> 1. Press F5 to start the projects. The service page opens, as well as the desktop application.
+>
+> 1. In the TodoListClient, at the upper right, select **Sign in**, and then sign in with the same credentials you used to register your application, or sign in as a user in the same directory.
+>
+> If you're signing in for the first time, you might be prompted to consent to the TodoListService web API.
+>
+> To help you access the TodoListService web API and manipulate the *To-Do* list, the sign-in also requests an access token to the *access_as_user* scope.
+>
+> ## Pre-authorize your client application
+>
+> You can allow users from other directories to access your web API by pre-authorizing the client application to access your web API. You do this by adding the Application ID from the client app to the list of pre-authorized applications for your web API. By adding a pre-authorized client, you're allowing users to access your web API without having to provide consent.
+>
+> 1. In the **App registrations** portal, open the properties of your TodoListService app.
+> 1. In the **Expose an API** section, under **Authorized client applications**, select **Add a client application**.
+> 1. In the **Client ID** box, paste the Application ID of the TodoListClient app.
+> 1. In the **Authorized scopes** section, select the scope for the `api://<Application ID>/access_as_user` web API.
+> 1. Select **Add application**.
+>
+> ### Run your project
+>
+> 1. Press <kbd>F5</kbd> to run your project. Your TodoListClient app opens.
+> 1. At the upper right, select **Sign in**, and then sign in by using a personal Microsoft account, such as a *live.com* or *hotmail.com* account, or a work or school account.
+>
+> ## Optional: Limit sign-in access to certain users
+>
+> By default, any personal accounts, such as *outlook.com* or *live.com* accounts, or work or school accounts from organizations that are integrated with Azure AD can request tokens and access your web API.
+>
+> To specify who can sign in to your application, use one of the following options:
+>
+> ### Option 1: Limit access to a single organization (single tenant)
+>
+> You can limit sign-in access to your application to user accounts that are in a single Azure AD tenant, including guest accounts of that tenant. This scenario is common for line-of-business applications.
+>
+> 1. Open the *App_Start\Startup.Auth* file, and then change the value of the metadata endpoint that's passed into the `OpenIdConnectSecurityTokenProvider` to `https://login.microsoftonline.com/{Tenant ID}/v2.0/.well-known/openid-configuration`. You can also use the tenant name, such as `contoso.onmicrosoft.com`.
+> 1. In the same file, set the `ValidIssuer` property on the `TokenValidationParameters` to `https://sts.windows.net/{Tenant ID}/`, and set the `ValidateIssuer` argument to `true`.
+>
+> ### Option 2: Use a custom method to validate issuers
+>
+> You can implement a custom method to validate issuers by using the `IssuerValidator` parameter. For more information about this parameter, see [TokenValidationParameters class](/dotnet/api/microsoft.identitymodel.tokens.tokenvalidationparameters).
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> Learn more about the protected web API scenario that the Microsoft identity platform supports.
+> > [!div class="nextstepaction"]
+> > [Protected web API scenario](scenario-protected-web-api-overview.md)
active-directory Quickstart V2 Ios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-ios.md
# Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app
-In this quickstart, you download and run a code sample that demonstrates how a native iOS or macOS application can sign in users and get an access token to call the Microsoft Graph API.
-
-The quickstart applies to both iOS and macOS apps. Some steps are needed only for iOS apps and will be indicated as such.
-
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* XCode 10+
-* iOS 10+
-* macOS 10.12+
-
-## How the sample works
-
-![Shows how the sample app generated by this quickstart works](media/quickstart-v2-ios/ios-intro.svg)
-
-#### Step 1: Configure your application
-For the code sample for this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
-> [!div class="nextstepaction"]
-> [Make this change for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes
-
-#### Step 2: Download the sample project
-> [!div class="nextstepaction"]
-> [Download the code sample for iOS]()
-
-> [!div class="nextstepaction"]
-> [Download the code sample for macOS]()
-
-#### Step 3: Install dependencies
-
-1. Extract the zip file.
-2. In a terminal window, navigate to the folder with the downloaded code sample and run `pod install` to install the latest MSAL library.
-
-#### Step 4: Your app is configured and ready to run
-We have configured your project with values of your app's properties and it's ready to run.
-> [!NOTE]
-> `Enter_the_Supported_Account_Info_Here`
-
-1. If you're building an app for [Azure AD national clouds](/graph/deployments#app-registration-and-token-service-root-endpoints), replace the line starting with 'let kGraphEndpoint' and 'let kAuthority' with correct endpoints. For global access, use default values:
-
- ```swift
- let kGraphEndpoint = "https://graph.microsoft.com/"
- let kAuthority = "https://login.microsoftonline.com/common"
- ```
-
-1. Other endpoints are documented [here](/graph/deployments#app-registration-and-token-service-root-endpoints). For example, to run the quickstart with Azure AD Germany, use following:
-
- ```swift
- let kGraphEndpoint = "https://graph.microsoft.de/"
- let kAuthority = "https://login.microsoftonline.de/common"
- ```
-
-3. Open the project settings. In the **Identity** section, enter the **Bundle Identifier** that you entered into the portal.
-4. Right-click **Info.plist** and select **Open As** > **Source Code**.
-5. Under the dict root node, replace `Enter_the_bundle_Id_Here` with the ***Bundle Id*** that you used in the portal. Notice the `msauth.` prefix in the string.
-
- ```xml
- <key>CFBundleURLTypes</key>
- <array>
- <dict>
- <key>CFBundleURLSchemes</key>
- <array>
- <string>msauth.Enter_the_Bundle_Id_Here</string>
- </array>
- </dict>
- </array>
- ```
-
-6. Build and run the app!
-
-## More Information
-
-Read these sections to learn more about this quickstart.
-
-### Get MSAL
-
-MSAL ([MSAL.framework](https://github.com/AzureAD/microsoft-authentication-library-for-objc)) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. You can add MSAL to your application using the following process:
-
-```
-$ vi Podfile
-```
-
-Add the following to this podfile (with your project's target):
-
-```
-use_frameworks!
-
-target 'MSALiOS' do
- pod 'MSAL'
-end
-```
-
-Run CocoaPods installation command:
-
-`pod install`
-
-### Initialize MSAL
-
-You can add the reference for MSAL by adding the following code:
-
-```swift
-import MSAL
-```
-
-Then, initialize MSAL using the following code:
-
-```swift
-let authority = try MSALAADAuthority(url: URL(string: kAuthority)!)
-
-let msalConfiguration = MSALPublicClientApplicationConfig(clientId: kClientID, redirectUri: nil, authority: authority)
-self.applicationContext = try MSALPublicClientApplication(configuration: msalConfiguration)
-```
-
-> |Where: | Description |
-> |||
-> | `clientId` | The Application ID from the application registered in *portal.azure.com* |
-> | `authority` | The Microsoft identity platform. In most of cases this will be `https://login.microsoftonline.com/common` |
-> | `redirectUri` | The redirect URI of the application. You can pass 'nil' to use the default value, or your custom redirect URI. |
-
-### For iOS only, additional app requirements
-
-Your app must also have the following in your `AppDelegate`. This lets MSAL SDK handle token response from the Auth broker app when you do authentication.
-
-```swift
-func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey : Any] = [:]) -> Bool {
-
- return MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: options[UIApplication.OpenURLOptionsKey.sourceApplication] as? String)
-}
-```
-
-> [!NOTE]
-> On iOS 13+, if you adopt `UISceneDelegate` instead of `UIApplicationDelegate`, place this code into the `scene:openURLContexts:` callback instead (See [Apple's documentation](https://developer.apple.com/documentation/uikit/uiscenedelegate/3238059-scene?language=objc)).
-> If you support both UISceneDelegate and UIApplicationDelegate for compatibility with older iOS, MSAL callback needs to be placed into both places.
-
-```swift
-func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
-
- guard let urlContext = URLContexts.first else {
- return
- }
-
- let url = urlContext.url
- let sourceApp = urlContext.options.sourceApplication
-
- MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: sourceApp)
-}
-```
-
-Finally, your app must have an `LSApplicationQueriesSchemes` entry in your ***Info.plist*** alongside the `CFBundleURLTypes`. The sample comes with this included.
-
- ```xml
- <key>LSApplicationQueriesSchemes</key>
- <array>
- <string>msauthv2</string>
- <string>msauthv3</string>
- </array>
- ```
-
-### Sign in users & request tokens
-
-MSAL has two methods used to acquire tokens: `acquireToken` and `acquireTokenSilent`.
-
-#### acquireToken: Get a token interactively
-
-Some situations require users to interact with Microsoft identity platform. In these cases, the end user may be required to select their account, enter their credentials, or consent to your app's permissions. For example,
-
-* The first time users sign in to the application
-* If a user resets their password, they'll need to enter their credentials
-* When your application is requesting access to a resource for the first time
-* When MFA or other Conditional Access policies are required
-
-```swift
-let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
-self.applicationContext!.acquireToken(with: parameters) { (result, error) in /* Add your handling logic */}
-```
-
-> |Where:| Description |
-> |||
-> | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (`api://<Application ID>/access_as_user`) |
-
-#### acquireTokenSilent: Get an access token silently
-
-Apps shouldn't require their users to sign in every time they request a token. If the user has already signed in, this method allows apps to request tokens silently.
-
-```swift
-self.applicationContext!.getCurrentAccount(with: nil) { (currentAccount, previousAccount, error) in
-
- guard let account = currentAccount else {
- return
- }
-
- let silentParams = MSALSilentTokenParameters(scopes: self.kScopes, account: account)
- self.applicationContext!.acquireTokenSilent(with: silentParams) { (result, error) in /* Add your handling logic */}
-}
-```
-
-> |Where: | Description |
-> |||
-> | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs (`api://<Application ID>/access_as_user`) |
-> | `account` | The account a token is being requested for. This quickstart is about a single account application. If you want to build a multi-account app you'll need to define logic to identify which account to use for token requests using `accountsFromDeviceForParameters:completionBlock:` and passing correct `accountIdentifier` |
--
-## Next steps
-
-Move on to the step-by-step tutorial in which you build an iOS or macOS app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API.
-
-> [!div class="nextstepaction"]
-> [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: iOS or macOS app that signs in users and calls a web API](mobile-app-quickstart.md?pivots=devlang-ios)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a native iOS or macOS application can sign in users and get an access token to call the Microsoft Graph API.
+>
+> The quickstart applies to both iOS and macOS apps. Some steps are needed only for iOS apps and will be indicated as such.
+>
+> ## Prerequisites
+>
+> * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> * XCode 10+
+> * iOS 10+
+> * macOS 10.12+
+>
+> ## How the sample works
+>
+> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-ios/ios-intro.svg)
+>
+> #### Step 1: Configure your application
+> For the code sample for this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+> > [!div class="nextstepaction"]
+> > [Make this change for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes
+>
+> #### Step 2: Download the sample project
+> > [!div class="nextstepaction"]
+> > [Download the code sample for iOS]()
+>
+> > [!div class="nextstepaction"]
+> > [Download the code sample for macOS]()
+>
+> #### Step 3: Install dependencies
+>
+> 1. Extract the zip file.
+> 2. In a terminal window, navigate to the folder with the downloaded code sample and run `pod install` to install the latest MSAL library.
+>
+> #### Step 4: Your app is configured and ready to run
+> We've configured your project with the values of your app's properties, and it's ready to run.
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+>
+> 1. If you're building an app for [Azure AD national clouds](/graph/deployments#app-registration-and-token-service-root-endpoints), replace the lines starting with `let kGraphEndpoint` and `let kAuthority` with the correct endpoints. For global access, use the default values:
+>
+> ```swift
+> let kGraphEndpoint = "https://graph.microsoft.com/"
+> let kAuthority = "https://login.microsoftonline.com/common"
+> ```
+>
+> 1. Other endpoints are documented [here](/graph/deployments#app-registration-and-token-service-root-endpoints). For example, to run the quickstart with Azure AD Germany, use the following:
+>
+> ```swift
+> let kGraphEndpoint = "https://graph.microsoft.de/"
+> let kAuthority = "https://login.microsoftonline.de/common"
+> ```
+>
+> 3. Open the project settings. In the **Identity** section, enter the **Bundle Identifier** that you entered into the portal.
+> 4. Right-click **Info.plist** and select **Open As** > **Source Code**.
+> 5. Under the dict root node, replace `Enter_the_bundle_Id_Here` with the ***Bundle Id*** that you used in the portal. Notice the `msauth.` prefix in the string.
+>
+> ```xml
+> <key>CFBundleURLTypes</key>
+> <array>
+> <dict>
+> <key>CFBundleURLSchemes</key>
+> <array>
+> <string>msauth.Enter_the_Bundle_Id_Here</string>
+> </array>
+> </dict>
+> </array>
+> ```
+>
+> 6. Build and run the app!
+>
+> ## More Information
+>
+> Read these sections to learn more about this quickstart.
+>
+> ### Get MSAL
+>
+> MSAL ([MSAL.framework](https://github.com/AzureAD/microsoft-authentication-library-for-objc)) is the library used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. You can add MSAL to your application by using the following process:
+>
+> ```
+> $ vi Podfile
+> ```
+>
+> Add the following to this podfile (with your project's target):
+>
+> ```
+> use_frameworks!
+>
+> target 'MSALiOS' do
+> pod 'MSAL'
+> end
+> ```
+>
+> Run CocoaPods installation command:
+>
+> `pod install`
+>
+> ### Initialize MSAL
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```swift
+> import MSAL
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```swift
+> let authority = try MSALAADAuthority(url: URL(string: kAuthority)!)
+>
+> let msalConfiguration = MSALPublicClientApplicationConfig(clientId: kClientID, redirectUri: nil, authority: authority)
+> self.applicationContext = try MSALPublicClientApplication(configuration: msalConfiguration)
+> ```
+>
+> > |Where: | Description |
+> > |||
+> > | `clientId` | The Application ID from the application registered in *portal.azure.com* |
+> > | `authority` | The Microsoft identity platform. In most cases, this will be `https://login.microsoftonline.com/common` |
+> > | `redirectUri` | The redirect URI of the application. You can pass `nil` to use the default value, or your custom redirect URI. |
+>
+> ### For iOS only, additional app requirements
+>
+> Your app must also have the following in your `AppDelegate`. This lets MSAL SDK handle token response from the Auth broker app when you do authentication.
+>
+> ```swift
+> func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey : Any] = [:]) -> Bool {
+>
+> return MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: options[UIApplication.OpenURLOptionsKey.sourceApplication] as? String)
+> }
+> ```
+>
+> > [!NOTE]
+> > On iOS 13+, if you adopt `UISceneDelegate` instead of `UIApplicationDelegate`, place this code into the `scene:openURLContexts:` callback instead (See [Apple's documentation](https://developer.apple.com/documentation/uikit/uiscenedelegate/3238059-scene?language=objc)).
+> > If you support both `UISceneDelegate` and `UIApplicationDelegate` for compatibility with older iOS versions, the MSAL callback needs to be placed in both places.
+>
+> ```swift
+> func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
+>
+> guard let urlContext = URLContexts.first else {
+> return
+> }
+>
+> let url = urlContext.url
+> let sourceApp = urlContext.options.sourceApplication
+>
+> MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: sourceApp)
+> }
+> ```
+>
+> Finally, your app must have an `LSApplicationQueriesSchemes` entry in your ***Info.plist*** alongside the `CFBundleURLTypes`. The sample comes with this included.
+>
+> ```xml
+> <key>LSApplicationQueriesSchemes</key>
+> <array>
+> <string>msauthv2</string>
+> <string>msauthv3</string>
+> </array>
+> ```
+>
+> ### Sign in users & request tokens
+>
+> MSAL has two methods used to acquire tokens: `acquireToken` and `acquireTokenSilent`.
+>
+> #### acquireToken: Get a token interactively
+>
+> Some situations require users to interact with the Microsoft identity platform. In these cases, the end user may be required to select their account, enter their credentials, or consent to your app's permissions. For example:
+>
+> * The first time users sign in to the application
+> * If a user resets their password, they'll need to enter their credentials
+> * When your application is requesting access to a resource for the first time
+> * When MFA or other Conditional Access policies are required
+>
+> ```swift
+> let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
+> self.applicationContext!.acquireToken(with: parameters) { (result, error) in /* Add your handling logic */}
+> ```
+>
+> > |Where:| Description |
+> > |||
+> > | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs, such as `api://<Application ID>/access_as_user`) |
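+>
+> On success, the completion block receives an `MSALResult` whose `accessToken` and `account` properties you can keep for later calls. A minimal sketch of the handling logic (assuming your class defines `accessToken` and `currentAccount` properties to hold them):
+>
+> ```swift
+> self.applicationContext!.acquireToken(with: parameters) { (result, error) in
+>     if let error = error {
+>         print("Could not acquire token: \(error)")
+>         return
+>     }
+>     guard let result = result else { return }
+>     // Keep the token for calling the protected API, and the account for silent requests later.
+>     self.accessToken = result.accessToken
+>     self.currentAccount = result.account
+> }
+> ```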
+>
+> #### acquireTokenSilent: Get an access token silently
+>
+> Apps shouldn't require their users to sign in every time they request a token. If the user has already signed in, this method allows apps to request tokens silently.
+>
+> ```swift
+> self.applicationContext!.getCurrentAccount(with: nil) { (currentAccount, previousAccount, error) in
+>
+> guard let account = currentAccount else {
+> return
+> }
+>
+> let silentParams = MSALSilentTokenParameters(scopes: self.kScopes, account: account)
+> self.applicationContext!.acquireTokenSilent(with: silentParams) { (result, error) in /* Add your handling logic */}
+> }
+> ```
+>
+> > |Where: | Description |
+> > |||
+> > | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs, such as `api://<Application ID>/access_as_user`) |
+> > | `account` | The account a token is being requested for. This quickstart is about a single account application. If you want to build a multi-account app you'll need to define logic to identify which account to use for token requests using `accountsFromDeviceForParameters:completionBlock:` and passing correct `accountIdentifier` |
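+>
+> A common pattern is to try the silent call first and fall back to interactive sign-in only when MSAL reports that user interaction is required. A sketch of that fallback (assuming `parameters` is an `MSALInteractiveTokenParameters` you've already built, as in the earlier example):
+>
+> ```swift
+> self.applicationContext!.acquireTokenSilent(with: silentParams) { (result, error) in
+>     if let error = error as NSError?,
+>        error.domain == MSALErrorDomain,
+>        error.code == MSALError.interactionRequired.rawValue {
+>         // The token cache can't satisfy the request; ask the user to sign in interactively.
+>         self.applicationContext!.acquireToken(with: parameters) { (result, error) in /* Add your handling logic */ }
+>         return
+>     }
+>     // Otherwise use result?.accessToken, or handle other errors here.
+> }
+> ```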
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> Move on to the step-by-step tutorial in which you build an iOS or macOS app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API.
+>
+> > [!div class="nextstepaction"]
+> > [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md)
active-directory Quickstart V2 Java Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-daemon.md
# Quickstart: Acquire a token and call Microsoft Graph API from a Java console app using app's identity
-In this quickstart, you download and run a code sample that demonstrates how a Java application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
-
-## Prerequisites
-
-To run this sample, you need:
-
-- [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or greater
-- [Maven](https://maven.apache.org/)
-
-> [!div class="sxs-lookup"]
-### Download and configure the quickstart app
-
-#### Step 1: Configure the application in Azure portal
-For the code sample for this quickstart to work, you need to create a client secret, and add Graph API's **User.Read.All** application permission.
-> [!div class="nextstepaction"]
-> [Make these changes for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download the Java project
-
-> [!div class="sxs-lookup nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/ms-identity-java-daemon/archive/master.zip)
-
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-#### Step 3: Admin consent
-
-If you try to run the application at this point, you'll receive *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires Admin consent: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:
-
-##### Global tenant administrator
-
-If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**.
-> [!div id="apipermissionspage"]
-> [Go to the API Permissions page]()
-
-##### Standard user
-
-If you're a standard user of your tenant, then you need to ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
-
-```url
-https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
-```
-#### Step 4: Run the application
-
-You can test the sample directly by running the main method of ClientCredentialGrant.java from your IDE.
-
-From your shell or command line:
-
-```
-$ mvn clean compile assembly:single
-```
-
-This will generate a msal-client-credential-secret-1.0.0.jar file in your /targets directory. Run this using your Java executable like below:
-
-```
-$ java -jar msal-client-credential-secret-1.0.0.jar
-```
-
-After running, the application should display the list of users in the configured tenant.
-
-> [!IMPORTANT]
-> This quickstart application uses a client secret to identify itself as confidential client. Because the client secret is added as a plain-text to your project files, for security reasons, it is recommended that you use a certificate instead of a client secret before considering the application as production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-java-daemon/tree/master/msal-client-credential-certificate) in the same GitHub repository for this sample, but in the second folder **msal-client-credential-certificate**.
-
-## More information
-
-### MSAL Java
-
-[MSAL Java](https://github.com/AzureAD/microsoft-authentication-library-for-java) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. As described, this quickstart requests tokens by using the application own identity instead of delegated permissions. The authentication flow used in this case is known as *[client credentials oauth flow](v2-oauth2-client-creds-grant-flow.md)*. For more information on how to use MSAL Java with daemon apps, see [this article](scenario-daemon-overview.md).
-
-Add MSAL4J to your application by using Maven or Gradle to manage your dependencies by making the following changes to the application's pom.xml (Maven) or build.gradle (Gradle) file.
-
-In pom.xml:
-
-```xml
-<dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>msal4j</artifactId>
- <version>1.0.0</version>
-</dependency>
-```
-
-In build.gradle:
-
-```$xslt
-compile group: 'com.microsoft.azure', name: 'msal4j', version: '1.0.0'
-```
-
-### MSAL initialization
-
-Add a reference to MSAL for Java by adding the following code to the top of the file where you will be using MSAL4J:
-
-```Java
-import com.microsoft.aad.msal4j.*;
-```
-
-Then, initialize MSAL using the following code:
-
-```Java
-IClientCredential credential = ClientCredentialFactory.createFromSecret(CLIENT_SECRET);
-
-ConfidentialClientApplication cca =
- ConfidentialClientApplication
- .builder(CLIENT_ID, credential)
- .authority(AUTHORITY)
- .build();
-```
-
-> | Where: |Description |
-> |||
-> | `CLIENT_SECRET` | Is the client secret created for the application in Azure portal. |
-> | `CLIENT_ID` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
-> | `AUTHORITY` | The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant Id.|
-
-### Requesting tokens
-
-To request a token using app's identity, use `acquireToken` method:
-
-```Java
-IAuthenticationResult result;
- try {
- SilentParameters silentParameters =
- SilentParameters
- .builder(SCOPE)
- .build();
-
- // try to acquire token silently. This call will fail since the token cache does not
- // have a token for the application you are requesting an access token for
- result = cca.acquireTokenSilently(silentParameters).join();
- } catch (Exception ex) {
- if (ex.getCause() instanceof MsalException) {
-
- ClientCredentialParameters parameters =
- ClientCredentialParameters
- .builder(SCOPE)
- .build();
-
- // Try to acquire a token. If successful, you should see
- // the token information printed out to console
- result = cca.acquireToken(parameters).join();
- } else {
- // Handle other exceptions accordingly
- throw ex;
- }
- }
- return result;
-```
-
-> |Where:| Description |
-> |||
-> | `SCOPE` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
--
-## Next steps
-
-To learn more about daemon applications, see the scenario landing page.
-
-> [!div class="nextstepaction"]
-> [Daemon application that calls web APIs](scenario-daemon-overview.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Java daemon that calls a protected API](console-app-quickstart.md?pivots=devlang-java)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a Java application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
+>
+> ## Prerequisites
+>
+> To run this sample, you need:
+>
+> - [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or greater
+> - [Maven](https://maven.apache.org/)
+>
+> > [!div class="sxs-lookup"]
+> ### Download and configure the quickstart app
+>
+> #### Step 1: Configure the application in Azure portal
+> For the code sample for this quickstart to work, you need to create a client secret and add the Microsoft Graph **User.Read.All** application permission.
+> > [!div class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the Java project
+>
+> > [!div class="sxs-lookup nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-java-daemon/archive/master.zip)
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Admin consent
+>
+> If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires admin consent: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:
+>
+> ##### Global tenant administrator
+>
+> If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> > [!div id="apipermissionspage"]
+> > [Go to the API Permissions page]()
+>
+> ##### Standard user
+>
+> If you're a standard user of your tenant, ask a global administrator to grant admin consent for your application. To do so, give the following URL to your administrator:
+>
+> ```url
+> https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+> ```
+>
+> #### Step 4: Run the application
+>
+> You can test the sample directly by running the main method of ClientCredentialGrant.java from your IDE.
+>
+> From your shell or command line:
+>
+> ```console
+> $ mvn clean compile assembly:single
+> ```
+>
+> This command generates a *msal-client-credential-secret-1.0.0.jar* file in the */target* directory. Run it with your Java executable, like this:
+>
+> ```console
+> $ java -jar msal-client-credential-secret-1.0.0.jar
+> ```
+>
+> After running, the application should display the list of users in the configured tenant.
+>
+> > [!IMPORTANT]
+> > This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons we recommend that you use a certificate instead of a client secret before considering the application a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-java-daemon/tree/master/msal-client-credential-certificate) in the **msal-client-credential-certificate** folder of the same GitHub repository.
+>
+> ## More information
+>
+> ### MSAL Java
+>
+> [MSAL Java](https://github.com/AzureAD/microsoft-authentication-library-for-java) is the library used to sign in users and request tokens that are used to access an API protected by the Microsoft identity platform. As described, this quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow used in this case is known as the *[client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md)*. For more information on how to use MSAL Java with daemon apps, see [this article](scenario-daemon-overview.md).
+>
+> Add MSAL4J to your application by using Maven or Gradle to manage your dependencies. Make the following changes to the application's *pom.xml* (Maven) or *build.gradle* (Gradle) file.
+>
+> In pom.xml:
+>
+> ```xml
+> <dependency>
+> <groupId>com.microsoft.azure</groupId>
+> <artifactId>msal4j</artifactId>
+> <version>1.0.0</version>
+> </dependency>
+> ```
+>
+> In build.gradle:
+>
+> ```gradle
+> compile group: 'com.microsoft.azure', name: 'msal4j', version: '1.0.0'
+> ```
+>
+> ### MSAL initialization
+>
+> Add a reference to MSAL for Java by adding the following code to the top of the file where you will be using MSAL4J:
+>
+> ```Java
+> import com.microsoft.aad.msal4j.*;
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```Java
+> IClientCredential credential = ClientCredentialFactory.createFromSecret(CLIENT_SECRET);
+>
+> ConfidentialClientApplication cca =
+> ConfidentialClientApplication
+> .builder(CLIENT_ID, credential)
+> .authority(AUTHORITY)
+> .build();
+> ```
+>
+> > | Where: |Description |
+> > |||
+> > | `CLIENT_SECRET` | The client secret created for the application in the Azure portal. |
+> > | `CLIENT_ID` | The **Application (client) ID** of the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+> > | `AUTHORITY` | The security token service (STS) endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID. |
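+>
+> For illustration, these values might be declared as constants at the top of the class; the strings below follow the sample's placeholder convention and are not working values:
+>
+> ```Java
+> private final static String CLIENT_ID = "Enter_the_Application_Id_Here";
+> private final static String CLIENT_SECRET = "Enter_the_Client_Secret_Here";
+> private final static String AUTHORITY = "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/";
+> ```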
+>
+> ### Requesting tokens
+>
+> To request a token by using the app's identity, use the `acquireToken` method:
+>
+> ```Java
+> IAuthenticationResult result;
+> try {
+> SilentParameters silentParameters =
+> SilentParameters
+> .builder(SCOPE)
+> .build();
+>
+> // try to acquire token silently. This call will fail since the token cache does not
+> // have a token for the application you are requesting an access token for
+> result = cca.acquireTokenSilently(silentParameters).join();
+> } catch (Exception ex) {
+> if (ex.getCause() instanceof MsalException) {
+>
+> ClientCredentialParameters parameters =
+> ClientCredentialParameters
+> .builder(SCOPE)
+> .build();
+>
+> // Try to acquire a token. If successful, you should see
+> // the token information printed out to console
+> result = cca.acquireToken(parameters).join();
+> } else {
+> // Handle other exceptions accordingly
+> throw ex;
+> }
+> }
+> return result;
+> ```
+>
+> > |Where:| Description |
+> > |||
+> > | `SCOPE` | Contains the scopes requested. For confidential clients, this should use a format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal. |
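+>
+> For example, for an app-only call to Microsoft Graph, `SCOPE` might be declared like this (a sketch using the `/.default` format described above):
+>
+> ```Java
+> // Requires java.util.Set and java.util.Collections.
+> private final static Set<String> SCOPE = Collections.singleton("https://graph.microsoft.com/.default");
+> ```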
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> To learn more about daemon applications, see the scenario landing page.
+>
+> > [!div class="nextstepaction"]
+> > [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory Quickstart V2 Java Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-java-webapp.md
# Quickstart: Add sign-in with Microsoft to a Java web app
-In this quickstart, you download and run a code sample that demonstrates how a Java web application can sign in users and call the Microsoft Graph API. Users from any Azure Active Directory (Azure AD) organization can sign in to the application.
- For an overview, see the [diagram of how the sample works](#how-the-sample-works).
-
-## Prerequisites
-
-To run this sample, you need:
-
-- [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or later.
-- [Maven](https://maven.apache.org/).
-
-
-#### Step 1: Configure your application in the Azure portal
-
-To use the code sample in this quickstart:
-
-1. Add reply URLs `https://localhost:8443/msal4jsample/secure/aad` and `https://localhost:8443/msal4jsample/graph/me`.
-1. Create a client secret.
-> [!div class="nextstepaction"]
-> [Make these changes for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download the code sample
-
-Download the project and extract the .zip file into a folder near the root of your drive. For example, *C:\Azure-Samples*.
-
-To use HTTPS with localhost, provide the `server.ssl.key` properties. To generate a self-signed certificate, use the keytool utility (included in JRE).
-
-Here's an example:
-```
- keytool -genkeypair -alias testCert -keyalg RSA -storetype PKCS12 -keystore keystore.p12 -storepass password
-
- server.ssl.key-store-type=PKCS12
- server.ssl.key-store=classpath:keystore.p12
- server.ssl.key-store-password=password
- server.ssl.key-alias=testCert
- ```
- Put the generated keystore file in the *resources* folder.
-
-> [!div class="sxs-lookup nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/ms-identity-java-webapp/archive/master.zip)
-
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-> [!div class="sxs-lookup"]
-
-#### Step 3: Run the code sample
-
-To run the project, take one of these steps:
-
-- Run it directly from your IDE by using the embedded Spring Boot server.
-- Package it to a WAR file by using [Maven](https://maven.apache.org/plugins/maven-war-plugin/usage.html), and then deploy it to a J2EE container solution like [Apache Tomcat](http://tomcat.apache.org/).
-
-##### Running the project from an IDE
-
-To run the web application from an IDE, select run, and then go to the home page of the project. For this sample, the standard home page URL is https://localhost:8443.
-
-1. On the front page, select the **Login** button to redirect users to Azure Active Directory and prompt them for credentials.
-
-1. After users are authenticated, they're redirected to `https://localhost:8443/msal4jsample/secure/aad`. They're now signed in, and the page will show information about the user account. The sample UI has these buttons:
- - **Sign Out**: Signs the current user out of the application and redirects that user to the home page.
- - **Show User Info**: Acquires a token for Microsoft Graph and calls Microsoft Graph with a request that contains the token, which returns basic information about the signed-in user.
-
-##### Running the project from Tomcat
-
-If you want to deploy the web sample to Tomcat, make a couple changes to the source code.
-
-1. Open *ms-identity-java-webapp/src/main/java/com.microsoft.azure.msalwebsample/MsalWebSampleApplication*.
-
- - Delete all source code and replace it with this code:
-
- ```Java
- package com.microsoft.azure.msalwebsample;
-
- import org.springframework.boot.SpringApplication;
- import org.springframework.boot.autoconfigure.SpringBootApplication;
- import org.springframework.boot.builder.SpringApplicationBuilder;
- import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
-
- @SpringBootApplication
- public class MsalWebSampleApplication extends SpringBootServletInitializer {
-
- public static void main(String[] args) {
- SpringApplication.run(MsalWebSampleApplication.class, args);
- }
-
- @Override
- protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
- return builder.sources(MsalWebSampleApplication.class);
- }
- }
- ```
-
-2. Tomcat's default HTTP port is 8080, but you need an HTTPS connection over port 8443. To configure this setting:
- - Go to *tomcat/conf/server.xml*.
- - Search for the `<connector>` tag, and replace the existing connector with this connector:
-
- ```xml
- <Connector
- protocol="org.apache.coyote.http11.Http11NioProtocol"
- port="8443" maxThreads="200"
- scheme="https" secure="true" SSLEnabled="true"
- keystoreFile="C:/Path/To/Keystore/File/keystore.p12" keystorePass="KeystorePassword"
- clientAuth="false" sslProtocol="TLS"/>
- ```
-
-3. Open a Command Prompt window. Go to the root folder of this sample (where the pom.xml file is located), and run `mvn package` to build the project.
- - This command will generate a *msal-web-sample-0.1.0.war* file in your */targets* directory.
- - Rename this file to *msal4jsample.war*.
- - Deploy the WAR file by using Tomcat or any other J2EE container solution.
- - To deploy the msal4jsample.war file, copy it to the */webapps/* directory in your Tomcat installation, and then start the Tomcat server.
-
-4. After the file is deployed, go to https://localhost:8443/msal4jsample by using a browser.
-
-> [!IMPORTANT]
-> This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons we recommend that you use a certificate instead of a client secret before using the application in a production environment. For more information on how to use a certificate, see [Certificate credentials for application authentication](./active-directory-certificate-credentials.md).
-
-## More information
-
-### How the sample works
-![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-java-webapp/java-quickstart.svg)
-
-### Get MSAL
-
-MSAL for Java (MSAL4J) is the Java library used to sign in users and request tokens that are used to access an API that's protected by the Microsoft identity platform.
-
-Add MSAL4J to your application by using Maven or Gradle to manage your dependencies by making the following changes to the application's pom.xml (Maven) or build.gradle (Gradle) file.
-
-In pom.xml:
-
-```xml
-<dependency>
- <groupId>com.microsoft.azure</groupId>
- <artifactId>msal4j</artifactId>
- <version>1.0.0</version>
-</dependency>
-```
-
-In build.gradle:
-
-```$xslt
-compile group: 'com.microsoft.azure', name: 'msal4j', version: '1.0.0'
-```
-
-### Initialize MSAL
-
-Add a reference to MSAL for Java by adding the following code at the start of the file where you'll be using MSAL4J:
-
-```Java
-import com.microsoft.aad.msal4j.*;
-```
--
-## Next steps
-
-For a more in-depth discussion of building web apps that sign in users on the Microsoft identity platform, see the multipart scenario series:
-
-> [!div class="nextstepaction"]
-> [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md?tabs=java)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Java web app with user sign-in](web-app-quickstart.md?pivots=devlang-java)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a Java web application can sign in users and call the Microsoft Graph API. Users from any Azure Active Directory (Azure AD) organization can sign in to the application.
+>
+> For an overview, see the [diagram of how the sample works](#how-the-sample-works).
+>
+> ## Prerequisites
+>
+> To run this sample, you need:
+>
+> - [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or later.
+> - [Maven](https://maven.apache.org/).
+>
+>
+> #### Step 1: Configure your application in the Azure portal
+>
+> To use the code sample in this quickstart:
+>
+> 1. Add reply URLs `https://localhost:8443/msal4jsample/secure/aad` and `https://localhost:8443/msal4jsample/graph/me`.
+> 1. Create a client secret.
+> > [!div class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the code sample
+>
+> Download the project and extract the .zip file into a folder near the root of your drive. For example, *C:\Azure-Samples*.
+>
+> To use HTTPS with localhost, provide the `server.ssl.key` properties. To generate a self-signed certificate, use the keytool utility (included in JRE).
+>
+> Here's an example:
+> ```
+> keytool -genkeypair -alias testCert -keyalg RSA -storetype PKCS12 -keystore keystore.p12 -storepass password
+>
+> server.ssl.key-store-type=PKCS12
+> server.ssl.key-store=classpath:keystore.p12
+> server.ssl.key-store-password=password
+> server.ssl.key-alias=testCert
+> ```
+> Put the generated keystore file in the *resources* folder.
+>
+> > [!div class="sxs-lookup nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-java-webapp/archive/master.zip)
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> > [!div class="sxs-lookup"]
+>
+> #### Step 3: Run the code sample
+>
+> To run the project, take one of these steps:
+>
+> - Run it directly from your IDE by using the embedded Spring Boot server.
+> - Package it to a WAR file by using [Maven](https://maven.apache.org/plugins/maven-war-plugin/usage.html), and then deploy it to a J2EE container solution like [Apache Tomcat](http://tomcat.apache.org/).
+>
+> ##### Running the project from an IDE
+>
+> To run the web application from an IDE, select run, and then go to the home page of the project. For this sample, the standard home page URL is https://localhost:8443.
+>
+> 1. On the front page, select the **Login** button to redirect users to Azure Active Directory and prompt them for credentials.
+>
+> 1. After users are authenticated, they're redirected to `https://localhost:8443/msal4jsample/secure/aad`. They're now signed in, and the page will show information about the user account. The sample UI has these buttons:
+> - **Sign Out**: Signs the current user out of the application and redirects that user to the home page.
+> - **Show User Info**: Acquires a token for Microsoft Graph and calls Microsoft Graph with a request that contains the token, which returns basic information about the signed-in user.
+>
+> ##### Running the project from Tomcat
+>
+> If you want to deploy the web sample to Tomcat, make a couple of changes to the source code.
+>
+> 1. Open *ms-identity-java-webapp/src/main/java/com.microsoft.azure.msalwebsample/MsalWebSampleApplication*.
+>
+> - Delete all source code and replace it with this code:
+>
+> ```Java
+> package com.microsoft.azure.msalwebsample;
+>
+> import org.springframework.boot.SpringApplication;
+> import org.springframework.boot.autoconfigure.SpringBootApplication;
+> import org.springframework.boot.builder.SpringApplicationBuilder;
+> import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
+>
+> @SpringBootApplication
+> public class MsalWebSampleApplication extends SpringBootServletInitializer {
+>
+> public static void main(String[] args) {
+> SpringApplication.run(MsalWebSampleApplication.class, args);
+> }
+>
+> @Override
+> protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
+> return builder.sources(MsalWebSampleApplication.class);
+> }
+> }
+> ```
+>
+> 2. Tomcat's default HTTP port is 8080, but you need an HTTPS connection over port 8443. To configure this setting:
+> - Go to *tomcat/conf/server.xml*.
+> - Search for the `<connector>` tag, and replace the existing connector with this connector:
+>
+> ```xml
+> <Connector
+> protocol="org.apache.coyote.http11.Http11NioProtocol"
+> port="8443" maxThreads="200"
+> scheme="https" secure="true" SSLEnabled="true"
+> keystoreFile="C:/Path/To/Keystore/File/keystore.p12" keystorePass="KeystorePassword"
+> clientAuth="false" sslProtocol="TLS"/>
+> ```
+>
+> 3. Open a Command Prompt window. Go to the root folder of this sample (where the *pom.xml* file is located), and run `mvn package` to build the project.
+>    - This command will generate a *msal-web-sample-0.1.0.war* file in the */target* directory.
+> - Rename this file to *msal4jsample.war*.
+> - Deploy the WAR file by using Tomcat or any other J2EE container solution.
+> - To deploy the msal4jsample.war file, copy it to the */webapps/* directory in your Tomcat installation, and then start the Tomcat server.
+>
+> 4. After the file is deployed, go to https://localhost:8443/msal4jsample by using a browser.
+>
+> > [!IMPORTANT]
+> > This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons we recommend that you use a certificate instead of a client secret before using the application in a production environment. For more information on how to use a certificate, see [Certificate credentials for application authentication](./active-directory-certificate-credentials.md).
+>
+> ## More information
+>
+> ### How the sample works
+> ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-java-webapp/java-quickstart.svg)
+>
+> ### Get MSAL
+>
+> MSAL for Java (MSAL4J) is the Java library used to sign in users and request tokens that are used to access an API that's protected by the Microsoft identity platform.
+>
+> Add MSAL4J to your application by using Maven or Gradle to manage your dependencies. Make the following changes to the application's *pom.xml* (Maven) or *build.gradle* (Gradle) file.
+>
+> In pom.xml:
+>
+> ```xml
+> <dependency>
+> <groupId>com.microsoft.azure</groupId>
+> <artifactId>msal4j</artifactId>
+> <version>1.0.0</version>
+> </dependency>
+> ```
+>
+> In build.gradle:
+>
+> ```gradle
+> compile group: 'com.microsoft.azure', name: 'msal4j', version: '1.0.0'
+> ```
+>
+> ### Initialize MSAL
+>
+> Add a reference to MSAL for Java by adding the following code at the start of the file where you'll be using MSAL4J:
+>
+> ```Java
+> import com.microsoft.aad.msal4j.*;
+> ```
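+>
+> Initializing MSAL4J in the web app then follows the same pattern shown in the Java daemon quickstart above; a minimal sketch, assuming `CLIENT_ID`, `CLIENT_SECRET`, and `AUTHORITY` hold the values from your app registration:
+>
+> ```Java
+> IClientCredential credential = ClientCredentialFactory.createFromSecret(CLIENT_SECRET);
+>
+> ConfidentialClientApplication cca =
+>         ConfidentialClientApplication
+>                 .builder(CLIENT_ID, credential)
+>                 .authority(AUTHORITY)
+>                 .build();
+> ```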
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> For a more in-depth discussion of building web apps that sign in users on the Microsoft identity platform, see the multipart scenario series:
+>
+> > [!div class="nextstepaction"]
+> > [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md?tabs=java)
active-directory Quickstart V2 Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-angular.md
# Quickstart: Sign in and get an access token in an Angular SPA using the auth code flow
-
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript Angular single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
-
-See [How the sample works](#how-the-sample-works) for an illustration.
-
-This quickstart uses MSAL Angular v2 with the authorization code flow.
-
-## Prerequisites
-
-* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-
-#### Step 1: Configure your application in the Azure portal
-For the code sample in this quickstart to work, add a **Redirect URI** of `http://localhost:4200/`.
-
->[!div class="nextstepaction"]
->[Make these changes for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
-
- #### Step 2: Download the project
-
-Run the project with a web server by using Node.js
-
->[!div class="nextstepaction"]
->[Download the code sample](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa/archive/main.zip)
-
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
--
-#### Step 3: Your app is configured and ready to run
-
-We have configured your project with values of your app's properties.
-
-#### Step 4: Run the project
-
-Run the project with a web server by using Node.js:
-
-1. To start the server, run the following commands from within the project directory:
- ```console
- npm install
- npm start
- ```
-1. Browse to `http://localhost:4200/`.
-
-1. Select **Login** to start the sign-in process and then call the Microsoft Graph API.
-
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, click the **Profile** button to display your user information on the page.
-
-## More information
-
-### How the sample works
-
-![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
-
-### msal.js
-
-The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
-
-If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
-
-```console
-npm install @azure/msal-browser @azure/msal-angular@2
-```
-
-## Next steps
-
-For a detailed step-by-step guide on building the auth code flow application using vanilla JavaScript, see the following tutorial:
-
-> [!div class="nextstepaction"]
-> [Tutorial to sign in and call MS Graph](./tutorial-v2-javascript-auth-code.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Angular single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-angular)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a JavaScript Angular single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> This quickstart uses MSAL Angular v2 with the authorization code flow.
+>
+> ## Prerequisites
+>
+> * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+> #### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, add a **Redirect URI** of `http://localhost:4200/`.
+>
+> >[!div class="nextstepaction"]
+> >[Make these changes for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the project
+>
+> Run the project with a web server by using Node.js.
+>
+> >[!div class="nextstepaction"]
+> >[Download the code sample](https://github.com/Azure-Samples/ms-identity-javascript-angular-spa/archive/main.zip)
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+>
+> #### Step 3: Your app is configured and ready to run
+>
+> We have configured your project with values of your app's properties.
+>
+> #### Step 4: Run the project
+>
+> Run the project with a web server by using Node.js:
+>
+> 1. To start the server, run the following commands from within the project directory:
+> ```console
+> npm install
+> npm start
+> ```
+> 1. Browse to `http://localhost:4200/`.
+>
+> 1. Select **Login** to start the sign-in process and then call the Microsoft Graph API.
+>
+> The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, click the **Profile** button to display your user information on the page.
+>
+> ## More information
+>
+> ### How the sample works
+>
+> ![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+>
+> ### msal.js
+>
+> The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+>
+> If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+>
+> ```console
+> npm install @azure/msal-browser @azure/msal-angular@2
+> ```
+>
+> ## Next steps
+>
+> For a detailed step-by-step guide on building the auth code flow application using vanilla JavaScript, see the following tutorial:
+>
+> > [!div class="nextstepaction"]
+> > [Tutorial to sign in and call MS Graph](./tutorial-v2-javascript-auth-code.md)
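The Angular quickstart above repeatedly references the app registration values (client ID, redirect URI) that the "Make these changes for me" button fills in. As a rough illustration only, here is the general shape of an MSAL.js browser configuration such a sample consumes; every value below is a placeholder, not a working registration:

```javascript
// Minimal sketch of an MSAL.js auth configuration for an Angular SPA sample.
// All identifiers are placeholders -- substitute your own app registration's
// values from the Azure portal.
const msalConfig = {
  auth: {
    clientId: "Enter_the_Application_Id_Here", // application (client) ID
    authority: "https://login.microsoftonline.com/Enter_the_Tenant_Info_Here",
    redirectUri: "http://localhost:4200/", // must match the registered Redirect URI
  },
  cache: {
    cacheLocation: "localStorage", // persist tokens across tabs and refreshes
  },
};

module.exports = { msalConfig };
```

Note that the redirect URI here must match the `http://localhost:4200/` value registered in Step 1 exactly, including the trailing slash, or sign-in will fail with a redirect mismatch error.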
active-directory Quickstart V2 Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code-react.md
#Customer intent: As an app developer, I want to learn how to login, logout, conditionally render components to authenticated users, and acquire an access token for a protected resource such as Microsoft Graph by using the Microsoft identity platform so that my JavaScript React app can sign in users of personal accounts, work accounts, and school accounts.
-# Quickstart: Sign in and get an access token in a React SPA using the auth code flow
-
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript React single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
-
-See [How the sample works](#how-the-sample-works) for an illustration.
-
-## Prerequisites
-
-* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-
-#### Step 1: Configure your application in the Azure portal
-
-This code samples requires a **Redirect URI** of `http://localhost:3000/`.
-> [!div class="nextstepaction"]
-> [Make these changes for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download the project
-
-Run the project with a web server by using Node.js
-
-> [!div class="nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/ms-identity-javascript-react-spa/archive/main.zip)
-
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
--
-#### Step 3: Your app is configured and ready to run
-We have configured your project with values of your app's properties.
-
-#### Step 4: Run the project
-
-Run the project with a web server by using Node.js:
-
-1. To start the server, run the following commands from within the project directory:
- ```console
- npm install
- npm start
- ```
-1. Browse to `http://localhost:3000/`.
-
-1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
-
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, click on the **Request Profile Information** to display your profile information on the page.
-
-## More information
-
-### How the sample works
-
-![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
-
-### msal.js
-
-The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
-
-If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
-
-```console
-npm install @azure/msal-browser @azure/msal-react
-```
-
-## Next steps
-
-Next, try a step-by-step tutorial to learn how to build a React SPA from scratch that signs in users and calls the Microsoft Graph API to get user profile data:
-
-> [!div class="nextstepaction"]
-> [Tutorial: Sign in users and call Microsoft Graph](tutorial-v2-react.md)
+# Quickstart: Sign in and get an access token in a React SPA using the auth code flow
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: React single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-react)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a JavaScript React single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> ## Prerequisites
+>
+> * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+> #### Step 1: Configure your application in the Azure portal
+>
+> This code sample requires a **Redirect URI** of `http://localhost:3000/`.
+> > [!div class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the project
+>
+> Run the project with a web server by using Node.js.
+>
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-javascript-react-spa/archive/main.zip)
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+>
+> #### Step 3: Your app is configured and ready to run
+> We have configured your project with values of your app's properties.
+>
+> #### Step 4: Run the project
+>
+> Run the project with a web server by using Node.js:
+>
+> 1. To start the server, run the following commands from within the project directory:
+> ```console
+> npm install
+> npm start
+> ```
+> 1. Browse to `http://localhost:3000/`.
+>
+> 1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
+>
+> The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, click on the **Request Profile Information** to display your profile information on the page.
+>
+> ## More information
+>
+> ### How the sample works
+>
+> ![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+>
+> ### msal.js
+>
+> The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+>
+> If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+>
+> ```console
+> npm install @azure/msal-browser @azure/msal-react
+> ```
+>
+> ## Next steps
+>
+> Next, try a step-by-step tutorial to learn how to build a React SPA from scratch that signs in users and calls the Microsoft Graph API to get user profile data:
+>
+> > [!div class="nextstepaction"]
+> > [Tutorial: Sign in users and call Microsoft Graph](tutorial-v2-react.md)
active-directory Quickstart V2 Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-javascript-auth-code.md
# Quickstart: Sign in users and get an access token in a JavaScript SPA using the auth code flow with PKCE
-In this quickstart, you download and run a code sample that demonstrates how a JavaScript single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow with Proof Key for Code Exchange (PKCE). The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
-
-See [How the sample works](#how-the-sample-works) for an illustration.
-
-## Prerequisites
-
-* Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
--
-#### Step 1: Configure your application in the Azure portal
-For the code sample in this quickstart to work, add a **Redirect URI** of `http://localhost:3000/`.
-> [!div class="nextstepaction"]
-> [Make these changes for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download the project
-
-Run the project with a web server by using Node.js
-
-> [!div class="nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/ms-identity-javascript-v2/archive/master.zip)
-
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-#### Step 3: Your app is configured and ready to run
-
-We have configured your project with values of your app's properties.
-
-Run the project with a web server by using Node.js.
-
-1. To start the server, run the following commands from within the project directory:
-
- ```console
- npm install
- npm start
- ```
-
-1. Go to `http://localhost:3000/`.
-
-1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
-
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, your user profile information is displayed on the page.
-
-## More information
-
-### How the sample works
-
-![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
-
-### MSAL.js
-
-The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by Microsoft identity platform. The sample's *https://docsupdatetracker.net/index.html* file contains a reference to the library:
-
-```html
-<script type="text/javascript" src="https://alcdn.msauth.net/browser/2.0.0-beta.0/js/msal-browser.js" integrity=
-"sha384-r7Qxfs6PYHyfoBR6zG62DGzptfLBxnREThAlcJyEfzJ4dq5rqExc1Xj3TPFE/9TH" crossorigin="anonymous"></script>
-```
-
-If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
-
-```console
-npm install @azure/msal-browser
-```
-
-## Next steps
-
-For a more detailed step-by-step guide on building the application used in this quickstart, see the following tutorial:
-
-> [!div class="nextstepaction"]
-> [Tutorial to sign in and call MS Graph](./tutorial-v2-javascript-auth-code.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: JavaScript single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-javascript)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a JavaScript single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow with Proof Key for Code Exchange (PKCE). The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> ## Prerequisites
+>
+> * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+>
+> #### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, add a **Redirect URI** of `http://localhost:3000/`.
+> > [!div class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the project
+>
+> Run the project with a web server by using Node.js.
+>
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-javascript-v2/archive/master.zip)
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Your app is configured and ready to run
+>
+> We have configured your project with values of your app's properties.
+>
+> Run the project with a web server by using Node.js.
+>
+> 1. To start the server, run the following commands from within the project directory:
+>
+> ```console
+> npm install
+> npm start
+> ```
+>
+> 1. Go to `http://localhost:3000/`.
+>
+> 1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
+>
+> The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, your user profile information is displayed on the page.
+>
+> ## More information
+>
+> ### How the sample works
+>
+> ![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+>
+> ### MSAL.js
+>
+> The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform. The sample's *https://docsupdatetracker.net/index.html* file contains a reference to the library:
+>
+> ```html
+> <script type="text/javascript" src="https://alcdn.msauth.net/browser/2.0.0-beta.0/js/msal-browser.js" integrity=
+> "sha384-r7Qxfs6PYHyfoBR6zG62DGzptfLBxnREThAlcJyEfzJ4dq5rqExc1Xj3TPFE/9TH" crossorigin="anonymous"></script>
+> ```
+>
+> If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+>
+> ```console
+> npm install @azure/msal-browser
+> ```
+>
+> ## Next steps
+>
+> For a more detailed step-by-step guide on building the application used in this quickstart, see the following tutorial:
+>
+> > [!div class="nextstepaction"]
+> > [Tutorial to sign in and call MS Graph](./tutorial-v2-javascript-auth-code.md)
active-directory Quickstart V2 Netcore Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-netcore-daemon.md
# Quickstart: Get a token and call the Microsoft Graph API by using a console app's identity
-In this quickstart, you download and run a code sample that demonstrates how a .NET Core console application can get an access token to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample also demonstrates how a job or a Windows service can run with an application identity, instead of a user's identity. The sample console application in this quickstart is also a daemon application, so it's a confidential client application.
-## Prerequisites
-
-This quickstart requires [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) but will also work with .NET 5.0 SDK.
-
-> [!div class="sxs-lookup"]
-### Download and configure your quickstart app
-
-#### Step 1: Configure your application in the Azure portal
-For the code sample in this quickstart to work, create a client secret and add the Graph API's **User.Read.All** application permission.
-> [!div class="nextstepaction"]
-> [Make these changes for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download your Visual Studio project
-
-> [!div class="sxs-lookup"]
-> Run the project by using Visual Studio 2019.
-> [!div class="sxs-lookup" id="autoupdate" class="nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/archive/master.zip)
--
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-#### Step 3: Admin consent
-
-If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: "Insufficient privileges to complete the operation." This error happens because any app-only permission requires a global administrator of your directory to give consent to your application. Select one of the following options, depending on your role.
-
-##### Global tenant administrator
-
-If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
-> [!div id="apipermissionspage"]
-> [Go to the API Permissions page]()
-
-##### Standard user
-
-If you're a standard user of your tenant, ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
-
-```url
-https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
-```
-
-You might see the error "AADSTS50011: No reply address is registered for the application" after you grant consent to the app by using the preceding URL. This error happens because this application and the URL don't have a redirect URI. You can ignore it.
-
-#### Step 4: Run the application
-
-If you're using Visual Studio or Visual Studio for Mac, press **F5** to run the application. Otherwise, run the application via command prompt, console, or terminal:
-
-```dotnetcli
-cd {ProjectFolder}\1-Call-MSGraph\daemon-console
-dotnet run
-```
-In that code:
-* `{ProjectFolder}` is the folder where you extracted the .zip file. An example is `C:\Azure-Samples\active-directory-dotnetcore-daemon-v2`.
-
-You should see a list of users in Azure Active Directory as result.
-
-This quickstart application uses a client secret to identify itself as a confidential client. The client secret is added as a plain-text file to your project files. For security reasons, we recommend that you use a certificate instead of a client secret before considering the application as a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/#variation-daemon-application-using-client-credentials-with-certificates) in the GitHub repository for this sample.
-
-## More information
-This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing .NET Core console application.
-
-> [!div class="sxs-lookup"]
-### How the sample works
-
-![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
-
-### MSAL.NET
-
-Microsoft Authentication Library (MSAL, in the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) package) is the library that's used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials).
-
- You can install MSAL.NET by running the following command in the Visual Studio Package Manager Console:
-
-```dotnetcli
-dotnet add package Microsoft.Identity.Client
-```
-
-### MSAL initialization
-
-You can add the reference for MSAL by adding the following code:
-
-```csharp
-using Microsoft.Identity.Client;
-```
-
-Then, initialize MSAL by using the following code:
-
-```csharp
-IConfidentialClientApplication app;
-app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
- .WithClientSecret(config.ClientSecret)
- .WithAuthority(new Uri(config.Authority))
- .Build();
-```
-
- | Element | Description |
- |||
- | `config.ClientSecret` | The client secret created for the application in the Azure portal. |
- | `config.ClientId` | The application (client) ID for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
- | `config.Authority` | (Optional) The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID.|
-
-For more information, see the [reference documentation for `ConfidentialClientApplication`](/dotnet/api/microsoft.identity.client.iconfidentialclientapplication).
-
-### Requesting tokens
-
-To request a token by using the app's identity, use the `AcquireTokenForClient` method:
-
-```csharp
-result = await app.AcquireTokenForClient(scopes)
- .ExecuteAsync();
-```
-
-|Element| Description |
-|||
-| `scopes` | Contains the requested scopes. For confidential clients, this value should use a format similar to `{Application ID URI}/.default`. This format indicates that the requested scopes are the ones that are statically defined in the app object set in the Azure portal. For Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`. For custom web APIs, `{Application ID URI}` is defined in the Azure portal, under **Application Registration (Preview)** > **Expose an API**. |
-
-For more information, see the [reference documentation for `AcquireTokenForClient`](/dotnet/api/microsoft.identity.client.confidentialclientapplication.acquiretokenforclient).
--
-## Next steps
-
-To learn more about daemon applications, see the scenario overview:
-
-> [!div class="nextstepaction"]
-> [Daemon application that calls web APIs](scenario-daemon-overview.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: .NET Core console that calls an API](console-app-quickstart.md?pivots=devlang-dotnet-core)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a .NET Core console application can get an access token to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample also demonstrates how a job or a Windows service can run with an application identity, instead of a user's identity. The sample console application in this quickstart is also a daemon application, so it's a confidential client application.
+>
+> ## Prerequisites
+>
+> This quickstart requires [.NET Core 3.1 SDK](https://dotnet.microsoft.com/download) but will also work with .NET 5.0 SDK.
+>
+> > [!div class="sxs-lookup"]
+> ### Download and configure your quickstart app
+>
+> #### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, create a client secret and add the Graph API's **User.Read.All** application permission.
+> > [!div class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download your Visual Studio project
+>
+> > [!div class="sxs-lookup"]
+> > Run the project by using Visual Studio 2019.
+> > [!div class="sxs-lookup" id="autoupdate" class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/archive/master.zip)
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Admin consent
+>
+> If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: "Insufficient privileges to complete the operation." This error happens because any app-only permission requires a global administrator of your directory to give consent to your application. Select one of the following options, depending on your role.
+>
+> ##### Global tenant administrator
+>
+> If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> > [!div id="apipermissionspage"]
+> > [Go to the API Permissions page]()
+>
+> ##### Standard user
+>
+> If you're a standard user of your tenant, ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
+>
+> ```url
+> https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+> ```
+>
+> You might see the error "AADSTS50011: No reply address is registered for the application" after you grant consent to the app by using the preceding URL. This error happens because this application and the URL don't have a redirect URI. You can ignore it.
+>
+> #### Step 4: Run the application
+>
+> If you're using Visual Studio or Visual Studio for Mac, press **F5** to run the application. Otherwise, run the application via command prompt, console, or terminal:
+>
+> ```dotnetcli
+> cd {ProjectFolder}\1-Call-MSGraph\daemon-console
+> dotnet run
+> ```
+> In that code:
+> * `{ProjectFolder}` is the folder where you extracted the .zip file. An example is `C:\Azure-Samples\active-directory-dotnetcore-daemon-v2`.
+>
+> You should see a list of users in Azure Active Directory as a result.
+>
+> This quickstart application uses a client secret to identify itself as a confidential client. The client secret is added as a plain-text file to your project files. For security reasons, we recommend that you use a certificate instead of a client secret before considering the application as a production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/active-directory-dotnetcore-daemon-v2/#variation-daemon-application-using-client-credentials-with-certificates) in the GitHub repository for this sample.
+>
+> ## More information
+> This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing .NET Core console application.
+>
+> > [!div class="sxs-lookup"]
+> ### How the sample works
+>
+> ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-netcore-daemon/netcore-daemon-intro.svg)
+>
+> ### MSAL.NET
+>
+> Microsoft Authentication Library (MSAL, in the [Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client) package) is the library that's used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. This quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow in this case is known as a [client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL.NET with a client credentials flow, see [this article](https://aka.ms/msal-net-client-credentials).
+>
+> You can install MSAL.NET by running the following command in the Visual Studio Package Manager Console:
+>
+> ```dotnetcli
+> dotnet add package Microsoft.Identity.Client
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```csharp
+> using Microsoft.Identity.Client;
+> ```
+>
+> Then, initialize MSAL by using the following code:
+>
+> ```csharp
+> IConfidentialClientApplication app;
+> app = ConfidentialClientApplicationBuilder.Create(config.ClientId)
+> .WithClientSecret(config.ClientSecret)
+> .WithAuthority(new Uri(config.Authority))
+> .Build();
+> ```
+>
+> | Element | Description |
+> |||
+> | `config.ClientSecret` | The client secret created for the application in the Azure portal. |
+> | `config.ClientId` | The application (client) ID for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+> | `config.Authority` | (Optional) The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID.|
+>
+> For more information, see the [reference documentation for `ConfidentialClientApplication`](/dotnet/api/microsoft.identity.client.iconfidentialclientapplication).
+>
+> ### Requesting tokens
+>
+> To request a token by using the app's identity, use the `AcquireTokenForClient` method:
+>
+> ```csharp
+> result = await app.AcquireTokenForClient(scopes)
+> .ExecuteAsync();
+> ```
+>
+> |Element| Description |
+> |||
+> | `scopes` | Contains the requested scopes. For confidential clients, this value should use a format similar to `{Application ID URI}/.default`. This format indicates that the requested scopes are the ones that are statically defined in the app object set in the Azure portal. For Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`. For custom web APIs, `{Application ID URI}` is defined in the Azure portal, under **Application Registration (Preview)** > **Expose an API**. |
+>
+> For more information, see the [reference documentation for `AcquireTokenForClient`](/dotnet/api/microsoft.identity.client.confidentialclientapplication.acquiretokenforclient).
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> To learn more about daemon applications, see the scenario overview:
+>
+> > [!div class="nextstepaction"]
+> > [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory Quickstart V2 Nodejs Console https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-console.md
# Quickstart: Acquire a token and call Microsoft Graph API from a Node.js console app using app's identity
-In this quickstart, you download and run a code sample that demonstrates how a Node.js console application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
-
-This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) with the [client credentials grant](v2-oauth2-client-creds-grant-flow.md).
-
-## Prerequisites
-
-* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
--
-### Download and configure the sample app
-
-#### Step 1: Configure the application in Azure portal
-For the code sample for this quickstart to work, you need to create a client secret, and add Graph API's **User.Read.All** application permission.
-> [!div class="nextstepaction"]
-> [Make these changes for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download the Node.js sample project
-
-> [!div class="sxs-lookup nextstepaction"]
-> [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-console/archive/main.zip)
-
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-#### Step 3: Admin consent
-
-If you try to run the application at this point, you'll receive *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires **admin consent**: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:
-
-##### Global tenant administrator
-
-If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**
-> > [!div id="apipermissionspage"]
-> > [Go to the API Permissions page]()
-
-##### Standard user
-
-If you're a standard user of your tenant, then you need to ask a global administrator to grant **admin consent** for your application. To do this, give the following URL to your administrator:
-
-```url
-https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
-```
-
-#### Step 4: Run the application
-
-Locate the sample's root folder (where `package.json` resides) in a command prompt or console. You'll need to install the dependencies of this sample once:
-
-```console
-npm install
-```
-
-Then, run the application via command prompt or console:
-
-```console
-node . --op getUsers
-```
-
-You should see on the console output some JSON fragment representing a list of users in your Azure AD directory.
-
-## About the code
-
-Below, some of the important aspects of the sample application are discussed.
-
-### MSAL Node
-
-[MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. As described, this quickstart requests tokens by application permissions (using the application's own identity) instead of delegated permissions. The authentication flow used in this case is known as [OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL Node with daemon apps, see [Scenario: Daemon application](scenario-daemon-overview.md).
-
- You can install MSAL Node by running the following npm command.
-
-```console
-npm install @azure/msal-node --save
-```
-
-### MSAL initialization
-
-You can add the reference for MSAL by adding the following code:
-
-```javascript
-const msal = require('@azure/msal-node');
-```
-
-Then, initialize MSAL using the following code:
-
-```javascript
-const msalConfig = {
- auth: {
- clientId: "Enter_the_Application_Id_Here",
- authority: "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
- clientSecret: "Enter_the_Client_Secret_Here",
- }
-};
-const cca = new msal.ConfidentialClientApplication(msalConfig);
-```
-
-> | Where: |Description |
-> |||
-> | `clientId` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
-> | `authority` | The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant Id.|
-> | `clientSecret` | Is the client secret created for the application in Azure Portal. |
-
-For more information, please see the [reference documentation for `ConfidentialClientApplication`](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md)
-
-### Requesting tokens
-
-To request a token using app's identity, use `acquireTokenByClientCredential` method:
-
-```javascript
-const tokenRequest = {
- scopes: [ 'https://graph.microsoft.com/.default' ],
-};
-
-const tokenResponse = await cca.acquireTokenByClientCredential(tokenRequest);
-```
-
-> |Where:| Description |
-> |||
-> | `tokenRequest` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure Portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under **Expose an API** section in Azure Portal's Application Registration. |
-> | `tokenResponse` | The response contains an access token for the scopes requested. |
--
-## Next steps
-
-To learn more about daemon/console app development with MSAL Node, see the tutorial:
-
-> [!div class="nextstepaction"]
-> [Daemon application that calls web APIs](tutorial-v2-nodejs-console.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Node.js console app that calls an API](console-app-quickstart.md?pivots=devlang-nodejs)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a Node.js console application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
+>
+> This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) with the [client credentials grant](v2-oauth2-client-creds-grant-flow.md).
+>
+> ## Prerequisites
+>
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+>
+> ### Download and configure the sample app
+>
+> #### Step 1: Configure the application in Azure portal
+> For the code sample for this quickstart to work, you need to create a client secret and add the Graph API's **User.Read.All** application permission.
+> > [!div class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the Node.js sample project
+>
+> > [!div class="sxs-lookup nextstepaction"]
+> > [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-console/archive/main.zip)
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Admin consent
+>
+> If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires **admin consent**: a global administrator of your directory must give consent to your application. Select one of the options below, depending on your role:
+>
+> ##### Global tenant administrator
+>
+> If you are a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> > > [!div id="apipermissionspage"]
+> > > [Go to the API Permissions page]()
+>
+> ##### Standard user
+>
+> If you're a standard user of your tenant, then you need to ask a global administrator to grant **admin consent** for your application. To do this, give the following URL to your administrator:
+>
+> ```url
+> https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+> ```
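If you prefer to build this URL programmatically, a minimal sketch with Node's built-in `URL` class (using the same placeholder IDs as above) looks like this:

```javascript
// Sketch: assembling the admin consent URL from placeholder IDs.
// Replace the placeholders with your real tenant and application IDs.
const tenantId = "Enter_the_Tenant_Id_Here";
const clientId = "Enter_the_Application_Id_Here";

const adminConsentUrl = new URL(`https://login.microsoftonline.com/${tenantId}/adminconsent`);
adminConsentUrl.searchParams.set("client_id", clientId);

console.log(adminConsentUrl.toString());
```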
+>
+> #### Step 4: Run the application
+>
+> Locate the sample's root folder (where `package.json` resides) in a command prompt or console. You'll need to install the dependencies of this sample once:
+>
+> ```console
+> npm install
+> ```
+>
+> Then, run the application via command prompt or console:
+>
+> ```console
+> node . --op getUsers
+> ```
+>
+> You should see a JSON fragment in the console output representing a list of users in your Azure AD directory.
+>
+> ## About the code
+>
+> The following sections discuss some of the important aspects of the sample application.
+>
+> ### MSAL Node
+>
+> [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) is the library used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. As described, this quickstart requests tokens by using application permissions (the application's own identity) instead of delegated permissions. The authentication flow used in this case is known as the [OAuth 2.0 client credentials flow](v2-oauth2-client-creds-grant-flow.md). For more information on how to use MSAL Node with daemon apps, see [Scenario: Daemon application](scenario-daemon-overview.md).
+>
+> You can install MSAL Node by running the following npm command.
+>
+> ```console
+> npm install @azure/msal-node --save
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```javascript
+> const msal = require('@azure/msal-node');
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```javascript
+> const msalConfig = {
+> auth: {
+> clientId: "Enter_the_Application_Id_Here",
+> authority: "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
+> clientSecret: "Enter_the_Client_Secret_Here",
+> }
+> };
+> const cca = new msal.ConfidentialClientApplication(msalConfig);
+> ```
+>
+> > | Where: | Description |
+> > |||
+> > | `clientId` | The **Application (client) ID** for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+> > | `authority` | The STS endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID. |
+> > | `clientSecret` | The client secret created for the application in the Azure portal. |
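Hard-coding `clientSecret` is fine for a quickstart, but a common variation is to source these values from environment variables so the secret isn't committed with the source. This is a sketch only; the environment variable names below are illustrative, not required by MSAL:

```javascript
// Sketch (illustrative variable names): read the MSAL config from environment
// variables, falling back to the quickstart placeholders if they're unset.
const msalConfig = {
  auth: {
    clientId: process.env.AZURE_CLIENT_ID ?? "Enter_the_Application_Id_Here",
    authority: `https://login.microsoftonline.com/${process.env.AZURE_TENANT_ID ?? "Enter_the_Tenant_Id_Here"}`,
    clientSecret: process.env.AZURE_CLIENT_SECRET ?? "Enter_the_Client_Secret_Here",
  },
};

console.log(msalConfig.auth.authority);
```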
+>
+> For more information, see the [reference documentation for `ConfidentialClientApplication`](https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-node/docs/initialize-confidential-client-application.md).
+>
+> ### Requesting tokens
+>
+> To request a token by using the app's identity, use the `acquireTokenByClientCredential` method:
+>
+> ```javascript
+> const tokenRequest = {
+> scopes: [ 'https://graph.microsoft.com/.default' ],
+> };
+>
+> const tokenResponse = await cca.acquireTokenByClientCredential(tokenRequest);
+> ```
+>
+> > | Where: | Description |
+> > |||
+> > | `tokenRequest` | Contains the scopes requested. For confidential clients, this value should use a format similar to `{Application ID URI}/.default`, which indicates that the requested scopes are the ones statically defined in the app object set in the Azure portal. For Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`. For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section of the app registration in the Azure portal. |
+> > | `tokenResponse` | The response contains an access token for the requested scopes. |
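Once `tokenResponse` comes back, the access token is typically attached as a bearer credential on calls to Microsoft Graph. The sketch below only builds the request object with a placeholder token; it does not perform the network call:

```javascript
// Sketch: attach the acquired token as a bearer credential when calling
// Microsoft Graph. The token below is a placeholder, not a real token.
const tokenResponse = { accessToken: "eyJ0eXAi...placeholder" };

const graphRequest = {
  url: "https://graph.microsoft.com/v1.0/users",
  headers: { Authorization: `Bearer ${tokenResponse.accessToken}` },
};

console.log(graphRequest.headers.Authorization.split(" ")[0]);
```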
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> To learn more about daemon/console app development with MSAL Node, see the tutorial:
+>
+> > [!div class="nextstepaction"]
+> > [Daemon application that calls web APIs](tutorial-v2-nodejs-console.md)
active-directory Quickstart V2 Nodejs Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-desktop.md
# Quickstart: Acquire an access token and call the Microsoft Graph API from an Electron desktop app
-In this quickstart, you download and run a code sample that demonstrates how an Electron desktop application can sign in users and acquire access tokens to call the Microsoft Graph API.
-
-This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) with the [authorization code flow with PKCE](v2-oauth2-auth-code-flow.md).
-
-## Prerequisites
-
-* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-
-#### Step 1: Configure the application in Azure portal
-For the code sample for this quickstart to work, you need to add a reply URL as **msal://redirect**.
-> [!div class="nextstepaction"]
-> [Make this change for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download the Electron sample project
-
-> [!div class="nextstepaction"]
-> [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-desktop/archive/main.zip)
-
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-#### Step 4: Run the application
-
-You'll need to install the dependencies of this sample once:
-
-```console
-npm install
-```
-
-Then, run the application via command prompt or console:
-
-```console
-npm start
-```
-
-You should see application's UI with a **Sign in** button.
-
-## About the code
-
-Below, some of the important aspects of the sample application are discussed.
-
-### MSAL Node
-
-[MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. For more information on how to use MSAL Node with desktop apps, see [this article](scenario-desktop-overview.md).
-
-You can install MSAL Node by running the following npm command.
-
-```console
-npm install @azure/msal-node --save
-```
-
-### MSAL initialization
-
-You can add the reference for MSAL Node by adding the following code:
-
-```javascript
-const { PublicClientApplication } = require('@azure/msal-node');
-```
-
-Then, initialize MSAL using the following code:
-
-```javascript
-const MSAL_CONFIG = {
- auth: {
- clientId: "Enter_the_Application_Id_Here",
- authority: "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
- },
-};
-
-const pca = new PublicClientApplication(MSAL_CONFIG);
-```
-
-> | Where: |Description |
-> |||
-> | `clientId` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
-> | `authority` | The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant Id.|
-
-### Requesting tokens
-
-In the first leg of authorization code flow with PKCE, prepare and send an authorization code request with the appropriate parameters. Then, in the second leg of the flow, listen for the authorization code response. Once the code is obtained, exchange it to obtain a token.
-
-```javascript
-// The redirect URI you setup during app registration with a custom file protocol "msal"
-const redirectUri = "msal://redirect";
-
-const cryptoProvider = new CryptoProvider();
-
-const pkceCodes = {
- challengeMethod: "S256", // Use SHA256 Algorithm
- verifier: "", // Generate a code verifier for the Auth Code Request first
- challenge: "" // Generate a code challenge from the previously generated code verifier
-};
-
-/**
- * Starts an interactive token request
- * @param {object} authWindow: Electron window object
- * @param {object} tokenRequest: token request object with scopes
- */
-async function getTokenInteractive(authWindow, tokenRequest) {
-
- /**
- * Proof Key for Code Exchange (PKCE) Setup
- *
- * MSAL enables PKCE in the Authorization Code Grant Flow by including the codeChallenge and codeChallengeMethod
- * parameters in the request passed into getAuthCodeUrl() API, as well as the codeVerifier parameter in the
- * second leg (acquireTokenByCode() API).
- */
-
- const {verifier, challenge} = await cryptoProvider.generatePkceCodes();
-
- pkceCodes.verifier = verifier;
- pkceCodes.challenge = challenge;
-
- const authCodeUrlParams = {
- redirectUri: redirectUri
- scopes: tokenRequest.scopes,
- codeChallenge: pkceCodes.challenge, // PKCE Code Challenge
- codeChallengeMethod: pkceCodes.challengeMethod // PKCE Code Challenge Method
- };
-
- const authCodeUrl = await pca.getAuthCodeUrl(authCodeUrlParams);
-
- // register the custom file protocol in redirect URI
- protocol.registerFileProtocol(redirectUri.split(":")[0], (req, callback) => {
- const requestUrl = url.parse(req.url, true);
- callback(path.normalize(`${__dirname}/${requestUrl.path}`));
- });
-
- const authCode = await listenForAuthCode(authCodeUrl, authWindow); // see below
-
- const authResponse = await pca.acquireTokenByCode({
- redirectUri: redirectUri,
- scopes: tokenRequest.scopes,
- code: authCode,
- codeVerifier: pkceCodes.verifier // PKCE Code Verifier
- });
-
- return authResponse;
-}
-
-/**
- * Listens for auth code response from Azure AD
- * @param {string} navigateUrl: URL where auth code response is parsed
- * @param {object} authWindow: Electron window object
- */
-async function listenForAuthCode(navigateUrl, authWindow) {
-
- authWindow.loadURL(navigateUrl);
-
- return new Promise((resolve, reject) => {
- authWindow.webContents.on('will-redirect', (event, responseUrl) => {
- try {
- const parsedUrl = new URL(responseUrl);
- const authCode = parsedUrl.searchParams.get('code');
- resolve(authCode);
- } catch (err) {
- reject(err);
- }
- });
- });
-}
-```
-
-> |Where:| Description |
-> |||
-> | `authWindow` | Current Electron window in process. |
-> | `tokenRequest` | Contains the scopes being requested, such as `"User.Read"` for Microsoft Graph or `"api://<Application ID>/access_as_user"` for custom web APIs. |
-
-## Next steps
-
-To learn more about Electron desktop app development with MSAL Node, see the tutorial:
-
-> [!div class="nextstepaction"]
-> [Tutorial: Sign in users and call the Microsoft Graph API in an Electron desktop app](tutorial-v2-nodejs-desktop.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Node.js Electron desktop app with user sign-in](desktop-app-quickstart.md?pivots=devlang-nodejs-electron)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how an Electron desktop application can sign in users and acquire access tokens to call the Microsoft Graph API.
+>
+> This quickstart uses the [Microsoft Authentication Library for Node.js (MSAL Node)](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) with the [authorization code flow with PKCE](v2-oauth2-auth-code-flow.md).
+>
+> ## Prerequisites
+>
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+> #### Step 1: Configure the application in Azure portal
+> For the code sample for this quickstart to work, you need to add **msal://redirect** as a reply URL.
+> > [!div class="nextstepaction"]
+> > [Make this change for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the Electron sample project
+>
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/azure-samples/ms-identity-javascript-nodejs-desktop/archive/main.zip)
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 4: Run the application
+>
+> You'll need to install the dependencies of this sample once:
+>
+> ```console
+> npm install
+> ```
+>
+> Then, run the application via command prompt or console:
+>
+> ```console
+> npm start
+> ```
+>
+> You should see the application's UI with a **Sign in** button.
+>
+> ## About the code
+>
+> The following sections discuss some of the important aspects of the sample application.
+>
+> ### MSAL Node
+>
+> [MSAL Node](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-node) is the library used to sign in users and request tokens for accessing an API protected by the Microsoft identity platform. For more information on how to use MSAL Node with desktop apps, see [this article](scenario-desktop-overview.md).
+>
+> You can install MSAL Node by running the following npm command.
+>
+> ```console
+> npm install @azure/msal-node --save
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL Node by adding the following code:
+>
+> ```javascript
+> const { PublicClientApplication } = require('@azure/msal-node');
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```javascript
+> const MSAL_CONFIG = {
+> auth: {
+> clientId: "Enter_the_Application_Id_Here",
+> authority: "https://login.microsoftonline.com/Enter_the_Tenant_Id_Here",
+> },
+> };
+>
+> const pca = new PublicClientApplication(MSAL_CONFIG);
+> ```
+>
+> > | Where: | Description |
+> > |||
+> > | `clientId` | The **Application (client) ID** for the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+> > | `authority` | The STS endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where `{tenant}` is the name of your tenant or your tenant ID. |
+>
+> ### Requesting tokens
+>
+> In the first leg of authorization code flow with PKCE, prepare and send an authorization code request with the appropriate parameters. Then, in the second leg of the flow, listen for the authorization code response. Once the code is obtained, exchange it to obtain a token.
+>
+> ```javascript
+> // The redirect URI you setup during app registration with a custom file protocol "msal"
+> const redirectUri = "msal://redirect";
+>
+> const cryptoProvider = new CryptoProvider();
+>
+> const pkceCodes = {
+> challengeMethod: "S256", // Use SHA256 Algorithm
+> verifier: "", // Generate a code verifier for the Auth Code Request first
+> challenge: "" // Generate a code challenge from the previously generated code verifier
+> };
+>
+> /**
+> * Starts an interactive token request
+> * @param {object} authWindow: Electron window object
+> * @param {object} tokenRequest: token request object with scopes
+> */
+> async function getTokenInteractive(authWindow, tokenRequest) {
+>
+> /**
+> * Proof Key for Code Exchange (PKCE) Setup
+> *
+> * MSAL enables PKCE in the Authorization Code Grant Flow by including the codeChallenge and codeChallengeMethod
+> * parameters in the request passed into getAuthCodeUrl() API, as well as the codeVerifier parameter in the
+> * second leg (acquireTokenByCode() API).
+> */
+>
+> const {verifier, challenge} = await cryptoProvider.generatePkceCodes();
+>
+> pkceCodes.verifier = verifier;
+> pkceCodes.challenge = challenge;
+>
+> const authCodeUrlParams = {
+>         redirectUri: redirectUri,
+> scopes: tokenRequest.scopes,
+> codeChallenge: pkceCodes.challenge, // PKCE Code Challenge
+> codeChallengeMethod: pkceCodes.challengeMethod // PKCE Code Challenge Method
+> };
+>
+> const authCodeUrl = await pca.getAuthCodeUrl(authCodeUrlParams);
+>
+> // register the custom file protocol in redirect URI
+> protocol.registerFileProtocol(redirectUri.split(":")[0], (req, callback) => {
+> const requestUrl = url.parse(req.url, true);
+> callback(path.normalize(`${__dirname}/${requestUrl.path}`));
+> });
+>
+> const authCode = await listenForAuthCode(authCodeUrl, authWindow); // see below
+>
+> const authResponse = await pca.acquireTokenByCode({
+> redirectUri: redirectUri,
+> scopes: tokenRequest.scopes,
+> code: authCode,
+> codeVerifier: pkceCodes.verifier // PKCE Code Verifier
+> });
+>
+> return authResponse;
+> }
+>
+> /**
+> * Listens for auth code response from Azure AD
+> * @param {string} navigateUrl: URL where auth code response is parsed
+> * @param {object} authWindow: Electron window object
+> */
+> async function listenForAuthCode(navigateUrl, authWindow) {
+>
+> authWindow.loadURL(navigateUrl);
+>
+> return new Promise((resolve, reject) => {
+> authWindow.webContents.on('will-redirect', (event, responseUrl) => {
+> try {
+> const parsedUrl = new URL(responseUrl);
+> const authCode = parsedUrl.searchParams.get('code');
+> resolve(authCode);
+> } catch (err) {
+> reject(err);
+> }
+> });
+> });
+> }
+> ```
+>
+> > | Where: | Description |
+> > |||
+> > | `authWindow` | Current Electron window in process. |
+> > | `tokenRequest` | Contains the scopes being requested, such as `"User.Read"` for Microsoft Graph or `"api://<Application ID>/access_as_user"` for custom web APIs. |
+>
+> ## Next steps
+>
+> To learn more about Electron desktop app development with MSAL Node, see the tutorial:
+>
+> > [!div class="nextstepaction"]
+> > [Tutorial: Sign in users and call the Microsoft Graph API in an Electron desktop app](tutorial-v2-nodejs-desktop.md)
active-directory Quickstart V2 Nodejs Webapp Msal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp-msal.md
# Quickstart: Sign in users and get an access token in a Node.js web app using the auth code flow
-In this quickstart, you download and run a code sample that demonstrates how a Node.js web app can sign in users by using the authorization code flow. The code sample also demonstrates how to get an access token to call Microsoft Graph API.
-See [How the sample works](#how-the-sample-works) for an illustration.
-
-This quickstart uses the Microsoft Authentication Library for Node.js (MSAL Node) with the authorization code flow.
-
-## Prerequisites
-
-* An Azure subscription. [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [Node.js](https://nodejs.org/en/download/)
-* [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
-
-#### Step 1: Configure the application in Azure portal
-For the code sample for this quickstart to work, you need to create a client secret and add the following reply URL: `http://localhost:3000/redirect`.
-> [!div class="nextstepaction"]
-> [Make this change for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download the project
-
-Run the project with a web server by using Node.js.
-
-> [!div class="nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/ms-identity-node/archive/main.zip)
-
-#### Step 3: Your app is configured and ready to run
-
-Run the project by using Node.js.
-
-1. To start the server, run the following commands from within the project directory:
-
- ```console
- npm install
- npm start
- ```
-
-1. Go to `http://localhost:3000/`.
-
-1. Select **Sign In** to start the sign-in process.
-
- The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, you will see a log message in the command line.
-
-## More information
-
-### How the sample works
-
-The sample hosts a web server on localhost, port 3000. When a web browser accesses this site, the sample immediately redirects the user to a Microsoft authentication page. Because of this, the sample does not contain any HTML or display elements. Authentication success displays the message "OK".
-
-### MSAL Node
-
-The MSAL Node library signs in users and requests the tokens that are used to access an API that's protected by Microsoft identity platform. You can download the latest version by using the Node.js Package Manager (npm):
-
-```console
-npm install @azure/msal-node
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Adding Auth to an existing web app - GitHub code sample >](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/auth-code)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Node.js web app that signs in users with MSAL Node](web-app-quickstart.md?pivots=devlang-nodejs-msal)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a Node.js web app can sign in users by using the authorization code flow. The code sample also demonstrates how to get an access token to call Microsoft Graph API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> This quickstart uses the Microsoft Authentication Library for Node.js (MSAL Node) with the authorization code flow.
+>
+> ## Prerequisites
+>
+> * An Azure subscription. [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+> #### Step 1: Configure the application in Azure portal
+> For the code sample for this quickstart to work, you need to create a client secret and add the following reply URL: `http://localhost:3000/redirect`.
+> > [!div class="nextstepaction"]
+> > [Make this change for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the project
+>
+> Run the project with a web server by using Node.js.
+>
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-node/archive/main.zip)
+>
+> #### Step 3: Your app is configured and ready to run
+>
+> Run the project by using Node.js.
+>
+> 1. To start the server, run the following commands from within the project directory:
+>
+> ```console
+> npm install
+> npm start
+> ```
+>
+> 1. Go to `http://localhost:3000/`.
+>
+> 1. Select **Sign In** to start the sign-in process.
+>
+> The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, you will see a log message in the command line.
+>
+> ## More information
+>
+> ### How the sample works
+>
+> The sample hosts a web server on localhost, port 3000. When a web browser accesses this site, the sample immediately redirects the user to a Microsoft authentication page. For this reason, the sample doesn't contain any HTML or display elements. On successful authentication, it displays the message "OK".
+>
+> ### MSAL Node
+>
+> The MSAL Node library signs in users and requests the tokens that are used to access an API protected by the Microsoft identity platform. You can download the latest version by using the Node.js Package Manager (npm):
+>
+> ```console
+> npm install @azure/msal-node
+> ```
+>
+> ## Next steps
+>
+> > [!div class="nextstepaction"]
+> > [Adding Auth to an existing web app - GitHub code sample >](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/auth-code)
active-directory Quickstart V2 Nodejs Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-nodejs-webapp.md
# Quickstart: Add sign in using OpenID Connect to a Node.js web app
-In this quickstart, you download and run a code sample that demonstrates how to set up OpenID Connect authentication in a web application built using Node.js with Express. The sample is designed to run on any platform.
-
-## Prerequisites
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- [Node.js](https://nodejs.org/en/download/).
-
-## Register your application
-
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
-1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
-1. Search for and select **Azure Active Directory**.
-1. Under **Manage**, select **App registrations** > **New registration**.
-1. Enter a **Name** for your application, for example `MyWebApp`. Users of your app might see this name, and you can change it later.
-1. In the **Supported account types** section, select **Accounts in any organizational directory and personal Microsoft accounts (e.g. Skype, Xbox, Outlook.com)**.
-
- If there are more than one redirect URIs, add these from the **Authentication** tab later after the app has been successfully created.
-
-1. Select **Register** to create the app.
-1. On the app's **Overview** page, find the **Application (client) ID** value and record it for later. You'll need this value to configure the application later in this project.
-1. Under **Manage**, select **Authentication**.
-1. Select **Add a platform** > **Web**.
-1. In the **Redirect URIs** section, enter `http://localhost:3000/auth/openid/return`.
-1. Enter a **Front-channel logout URL** `https://localhost:3000`.
-1. In the **Implicit grant and hybrid flows** section, select **ID tokens** as this sample requires the [Implicit grant flow](./v2-oauth2-implicit-grant-flow.md) to be enabled to sign-in the user.
-1. Select **Configure**.
-1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
-1. Enter a key description (for instance app secret).
-1. Select a key duration of either **In 1 year, In 2 years,** or **Never Expires**.
-1. Select **Add**. The key value will be displayed. Copy the key value and save it in a safe location for later use.
--
-## Download the sample application and modules
-
-Next, clone the sample repo and install the NPM modules.
-
-From your shell or command line:
-
-`$ git clone git@github.com:AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs.git`
-
-or
-
-`$ git clone https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs.git`
-
-From the project root directory, run the command:
-
-`$ npm install`
-
-## Configure the application
-
-Provide the parameters in `exports.creds` in config.js as instructed.
-
-* Update `<tenant_name>` in `exports.identityMetadata` with the Azure AD tenant name of the format \*.onmicrosoft.com.
-* Update `exports.clientID` with the Application ID noted from app registration.
-* Update `exports.clientSecret` with the Application secret noted from app registration.
-* Update `exports.redirectUrl` with the Redirect URI noted from app registration.
-
-**Optional configuration for production apps:**
-
-* Update `exports.destroySessionUrl` in config.js, if you want to use a different `post_logout_redirect_uri`.
-
-* Set `exports.useMongoDBSessionStore` in config.js to true, if you want to use [mongoDB](https://www.mongodb.com) or other [compatible session stores](https://github.com/expressjs/session#compatible-session-stores).
-The default session store in this sample is `express-session`. The default session store is not suitable for production.
-
-* Update `exports.databaseUri`, if you want to use mongoDB session store and a different database URI.
-
-* Update `exports.mongoDBSessionMaxAge`. Here you can specify how long you want to keep a session in mongoDB. The unit is second(s).
-
-## Build and run the application
-
-Start mongoDB service. If you are using mongoDB session store in this app, you have to [install mongoDB](http://www.mongodb.org/) and start the service first. If you are using the default session store, you can skip this step.
-
-Run the app using the following command from your command line.
-
-```
-$ node app.js
-```
-
-**Is the server output hard to understand?:** We use `bunyan` for logging in this sample. The console won't make much sense to you unless you also install bunyan and run the server like above but pipe it through the bunyan binary:
-
-```
-$ npm install -g bunyan
-
-$ node app.js | bunyan
-```
-
-### You're done!
-
-You will have a server successfully running on `http://localhost:3000`.
--
-## Next steps
-Learn more about the web app scenario that the Microsoft identity platform supports:
-> [!div class="nextstepaction"]
-> [Web app that signs in users scenario](scenario-web-app-sign-user-overview.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Add user sign-in to a Node.js web app built with the Express framework](web-app-quickstart.md?pivots=devlang-nodejs-passport)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how to set up OpenID Connect authentication in a web application built using Node.js with Express. The sample is designed to run on any platform.
+>
+> ## Prerequisites
+>
+> - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> - [Node.js](https://nodejs.org/en/download/).
+>
+> ## Register your application
+>
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+> 1. Search for and select **Azure Active Directory**.
+> 1. Under **Manage**, select **App registrations** > **New registration**.
+> 1. Enter a **Name** for your application, for example `MyWebApp`. Users of your app might see this name, and you can change it later.
+> 1. In the **Supported account types** section, select **Accounts in any organizational directory and personal Microsoft accounts (e.g. Skype, Xbox, Outlook.com)**.
+>
+>     If there is more than one redirect URI, add them from the **Authentication** tab after the app has been created.
+>
+> 1. Select **Register** to create the app.
+> 1. On the app's **Overview** page, find the **Application (client) ID** value and record it for later. You'll need this value to configure the application later in this project.
+> 1. Under **Manage**, select **Authentication**.
+> 1. Select **Add a platform** > **Web**.
+> 1. In the **Redirect URIs** section, enter `http://localhost:3000/auth/openid/return`.
+> 1. Enter a **Front-channel logout URL** `https://localhost:3000`.
+> 1. In the **Implicit grant and hybrid flows** section, select **ID tokens**, because this sample requires the [implicit grant flow](./v2-oauth2-implicit-grant-flow.md) to be enabled to sign in the user.
+> 1. Select **Configure**.
+> 1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
+> 1. Enter a key description (for instance, *app secret*).
+> 1. Select a key duration of either **In 1 year, In 2 years,** or **Never Expires**.
+> 1. Select **Add**. The key value will be displayed. Copy the key value and save it in a safe location for later use.
+>
+>
+> ## Download the sample application and modules
+>
+> Next, clone the sample repo and install the NPM modules.
+>
+> From your shell or command line:
+>
+> `$ git clone git@github.com:AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs.git`
+>
+> or
+>
+> `$ git clone https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs.git`
+>
+> From the project root directory, run the command:
+>
+> `$ npm install`
+>
+> ## Configure the application
+>
+> Provide the parameters in `exports.creds` in config.js as instructed.
+>
+> * Update `<tenant_name>` in `exports.identityMetadata` with the Azure AD tenant name of the format \*.onmicrosoft.com.
+> * Update `exports.clientID` with the Application ID noted from app registration.
+> * Update `exports.clientSecret` with the Application secret noted from app registration.
+> * Update `exports.redirectUrl` with the Redirect URI noted from app registration.
+>
+> **Optional configuration for production apps:**
+>
+> * Update `exports.destroySessionUrl` in config.js, if you want to use a different `post_logout_redirect_uri`.
+>
+> * Set `exports.useMongoDBSessionStore` in config.js to true, if you want to use [mongoDB](https://www.mongodb.com) or other [compatible session stores](https://github.com/expressjs/session#compatible-session-stores).
+> The default session store in this sample is `express-session`. The default session store is not suitable for production.
+>
+> * Update `exports.databaseUri`, if you want to use mongoDB session store and a different database URI.
+>
+> * Update `exports.mongoDBSessionMaxAge`. Here you can specify how long you want to keep a session in mongoDB. The unit is seconds.
+>
+> ## Build and run the application
+>
+> Start the mongoDB service. If you're using the mongoDB session store in this app, you must [install mongoDB](http://www.mongodb.org/) and start the service first. If you're using the default session store, you can skip this step.
+>
+> Run the app using the following command from your command line.
+>
+> ```
+> $ node app.js
+> ```
+>
+> **Is the server output hard to understand?** This sample uses `bunyan` for logging. The console output won't be easy to read unless you also install bunyan globally and pipe the server output through the bunyan binary:
+>
+> ```
+> $ npm install -g bunyan
+>
+> $ node app.js | bunyan
+> ```
+>
+> ### You're done!
+>
+> You now have a server running on `http://localhost:3000`.
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+> Learn more about the web app scenario that the Microsoft identity platform supports:
+> > [!div class="nextstepaction"]
+> > [Web app that signs in users scenario](scenario-web-app-sign-user-overview.md)
active-directory Quickstart V2 Python Daemon https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-daemon.md
# Quickstart: Acquire a token and call Microsoft Graph API from a Python console app using app's identity
-In this quickstart, you download and run a code sample that demonstrates how a Python application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
-
-## Prerequisites
-
-To run this sample, you need:
-
-- [Python 2.7+](https://www.python.org/downloads/release/python-2713) or [Python 3+](https://www.python.org/downloads/release/python-364/)
-- [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
-
-> [!div class="sxs-lookup"]
-### Download and configure the quickstart app
-
-#### Step 1: Configure your application in Azure portal
-For the code sample in this quickstart to work, create a client secret and add Graph API's **User.Read.All** application permission.
-> [!div class="nextstepaction"]
-> [Make these changes for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download the Python project
-
-> [!div class="sxs-lookup nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/ms-identity-python-daemon/archive/master.zip)
-
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-#### Step 3: Admin consent
-
-If you try to run the application at this point, you'll receive *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires Admin consent: a global administrator of your directory must give consent to your application. Select one of the options below depending on your role:
-
-##### Global tenant administrator
-
-If you are a global administrator, go to **API Permissions** page select **Grant admin consent for Enter_the_Tenant_Name_Here**.
-> [!div id="apipermissionspage"]
-> [Go to the API Permissions page]()
-
-##### Standard user
-
-If you're a standard user of your tenant, ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
-
-```url
-https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
-```
--
-#### Step 4: Run the application
-
-You'll need to install the dependencies of this sample once.
-
-```console
-pip install -r requirements.txt
-```
-
-Then, run the application via command prompt or console:
-
-```console
-python confidential_client_secret_sample.py parameters.json
-```
-
-You should see on the console output some Json fragment representing a list of users in your Azure AD directory.
-
-> [!IMPORTANT]
-> This quickstart application uses a client secret to identify itself as confidential client. Because the client secret is added as a plain-text to your project files, for security reasons, it is recommended that you use a certificate instead of a client secret before considering the application as production application. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-python-daemon/blob/master/2-Call-MsGraph-WithCertificate/README.md) in the same GitHub repository for this sample, but in the second folder **2-Call-MsGraph-WithCertificate**.
-
-## More information
-
-### MSAL Python
-
-[MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. As described, this quickstart requests tokens by using the application own identity instead of delegated permissions. The authentication flow used in this case is known as *[client credentials oauth flow](v2-oauth2-client-creds-grant-flow.md)*. For more information on how to use MSAL Python with daemon apps, see [this article](scenario-daemon-overview.md).
-
- You can install MSAL Python by running the following pip command.
-
-```powershell
-pip install msal
-```
-
-### MSAL initialization
-
-You can add the reference for MSAL by adding the following code:
-
-```Python
-import msal
-```
-
-Then, initialize MSAL using the following code:
-
-```Python
-app = msal.ConfidentialClientApplication(
- config["client_id"], authority=config["authority"],
- client_credential=config["secret"])
-```
-
-> | Where: |Description |
-> |||
-> | `config["secret"]` | Is the client secret created for the application in Azure portal. |
-> | `config["client_id"]` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
-> | `config["authority"]` | The STS endpoint for user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for public cloud, where {tenant} is the name of your tenant or your tenant Id.|
-
-For more information, please see the [reference documentation for `ConfidentialClientApplication`](https://msal-python.readthedocs.io/en/latest/#confidentialclientapplication).
-
-### Requesting tokens
-
-To request a token using app's identity, use `AcquireTokenForClient` method:
-
-```Python
-result = None
-result = app.acquire_token_silent(config["scope"], account=None)
-
-if not result:
- logging.info("No suitable token exists in cache. Let's get a new one from AAD.")
- result = app.acquire_token_for_client(scopes=config["scope"])
-```
-
-> |Where:| Description |
-> |||
-> | `config["scope"]` | Contains the scopes requested. For confidential clients, this should use the format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
-
-For more information, please see the [reference documentation for `AcquireTokenForClient`](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client).
--
-## Next steps
-
-To learn more about daemon applications, see the scenario landing page.
-
-> [!div class="nextstepaction"]
-> [Daemon application that calls web APIs](scenario-daemon-overview.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Python console app that calls an API](console-app-quickstart.md?pivots=devlang-python)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a Python application can get an access token using the app's identity to call the Microsoft Graph API and display a [list of users](/graph/api/user-list) in the directory. The code sample demonstrates how an unattended job or Windows service can run with an application identity, instead of a user's identity.
+>
+> ## Prerequisites
+>
+> To run this sample, you need:
+>
+> - [Python 2.7+](https://www.python.org/downloads/release/python-2713) or [Python 3+](https://www.python.org/downloads/release/python-364/)
+> - [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
+>
+> > [!div class="sxs-lookup"]
+> ### Download and configure the quickstart app
+>
+> #### Step 1: Configure your application in Azure portal
+> For the code sample in this quickstart to work, create a client secret and add Graph API's **User.Read.All** application permission.
+> > [!div class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-netcore-daemon/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the Python project
+>
+> > [!div class="sxs-lookup nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-python-daemon/archive/master.zip)
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Admin consent
+>
+> If you try to run the application at this point, you'll receive an *HTTP 403 - Forbidden* error: `Insufficient privileges to complete the operation`. This error happens because any *app-only permission* requires admin consent: a global administrator of your directory must consent to your application. Select one of the following options, depending on your role:
+>
+> ##### Global tenant administrator
+>
+> If you're a global administrator, go to the **API Permissions** page and select **Grant admin consent for Enter_the_Tenant_Name_Here**.
+> > [!div id="apipermissionspage"]
+> > [Go to the API Permissions page]()
+>
+> ##### Standard user
+>
+> If you're a standard user of your tenant, ask a global administrator to grant admin consent for your application. To do this, give the following URL to your administrator:
+>
+> ```url
+> https://login.microsoftonline.com/Enter_the_Tenant_Id_Here/adminconsent?client_id=Enter_the_Application_Id_Here
+> ```
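As an illustration (not part of the sample), the consent URL above is just the tenant's `/adminconsent` endpoint with the app's client ID appended. A minimal Python sketch, using the same placeholder values as the URL above:

```python
# Sketch: build the admin consent URL. Both arguments here are placeholders,
# exactly as in the URL above; substitute your own tenant ID and client ID.
def admin_consent_url(tenant_id: str, client_id: str) -> str:
    return (
        f"https://login.microsoftonline.com/{tenant_id}"
        f"/adminconsent?client_id={client_id}"
    )

print(admin_consent_url("Enter_the_Tenant_Id_Here", "Enter_the_Application_Id_Here"))
```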
+>
+>
+> #### Step 4: Run the application
+>
+> You'll need to install the dependencies of this sample once.
+>
+> ```console
+> pip install -r requirements.txt
+> ```
+>
+> Then, run the application via command prompt or console:
+>
+> ```console
+> python confidential_client_secret_sample.py parameters.json
+> ```
+>
+> You should see a JSON fragment in the console output representing a list of users in your Azure AD directory.
+>
+> > [!IMPORTANT]
+> > This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons we recommend that you use a certificate instead of a client secret before using the application in production. For more information on how to use a certificate, see [these instructions](https://github.com/Azure-Samples/ms-identity-python-daemon/blob/master/2-Call-MsGraph-WithCertificate/README.md) in the same GitHub repository for this sample, in the second folder, **2-Call-MsGraph-WithCertificate**.
+>
+> ## More information
+>
+> ### MSAL Python
+>
+> [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python) is the library used to sign in users and request tokens used to access an API protected by the Microsoft identity platform. As described, this quickstart requests tokens by using the application's own identity instead of delegated permissions. The authentication flow used in this case is known as the *[client credentials OAuth flow](v2-oauth2-client-creds-grant-flow.md)*. For more information on how to use MSAL Python with daemon apps, see [this article](scenario-daemon-overview.md).
+>
+> You can install MSAL Python by running the following pip command.
+>
+> ```powershell
+> pip install msal
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```Python
+> import msal
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```Python
+> app = msal.ConfidentialClientApplication(
+> config["client_id"], authority=config["authority"],
+> client_credential=config["secret"])
+> ```
+>
+> > | Where | Description |
+> > |--|--|
+> > | `config["secret"]` | The client secret created for the application in the Azure portal. |
+> > | `config["client_id"]` | The **Application (client) ID** of the application registered in the Azure portal. You can find this value on the app's **Overview** page in the Azure portal. |
+> > | `config["authority"]` | The STS endpoint for the user to authenticate. Usually `https://login.microsoftonline.com/{tenant}` for the public cloud, where {tenant} is the name of your tenant or your tenant ID.|
+>
+> For more information, see the [reference documentation for `ConfidentialClientApplication`](https://msal-python.readthedocs.io/en/latest/#confidentialclientapplication).
+>
+> ### Requesting tokens
+>
+> To request a token by using the app's own identity, use the `acquire_token_for_client` method:
+>
+> ```Python
+> result = None
+> result = app.acquire_token_silent(config["scope"], account=None)
+>
+> if not result:
+> logging.info("No suitable token exists in cache. Let's get a new one from AAD.")
+> result = app.acquire_token_for_client(scopes=config["scope"])
+> ```
+>
+> > | Where | Description |
+> > |--|--|
+> > | `config["scope"]` | Contains the scopes requested. For confidential clients, this should use a format similar to `{Application ID URI}/.default` to indicate that the scopes being requested are the ones statically defined in the app object set in the Azure portal (for Microsoft Graph, `{Application ID URI}` points to `https://graph.microsoft.com`). For custom web APIs, `{Application ID URI}` is defined under the **Expose an API** section in **App registrations** in the Azure portal.|
+>
+> For more information, see the [reference documentation for `acquire_token_for_client`](https://msal-python.readthedocs.io/en/latest/#msal.ConfidentialClientApplication.acquire_token_for_client).
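To make the configuration values described above concrete, here is a minimal sketch of typical `authority` and `scope` values for this flow. The tenant name is a hypothetical placeholder; the Graph URI is Microsoft Graph's Application ID URI:

```python
# Sketch: typical configuration values for the client credentials flow.
# "contoso.onmicrosoft.com" is a hypothetical tenant name.
tenant = "contoso.onmicrosoft.com"
authority = f"https://login.microsoftonline.com/{tenant}"

# ".default" requests the application permissions statically granted in the portal.
scope = ["https://graph.microsoft.com/.default"]

print(authority)  # https://login.microsoftonline.com/contoso.onmicrosoft.com
print(scope)
```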
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> To learn more about daemon applications, see the scenario landing page.
+>
+> > [!div class="nextstepaction"]
+> > [Daemon application that calls web APIs](scenario-daemon-overview.md)
active-directory Quickstart V2 Python Webapp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-python-webapp.md
# Quickstart: Add sign-in with Microsoft to a Python web app
-In this quickstart, you download and run a code sample that demonstrates how a Python web application can sign in users and get an access token to call the Microsoft Graph API. Users with a personal Microsoft Account or an account in any Azure Active Directory (Azure AD) organization can sign into the application.
-See [How the sample works](#how-the-sample-works) for an illustration.
-
-## Prerequisites
-
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- [Python 2.7+](https://www.python.org/downloads/release/python-2713) or [Python 3+](https://www.python.org/downloads/release/python-364/)
-- [Flask](http://flask.pocoo.org/), [Flask-Session](https://pypi.org/project/Flask-Session/), [requests](https://requests.kennethreitz.org/en/master/)
-- [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
-
-#### Step 1: Configure your application in Azure portal
-
-For the code sample in this quickstart to work:
-
-1. Add a reply URL as `http://localhost:5000/getAToken`.
-1. Create a Client Secret.
-1. Add Microsoft Graph API's User.ReadBasic.All delegated permission.
-
-> [!div class="nextstepaction"]
-> [Make these changes for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](./media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute
-
-#### Step 2: Download your project
-
-Download the project and extract the zip file to a local folder closer to the root folder - for example, **C:\Azure-Samples**
-> [!div class="nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/ms-identity-python-webapp/archive/master.zip)
-
-> [!NOTE]
-> `Enter_the_Supported_Account_Info_Here`
-
-#### Step 3: Run the code sample
-
-1. You will need to install MSAL Python library, Flask framework, Flask-Sessions for server-side session management and requests using pip as follows:
-
- ```shell
- pip install -r requirements.txt
- ```
-
-2. Run `app.py` from shell or command line:
-
- ```shell
- python app.py
- ```
-
- > [!IMPORTANT]
- > This quickstart application uses a client secret to identify itself as confidential client. Because the client secret is added as a plain-text to your project files, for security reasons, it is recommended that you use a certificate instead of a client secret before considering the application as production application. For more information on how to use a certificate, see [these instructions](./active-directory-certificate-credentials.md).
-
-## More information
-
-### How the sample works
-![Shows how the sample app generated by this quickstart works](media/quickstart-v2-python-webapp/python-quickstart.svg)
-
-### Getting MSAL
-MSAL is the library used to sign in users and request tokens used to access an API protected by the Microsoft identity Platform.
-You can add MSAL Python to your application using Pip.
-
-```Shell
-pip install msal
-```
-
-### MSAL initialization
-You can add the reference to MSAL Python by adding the following code to the top of the file where you will be using MSAL:
-
-```Python
-import msal
-```
--
-## Next steps
-
-Learn more about web apps that sign in users in our multi-part scenario series.
-
-> [!div class="nextstepaction"]
-> [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Python web app with user sign-in](web-app-quickstart.md?pivots=devlang-python)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a Python web application can sign in users and get an access token to call the Microsoft Graph API. Users with a personal Microsoft Account or an account in any Azure Active Directory (Azure AD) organization can sign into the application.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> ## Prerequisites
+>
+> - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> - [Python 2.7+](https://www.python.org/downloads/release/python-2713) or [Python 3+](https://www.python.org/downloads/release/python-364/)
+> - [Flask](http://flask.pocoo.org/), [Flask-Session](https://pypi.org/project/Flask-Session/), [requests](https://requests.kennethreitz.org/en/master/)
+> - [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
+>
+> #### Step 1: Configure your application in Azure portal
+>
+> For the code sample in this quickstart to work:
+>
+> 1. Add a reply URL as `http://localhost:5000/getAToken`.
+> 1. Create a Client Secret.
+> 1. Add Microsoft Graph API's User.ReadBasic.All delegated permission.
+>
+> > [!div class="nextstepaction"]
+> > [Make these changes for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](./media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute
+>
+> #### Step 2: Download your project
+>
> Download the project and extract the zip file into a local folder close to the root folder - for example, **C:\Azure-Samples**
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/ms-identity-python-webapp/archive/master.zip)
+>
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Run the code sample
+>
> 1. Install the MSAL Python library, the Flask framework, Flask-Session for server-side session management, and the requests library by using pip:
+>
+> ```shell
+> pip install -r requirements.txt
+> ```
+>
+> 2. Run `app.py` from shell or command line:
+>
+> ```shell
+> python app.py
+> ```
+>
+> > [!IMPORTANT]
> > This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons we recommend that you use a certificate instead of a client secret before considering the application production-ready. For more information on how to use a certificate, see [these instructions](./active-directory-certificate-credentials.md).
+>
+> ## More information
+>
+> ### How the sample works
+> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-python-webapp/python-quickstart.svg)
+>
+> ### Getting MSAL
> MSAL is the library used to sign in users and request tokens used to access an API protected by the Microsoft identity platform.
+> You can add MSAL Python to your application using Pip.
+>
+> ```Shell
+> pip install msal
+> ```
+>
+> ### MSAL initialization
+> You can add the reference to MSAL Python by adding the following code to the top of the file where you will be using MSAL:
+>
+> ```Python
+> import msal
+> ```
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> Learn more about web apps that sign in users in our multi-part scenario series.
+>
+> > [!div class="nextstepaction"]
+> > [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md)
active-directory Quickstart V2 Uwp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-uwp.md
# Quickstart: Call the Microsoft Graph API from a Universal Windows Platform (UWP) application
-In this quickstart, you download and run a code sample that demonstrates how a Universal Windows Platform (UWP) application can sign in users and get an access token to call the Microsoft Graph API.
-See [How the sample works](#how-the-sample-works) for an illustration.
--
-## Prerequisites
-
-* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
-
-#### Step 1: Configure the application
-For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient`.
-> [!div class="nextstepaction"]
-> [Make this change for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-uwp/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download the Visual Studio project
-
-Run the project using Visual Studio 2019.
-> [!div class="nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnet-native-uwp-v2/archive/msal3x.zip)
---
-#### Step 3: Your app is configured and ready to run
-We have configured your project with values of your app's properties and it's ready to run.
-#### Step 4: Run the application
-
-To run the sample application on your local machine:
-
-1. In the Visual Studio toolbar, choose the right platform (probably **x64** or **x86**, not ARM). The target device should change from *Device* to *Local Machine*.
-1. Select **Debug** > **Start Without Debugging**.
-
- If you're prompted to do so, you might first need to enable **Developer Mode**, and then **Start Without Debugging** again to launch the app.
-
-When the app's window appears, you can select the **Call Microsoft Graph API** button, enter your credentials, and consent to the permissions requested by the application. If successful, the application displays some token information and data obtained from the call to the Microsoft Graph API.
-
-## How the sample works
-
-![Shows how the sample app generated by this quickstart works](media/quickstart-v2-uwp/uwp-intro.svg)
-
-### MSAL.NET
-
-MSAL ([Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)) is the library used to sign in users and request security tokens. The security tokens are used to access an API protected by the Microsoft Identity platform. You can install MSAL by running the following command in Visual Studio's *Package Manager Console*:
-
-```powershell
-Install-Package Microsoft.Identity.Client
-```
-
-### MSAL initialization
-
-You can add the reference for MSAL by adding the following code:
-
-```csharp
-using Microsoft.Identity.Client;
-```
-
-Then, MSAL is initialized using the following code:
-
-```csharp
-public static IPublicClientApplication PublicClientApp;
-PublicClientApp = PublicClientApplicationBuilder.Create(ClientId)
- .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
- .Build();
-```
-
-The value of `ClientId` is the **Application (client) ID** of the app you registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal.
-
-### Requesting tokens
-
-MSAL has two methods for acquiring tokens in a UWP app: `AcquireTokenInteractive` and `AcquireTokenSilent`.
-
-#### Get a user token interactively
-
-Some situations require forcing users to interact with the Microsoft identity platform through a pop-up window to either validate their credentials or to give consent. Some examples include:
-
-- The first-time users sign in to the application
-- When users may need to reenter their credentials because the password has expired
-- When your application is requesting access to a resource, that the user needs to consent to
-- When two factor authentication is required
-
-```csharp
-authResult = await App.PublicClientApp.AcquireTokenInteractive(scopes)
- .ExecuteAsync();
-```
-
-The `scopes` parameter contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs.
-
-#### Get a user token silently
-
-Use the `AcquireTokenSilent` method to obtain tokens to access protected resources after the initial `AcquireTokenInteractive` method. You don't want to require the user to validate their credentials every time they need to access a resource. Most of the time you want token acquisitions and renewal without any user interaction
-
-```csharp
-var accounts = await App.PublicClientApp.GetAccountsAsync();
-var firstAccount = accounts.FirstOrDefault();
-authResult = await App.PublicClientApp.AcquireTokenSilent(scopes, firstAccount)
- .ExecuteAsync();
-```
-
-* `scopes` contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs.
-* `firstAccount` specifies the first user account in the cache (MSAL supports multiple users in a single app).
--
-## Next steps
-
-Try out the Windows desktop tutorial for a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart.
-
-> [!div class="nextstepaction"]
-> [UWP - Call Graph API tutorial](tutorial-v2-windows-uwp.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Universal Windows Platform (UWP) desktop app with user sign-in](desktop-app-quickstart.md?pivots=devlang-uwp)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a Universal Windows Platform (UWP) application can sign in users and get an access token to call the Microsoft Graph API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+>
+> ## Prerequisites
+>
+> * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> * [Visual Studio 2019](https://visualstudio.microsoft.com/vs/)
+>
+> #### Step 1: Configure the application
+> For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient`.
+> > [!div class="nextstepaction"]
+> > [Make this change for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-uwp/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the Visual Studio project
+>
+> Run the project using Visual Studio 2019.
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnet-native-uwp-v2/archive/msal3x.zip)
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+>
+> #### Step 3: Your app is configured and ready to run
> We've configured your project with the values of your app's properties, and it's ready to run.
+> #### Step 4: Run the application
+>
+> To run the sample application on your local machine:
+>
+> 1. In the Visual Studio toolbar, choose the right platform (probably **x64** or **x86**, not ARM). The target device should change from *Device* to *Local Machine*.
+> 1. Select **Debug** > **Start Without Debugging**.
+>
+> If you're prompted to do so, you might first need to enable **Developer Mode**, and then **Start Without Debugging** again to launch the app.
+>
+> When the app's window appears, you can select the **Call Microsoft Graph API** button, enter your credentials, and consent to the permissions requested by the application. If successful, the application displays some token information and data obtained from the call to the Microsoft Graph API.
+>
+> ## How the sample works
+>
+> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-uwp/uwp-intro.svg)
+>
+> ### MSAL.NET
+>
> MSAL ([Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)) is the library used to sign in users and request security tokens. The security tokens are used to access an API protected by the Microsoft identity platform. You can install MSAL by running the following command in Visual Studio's *Package Manager Console*:
+>
+> ```powershell
+> Install-Package Microsoft.Identity.Client
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```csharp
+> using Microsoft.Identity.Client;
+> ```
+>
+> Then, MSAL is initialized using the following code:
+>
+> ```csharp
+> public static IPublicClientApplication PublicClientApp;
+> PublicClientApp = PublicClientApplicationBuilder.Create(ClientId)
> .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
+> .Build();
+> ```
+>
+> The value of `ClientId` is the **Application (client) ID** of the app you registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal.
+>
+> ### Requesting tokens
+>
+> MSAL has two methods for acquiring tokens in a UWP app: `AcquireTokenInteractive` and `AcquireTokenSilent`.
+>
+> #### Get a user token interactively
+>
+> Some situations require forcing users to interact with the Microsoft identity platform through a pop-up window to either validate their credentials or to give consent. Some examples include:
+>
+> - The first-time users sign in to the application
+> - When users may need to reenter their credentials because the password has expired
> - When your application is requesting access to a resource that the user needs to consent to
+> - When two factor authentication is required
+>
+> ```csharp
+> authResult = await App.PublicClientApp.AcquireTokenInteractive(scopes)
+> .ExecuteAsync();
+> ```
+>
+> The `scopes` parameter contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs.
+>
+> #### Get a user token silently
+>
> Use the `AcquireTokenSilent` method to obtain tokens to access protected resources after the initial `AcquireTokenInteractive` method. You don't want to require the user to validate their credentials every time they need to access a resource. Most of the time, you want token acquisition and renewal without any user interaction.
+>
+> ```csharp
+> var accounts = await App.PublicClientApp.GetAccountsAsync();
+> var firstAccount = accounts.FirstOrDefault();
+> authResult = await App.PublicClientApp.AcquireTokenSilent(scopes, firstAccount)
+> .ExecuteAsync();
+> ```
+>
+> * `scopes` contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs.
+> * `firstAccount` specifies the first user account in the cache (MSAL supports multiple users in a single app).
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> Try out the Windows desktop tutorial for a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart.
+>
+> > [!div class="nextstepaction"]
+> > [UWP - Call Graph API tutorial](tutorial-v2-windows-uwp.md)
active-directory Quickstart V2 Windows Desktop https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/quickstart-v2-windows-desktop.md
# Quickstart: Acquire a token and call Microsoft Graph API from a Windows desktop app
-In this quickstart, you download and run a code sample that demonstrates how a Windows Presentation Foundation (WPF) application can sign in users and get an access token to call the Microsoft Graph API.
-
-See [How the sample works](#how-the-sample-works) for an illustration.
--
-#### Step 1: Configure your application in Azure portal
-For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient` and `ms-appx-web://microsoft.aad.brokerplugin/{client_id}`.
-> [!div class="nextstepaction"]
-> [Make this change for me]()
-
-> [!div class="alert alert-info"]
-> ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
-
-#### Step 2: Download your Visual Studio project
-
-Run the project using Visual Studio 2019.
-> [!div class="nextstepaction"]
-> [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2/archive/msal3x.zip)
--
-#### Step 3: Your app is configured and ready to run
-We have configured your project with values of your app's properties and it's ready to run.
-
-> [!div class="sxs-lookup"]
-> > [!NOTE]
-> > `Enter_the_Supported_Account_Info_Here`
-
-## More information
-
-### How the sample works
-![Shows how the sample app generated by this quickstart works](media/quickstart-v2-windows-desktop/windesktop-intro.svg)
-
-### MSAL.NET
-MSAL ([Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. You can install MSAL by running the following command in Visual Studio's **Package Manager Console**:
-
-```powershell
-Install-Package Microsoft.Identity.Client -IncludePrerelease
-```
-
-### MSAL initialization
-
-You can add the reference for MSAL by adding the following code:
-
-```csharp
-using Microsoft.Identity.Client;
-```
-
-Then, initialize MSAL using the following code:
-
-```csharp
-IPublicClientApplication publicClientApp = PublicClientApplicationBuilder.Create(ClientId)
- .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
- .WithAuthority(AzureCloudInstance.AzurePublic, Tenant)
- .Build();
-```
-
-|Where: | Description |
-|||
-| `ClientId` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
-
-### Requesting tokens
-
-MSAL has two methods for acquiring tokens: `AcquireTokenInteractive` and `AcquireTokenSilent`.
-
-#### Get a user token interactively
-
-Some situations require forcing users interact with the Microsoft identity platform through a pop-up window to either validate their credentials or to give consent. Some examples include:
-
-- The first time users sign in to the application
-- When users may need to reenter their credentials because the password has expired
-- When your application is requesting access to a resource that the user needs to consent to
-- When two factor authentication is required
-
-```csharp
-authResult = await App.PublicClientApp.AcquireTokenInteractive(_scopes)
- .ExecuteAsync();
-```
-
-|Where:| Description |
-|||
-| `_scopes` | Contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs. |
-
-#### Get a user token silently
-
-You don't want to require the user to validate their credentials every time they need to access a resource. Most of the time you want token acquisitions and renewal without any user interaction. You can use the `AcquireTokenSilent` method to obtain tokens to access protected resources after the initial `AcquireTokenInteractive` method:
-
-```csharp
-var accounts = await App.PublicClientApp.GetAccountsAsync();
-var firstAccount = accounts.FirstOrDefault();
-authResult = await App.PublicClientApp.AcquireTokenSilent(scopes, firstAccount)
- .ExecuteAsync();
-```
-
-|Where: | Description |
-|||
-| `scopes` | Contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs. |
-| `firstAccount` | Specifies the first user in the cache (MSAL support multiple users in a single app). |
--
-## Next steps
-
-Try out the Windows desktop tutorial for a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart.
-
-> [!div class="nextstepaction"]
-> [Call Graph API tutorial](./tutorial-v2-windows-desktop.md)
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Windows Presentation Foundation (WPF) desktop app that signs in users and calls a web API](desktop-app-quickstart.md?pivots=devlang-windows-desktop)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" class="sxs-lookup"]
+> In this quickstart, you download and run a code sample that demonstrates how a Windows Presentation Foundation (WPF) application can sign in users and get an access token to call the Microsoft Graph API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+>
+> #### Step 1: Configure your application in Azure portal
+> For the code sample in this quickstart to work, add a **Redirect URI** of `https://login.microsoftonline.com/common/oauth2/nativeclient` and `ms-appx-web://microsoft.aad.brokerplugin/{client_id}`.
+> > [!div class="nextstepaction"]
+> > [Make this change for me]()
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download your Visual Studio project
+>
+> Run the project using Visual Studio 2019.
+> > [!div class="nextstepaction"]
+> > [Download the code sample](https://github.com/Azure-Samples/active-directory-dotnet-desktop-msgraph-v2/archive/msal3x.zip)
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+> #### Step 3: Your app is configured and ready to run
> We've configured your project with the values of your app's properties, and it's ready to run.
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> ## More information
+>
+> ### How the sample works
+> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-windows-desktop/windesktop-intro.svg)
+>
+> ### MSAL.NET
> MSAL ([Microsoft.Identity.Client](https://www.nuget.org/packages/Microsoft.Identity.Client)) is the library used to sign in users and request tokens used to access an API protected by the Microsoft identity platform. You can install MSAL by running the following command in Visual Studio's **Package Manager Console**:
+>
+> ```powershell
+> Install-Package Microsoft.Identity.Client -IncludePrerelease
+> ```
+>
+> ### MSAL initialization
+>
+> You can add the reference for MSAL by adding the following code:
+>
+> ```csharp
+> using Microsoft.Identity.Client;
+> ```
+>
+> Then, initialize MSAL using the following code:
+>
+> ```csharp
+> IPublicClientApplication publicClientApp = PublicClientApplicationBuilder.Create(ClientId)
+> .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
+> .WithAuthority(AzureCloudInstance.AzurePublic, Tenant)
+> .Build();
+> ```
+>
+> |Where: | Description |
+> |||
+> | `ClientId` | Is the **Application (client) ID** for the application registered in the Azure portal. You can find this value in the app's **Overview** page in the Azure portal. |
+>
+> ### Requesting tokens
+>
+> MSAL has two methods for acquiring tokens: `AcquireTokenInteractive` and `AcquireTokenSilent`.
+>
+> #### Get a user token interactively
+>
> Some situations require forcing users to interact with the Microsoft identity platform through a pop-up window to either validate their credentials or to give consent. Some examples include:
+>
+> - The first time users sign in to the application
+> - When users may need to reenter their credentials because the password has expired
+> - When your application is requesting access to a resource that the user needs to consent to
+> - When two factor authentication is required
+>
+> ```csharp
+> authResult = await App.PublicClientApp.AcquireTokenInteractive(_scopes)
+> .ExecuteAsync();
+> ```
+>
+> |Where:| Description |
+> |||
+> | `_scopes` | Contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs. |
+>
+> #### Get a user token silently
+>
+> You don't want to require the user to validate their credentials every time they need to access a resource. Most of the time you want token acquisitions and renewal without any user interaction. You can use the `AcquireTokenSilent` method to obtain tokens to access protected resources after the initial `AcquireTokenInteractive` method:
+>
+> ```csharp
+> var accounts = await App.PublicClientApp.GetAccountsAsync();
+> var firstAccount = accounts.FirstOrDefault();
+> authResult = await App.PublicClientApp.AcquireTokenSilent(scopes, firstAccount)
+> .ExecuteAsync();
+> ```
+>
+> |Where: | Description |
+> |||
+> | `scopes` | Contains the scopes being requested, such as `{ "user.read" }` for Microsoft Graph or `{ "api://<Application ID>/access_as_user" }` for custom web APIs. |
> | `firstAccount` | Specifies the first user in the cache (MSAL supports multiple users in a single app). |
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> Try out the Windows desktop tutorial for a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart.
+>
+> > [!div class="nextstepaction"]
+> > [Call Graph API tutorial](./tutorial-v2-windows-desktop.md)
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/reference-aadsts-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS50015 | ViralUserLegalAgeConsentRequiredState - The user requires legal age group consent. |
| AADSTS50017 | CertificateValidationFailed - Certificate validation failed for one of the following reasons:<ul><li>Cannot find issuing certificate in trusted certificates list</li><li>Unable to find expected CrlSegment</li><li>Cannot find issuing certificate in trusted certificates list</li><li>Delta CRL distribution point is configured without a corresponding CRL distribution point</li><li>Unable to retrieve valid CRL segments because of a timeout issue</li><li>Unable to download CRL</li></ul>Contact the tenant admin. |
| AADSTS50020 | UserUnauthorized - Users are unauthorized to call this endpoint. |
+| AADSTS500212 | NotAllowedByOutboundPolicyTenant - The user's administrator has set an outbound access policy that does not allow access to the resource tenant. |
+| AADSTS500213 | NotAllowedByInboundPolicyTenant - The resource tenant's cross-tenant access policy does not allow this user to access this tenant. |
| AADSTS50027 | InvalidJwtToken - Invalid JWT token because of the following reasons:<ul><li>doesn't contain nonce claim, sub claim</li><li>subject identifier mismatch</li><li>duplicate claim in idToken claims</li><li>unexpected issuer</li><li>unexpected audience</li><li>not within its valid time range</li><li>token format is not proper</li><li>External ID token from issuer failed signature verification.</li></ul> |
| AADSTS50029 | Invalid URI - domain name contains invalid characters. Contact the tenant admin. |
| AADSTS50032 | WeakRsaKey - Indicates the erroneous user attempt to use a weak RSA key. |
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
Previously updated : 08/30/2021 Last updated : 02/02/2022
The OAuth 2.0 authorization code grant can be used in apps that are installed on a device to gain access to protected resources, such as web APIs. Using the Microsoft identity platform implementation of OAuth 2.0 and Open ID Connect (OIDC), you can add sign in and API access to your mobile and desktop apps.
-This article describes how to program directly against the protocol in your application using any language. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
+This article describes how to program directly against the protocol in your application using any language. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). For more information, look at [sample apps that use MSAL](sample-v2-code.md).
-The OAuth 2.0 authorization code flow is described in [section 4.1 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). With OIDC, it's used to perform authentication and authorization in the majority of app types, including [single page apps](v2-app-types.md#single-page-apps-javascript), [web apps](v2-app-types.md#web-apps), and [natively installed apps](v2-app-types.md#mobile-and-native-apps). The flow enables apps to securely acquire access_tokens that can be used to access resources secured by the Microsoft identity platform, as well as refresh tokens to get additional access_tokens, and ID tokens for the signed in user.
+The OAuth 2.0 authorization code flow is described in [section 4.1 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). With OIDC, this flow does authentication and authorization for most app types. These types include [single page apps](v2-app-types.md#single-page-apps-javascript), [web apps](v2-app-types.md#web-apps), and [natively installed apps](v2-app-types.md#mobile-and-native-apps). The flow enables apps to securely acquire an `access_token` that can be used to access resources secured by the Microsoft identity platform. Apps can refresh tokens to get other access tokens and ID tokens for the signed in user.
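+As a rough illustration of the first leg of this flow, the sketch below builds an `/authorize` request URL using only Python's standard library. The client ID, redirect URI, scopes, and state shown are placeholder values for illustration, not values taken from this article.

```python
from urllib.parse import urlencode

def build_authorize_url(tenant, client_id, redirect_uri, scopes, state):
    """Build the /authorize URL that starts the authorization code flow."""
    params = {
        "client_id": client_id,
        "response_type": "code",   # request an authorization code
        "redirect_uri": redirect_uri,
        "response_mode": "query",
        "scope": " ".join(scopes),
        "state": state,            # round-tripped back to detect CSRF
    }
    return (
        f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?"
        + urlencode(params)
    )

# Placeholder values; a real app uses its registered client ID and redirect URI.
url = build_authorize_url(
    tenant="common",
    client_id="00000000-0000-0000-0000-000000000000",
    redirect_uri="http://localhost:5000/getAToken",
    scopes=["openid", "offline_access", "https://graph.microsoft.com/User.Read"],
    state="12345",
)
print(url)
```

The user's browser is sent to this URL; after sign-in and consent, the authorization code comes back on the redirect URI and is then redeemed at the `/token` endpoint.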
[!INCLUDE [try-in-postman-link](includes/try-in-postman-link.md)]

## Protocol diagram
-At a high level, the entire authentication flow for an application looks a bit like this:
+This diagram provides a high-level overview of the authentication flow for an application:
-![OAuth Auth Code Flow](./media/v2-oauth2-auth-code-flow/convergence-scenarios-native.svg)
+![Diagram shows OAuth authorization code flow. Native app and Web A P I interact by using tokens as described in this article.](./media/v2-oauth2-auth-code-flow/convergence-scenarios-native.svg)
## Redirect URI setup required for single-page apps
-The authorization code flow for single page applications requires some additional setup. Follow the instructions for [creating your single-page application](scenario-spa-app-registration.md#redirect-uri-msaljs-20-with-auth-code-flow) to correctly mark your redirect URI as enabled for CORS. To update an existing redirect URI to enable CORS, open the manifest editor and set the `type` field for your redirect URI to `spa` in the `replyUrlsWithType` section. You can also click on the redirect URI in the "Web" section of the Authentication tab, and select the URIs you want to migrate to using the authorization code flow.
+The authorization code flow for single page applications requires additional setup. Follow the instructions for [creating your single-page application](scenario-spa-app-registration.md#redirect-uri-msaljs-20-with-auth-code-flow) to correctly mark your redirect URI as enabled for Cross-Origin Resource Sharing (CORS). To update an existing redirect URI to enable CORS, open the manifest editor and set the `type` field for your redirect URI to `spa` in the `replyUrlsWithType` section. Or, you can select the redirect URI in **Authentication** > **Web** and select URIs to migrate to using the authorization code flow.
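+As an illustration, a `spa` entry in the `replyUrlsWithType` section of the app manifest looks roughly like the following fragment. The URL shown is a placeholder; use your app's actual redirect URI.

```json
"replyUrlsWithType": [
    {
        "url": "http://localhost/myapp/",
        "type": "spa"
    }
]
```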
The `spa` redirect type is backwards compatible with the implicit flow. Apps currently using the implicit flow to get tokens can move to the `spa` redirect URI type without issues and continue using the implicit flow.
-If you attempt to use the authorization code flow and see this error:
+If you attempt to use the authorization code flow without setting up CORS for your redirect URI, you will see this error in the console:
-`access to XMLHttpRequest at 'https://login.microsoftonline.com/common/oauth2/v2.0/token' from origin 'yourApp.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.`
+```http
+access to XMLHttpRequest at 'https://login.microsoftonline.com/common/oauth2/v2.0/token' from origin 'yourApp.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
+```
-Then, visit your app registration and update the redirect URI for your app to type `spa`.
+If so, visit your app registration and update the redirect URI for your app to use the `spa` type.
-Applications cannot use a `spa` redirect URI with non-SPA flows, for example native applications or client credential flows. To ensure security and best practices, the Microsoft Identity platform will return an error if you attempt to use use a `spa` redirect URI without an `Origin` header. Similarly, the Microsoft Identity platform also prevents the use of client credentials (in the OBO flow, client credentials flow, and auth code flow) in the presence of an `Origin` header, to ensure that secrets are not used from within the browser.
+Applications can't use a `spa` redirect URI with non-SPA flows, for example, native applications or client credential flows. To ensure security and best practices, the Microsoft identity platform returns an error if you attempt to use a `spa` redirect URI without an `Origin` header. Similarly, the Microsoft identity platform also prevents the use of client credentials in all flows in the presence of an `Origin` header, to ensure that secrets aren't used from within the browser.
## Request an authorization code
-The authorization code flow begins with the client directing the user to the `/authorize` endpoint. In this request, the client requests the `openid`, `offline_access`, and `https://graph.microsoft.com/mail.read ` permissions from the user. Some permissions are admin-restricted, for example writing data to an organization's directory by using `Directory.ReadWrite.All`. If your application requests access to one of these permissions from an organizational user, the user receives an error message that says they're not authorized to consent to your app's permissions. To request access to admin-restricted scopes, you should request them directly from a Global Administrator. For more information, read [Admin-restricted permissions](v2-permissions-and-consent.md#admin-restricted-permissions).
+The authorization code flow begins with the client directing the user to the `/authorize` endpoint. In this request, the client requests the `openid`, `offline_access`, and `https://graph.microsoft.com/mail.read` permissions from the user.
-```
+Some permissions are admin-restricted, for example, writing data to an organization's directory by using `Directory.ReadWrite.All`. If your application requests access to one of these permissions from an organizational user, the user receives an error message that says they're not authorized to consent to your app's permissions. To request access to admin-restricted scopes, you should request them directly from a Global Administrator. For more information, see [Admin-restricted permissions](v2-permissions-and-consent.md#admin-restricted-permissions).
+
+```http
// Line breaks for legibility only

https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
```

> [!TIP]
-> Click the link below to execute this request! After signing in, your browser should be redirected to `http://localhost/myapp/` with a `code` in the address bar.
+> Select the link below to execute this request! After signing in, your browser should be redirected to `http://localhost/myapp/` with a `code` in the address bar.
> <a href="https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=6731de76-14a6-49ae-97bc-6eba6914391e&response_type=code&redirect_uri=http%3A%2F%2Flocalhost%2Fmyapp%2F&response_mode=query&scope=openid%20offline_access%20https%3A%2F%2Fgraph.microsoft.com%2Fmail.read&state=12345" target="_blank">https://login.microsoftonline.com/common/oauth2/v2.0/authorize...</a>

| Parameter | Required/optional | Description |
|--|-|--|
-| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). Critically, for guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to correctly sign them into the resource tenant.|
-| `client_id` | required | The **Application (client) ID** that the [Azure portal ΓÇô App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
-| `response_type` | required | Must include `code` for the authorization code flow. Can also include `id_token` or `token` if using the [hybrid flow](#request-an-id-token-as-well-hybrid-flow). |
-| `redirect_uri` | required | The redirect_uri of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect_uris you registered in the portal, except it must be URL-encoded. For native & mobile apps, you should use one of the recommended values - `https://login.microsoftonline.com/common/oauth2/nativeclient` (for apps using embedded browsers) or `http://localhost` (for apps that use system browsers). |
-| `scope` | required | A space-separated list of [scopes](v2-permissions-and-consent.md) that you want the user to consent to. For the `/authorize` leg of the request, this can cover multiple resources, allowing your app to get consent for multiple web APIs you want to call. |
-| `response_mode` | recommended | Specifies the method that should be used to send the resulting token back to your app. Can be one of the following:<br/><br/>- `query`<br/>- `fragment`<br/>- `form_post`<br/><br/>`query` provides the code as a query string parameter on your redirect URI. If you're requesting an ID token using the implicit flow, you can't use `query` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). If you're requesting just the code, you can use `query`, `fragment`, or `form_post`. `form_post` executes a POST containing the code to your redirect URI. |
-| `state` | recommended | A value included in the request that will also be returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The value can also encode information about the user's state in the app before the authentication request occurred, such as the page or view they were on. |
-| `prompt` | optional | Indicates the type of user interaction that is required. The only valid values at this time are `login`, `none`, `consent`, and `select_account`.<br/><br/>- `prompt=login` will force the user to enter their credentials on that request, negating single-sign on.<br/>- `prompt=none` is the opposite - it will ensure that the user isn't presented with any interactive prompt whatsoever. If the request can't be completed silently via single-sign on, the Microsoft identity platform will return an `interaction_required` error.<br/>- `prompt=consent` will trigger the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app.<br/>- `prompt=select_account` will interrupt single sign-on providing account selection experience listing all the accounts either in session or any remembered account or an option to choose to use a different account altogether.<br/> |
-| `login_hint` | Optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user, if you know the username ahead of time. Often, apps use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
-| `domain_hint` | optional | If included, it will skip the email-based discovery process that user goes through on the sign-in page, leading to a slightly more streamlined user experience - for example, sending them to their federated identity provider. Often apps will use this parameter during re-authentication, by extracting the `tid` from a previous sign-in. |
-| `code_challenge` | recommended / required | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is now recommended for all application types - both public and confidential clients - and required by the Microsoft identity platform for [single page apps using the authorization code flow](reference-third-party-cookies-spas.md). |
-| `code_challenge_method` | recommended / required | The method used to encode the `code_verifier` for the `code_challenge` parameter. This *SHOULD* be `S256`, but the spec allows the use of `plain` if for some reason the client cannot support SHA256. <br/><br/>If excluded, `code_challenge` is assumed to be plaintext if `code_challenge` is included. The Microsoft identity platform supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is required for [single page apps using the authorization code flow](reference-third-party-cookies-spas.md).|
--
-At this point, the user will be asked to enter their credentials and complete the authentication. The Microsoft identity platform will also ensure that the user has consented to the permissions indicated in the `scope` query parameter. If the user has not consented to any of those permissions, it will ask the user to consent to the required permissions. Details of [permissions, consent, and multi-tenant apps are provided here](v2-permissions-and-consent.md).
-
-Once the user authenticates and grants consent, the Microsoft identity platform will return a response to your app at the indicated `redirect_uri`, using the method specified in the `response_mode` parameter.
+| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. Valid values are `common`, `organizations`, `consumers`, and tenant identifiers. For guest scenarios where you sign a user from one tenant into another tenant, you *must* provide the tenant identifier to sign them into the resource tenant. For more information, see [Endpoints](active-directory-v2-protocols.md#endpoints). |
+| `client_id` | required | The **Application (client) ID** that the [Azure portal ΓÇô App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
+| `response_type` | required | Must include `code` for the authorization code flow. Can also include `id_token` or `token` if using the [hybrid flow](#request-an-id-token-as-well-or-hybrid-flow). |
+| `redirect_uri` | required | The `redirect_uri` of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except it must be URL-encoded. For native and mobile apps, use one of the recommended values: `https://login.microsoftonline.com/common/oauth2/nativeclient` for apps using embedded browsers or `http://localhost` for apps that use system browsers. |
+| `scope` | required | A space-separated list of [scopes](v2-permissions-and-consent.md) that you want the user to consent to. For the `/authorize` leg of the request, this parameter can cover multiple resources. This value allows your app to get consent for multiple web APIs you want to call. |
+| `response_mode` | recommended | Specifies the method that should be used to send the resulting token back to your app. It can be one of the following values:<br/><br/>- `query`<br/>- `fragment`<br/>- `form_post`<br/><br/>`query` provides the code as a query string parameter on your redirect URI. If you're requesting an ID token using the implicit flow, you can't use `query` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). If you're requesting just the code, you can use `query`, `fragment`, or `form_post`. `form_post` executes a POST containing the code to your redirect URI. |
+| `state` | recommended | A value included in the request that is also returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The value can also encode information about the user's state in the app before the authentication request occurred. For instance, it could encode the page or view they were on. |
+| `prompt` | optional | Indicates the type of user interaction that is required. Valid values are `login`, `none`, `consent`, and `select_account`.<br/><br/>- `prompt=login` forces the user to enter their credentials on that request, negating single sign-on.<br/>- `prompt=none` is the opposite. It ensures that the user isn't presented with any interactive prompt. If the request can't be completed silently by using single sign-on, the Microsoft identity platform returns an `interaction_required` error.<br/>- `prompt=consent` triggers the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app.<br/>- `prompt=select_account` interrupts single sign-on and provides an account selection experience that lists all the accounts in session, any remembered accounts, and an option to use a different account altogether.<br/> |
+| `login_hint` | optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user. Apps can use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
+| `domain_hint` | optional | If included, the sign-in experience skips the email-based discovery process that the user goes through on the sign-in page, leading to a slightly more streamlined user experience. For example, sending them to their federated identity provider. Apps can use this parameter during reauthentication, by extracting the `tid` from a previous sign-in. |
+| `code_challenge` | recommended / required | Used to secure authorization code grants by using Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This parameter is now recommended for all application types, both public and confidential clients, and required by the Microsoft identity platform for [single page apps using the authorization code flow](reference-third-party-cookies-spas.md). |
+| `code_challenge_method` | recommended / required | The method used to encode the `code_verifier` for the `code_challenge` parameter. This *SHOULD* be `S256`, but the spec allows the use of `plain` if the client can't support SHA256. <br/><br/>If excluded, `code_challenge` is assumed to be plaintext if `code_challenge` is included. The Microsoft identity platform supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This parameter is required for [single page apps using the authorization code flow](reference-third-party-cookies-spas.md).|
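+A minimal sketch of generating a PKCE pair per RFC 7636 with Python's standard library (the function name is illustrative, not part of any SDK):

```python
import base64
import hashlib
import os

def make_pkce_pair():
    # code_verifier: high-entropy string from the unreserved character set
    # (43-128 chars, RFC 7636 section 4.1); base64url of 32 random bytes gives 43.
    verifier = base64.urlsafe_b64encode(os.urandom(32)).rstrip(b"=").decode("ascii")
    # code_challenge for the S256 method: BASE64URL(SHA256(verifier)), no padding.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

Send the `code_challenge` (with `code_challenge_method=S256`) on the `/authorize` request, and hold the `code_verifier` until token redemption.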
+
+At this point, the user is asked to enter their credentials and complete the authentication. The Microsoft identity platform also ensures that the user has consented to the permissions indicated in the `scope` query parameter. If the user hasn't consented to any of those permissions, it asks the user to consent to the required permissions. For more information, see [Permissions and consent in the Microsoft identity platform](v2-permissions-and-consent.md).
+
+Once the user authenticates and grants consent, the Microsoft identity platform returns a response to your app at the indicated `redirect_uri`, using the method specified in the `response_mode` parameter.
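+To make the request above concrete, here's a sketch that assembles the `/authorize` URL with Python's standard library. The `client_id`, `redirect_uri`, and `state` values are the sample values from this article, not real credentials:

```python
from urllib.parse import urlencode

# Sample values from this article; substitute your own app registration values.
params = {
    "client_id": "6731de76-14a6-49ae-97bc-6eba6914391e",
    "response_type": "code",
    "redirect_uri": "http://localhost/myapp/",
    "response_mode": "query",
    "scope": "openid offline_access https://graph.microsoft.com/mail.read",
    "state": "12345",
}

# urlencode handles the URL-encoding of redirect_uri and scope for you.
authorize_url = (
    "https://login.microsoftonline.com/common/oauth2/v2.0/authorize?"
    + urlencode(params)
)
```

Direct the user's browser to `authorize_url`; after sign-in and consent, the `code` arrives at the redirect URI.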
#### Successful response
-A successful response using `response_mode=query` looks like:
+This example shows a successful response using `response_mode=query`:
```HTTP
GET http://localhost?
code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...
| Parameter | Description |
|--|--|
-| `code` | The authorization_code that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization_codes are short lived, typically they expire after about 10 minutes. |
-| `state` | If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |
+| `code` | The `authorization_code` that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization codes are short lived. Typically, they expire after about 10 minutes. |
+| `state` | If a `state` parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |
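+As a sketch of the verification described above (the helper name is hypothetical), an app can parse the redirect and compare `state` before trusting the `code`:

```python
from urllib.parse import parse_qs, urlparse

def extract_code(redirect_url, expected_state):
    """Return the authorization code, rejecting responses whose state doesn't match."""
    query = parse_qs(urlparse(redirect_url).query)
    if query.get("state", [None])[0] != expected_state:
        # A mismatched state can indicate a cross-site request forgery attempt.
        raise ValueError("state mismatch; discard this response")
    return query["code"][0]
```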
-You can also receive an ID token if you request one and have the implicit grant enabled in your application registration. This is sometimes referred to as the ["hybrid flow"](#request-an-id-token-as-well-hybrid-flow), and is used by frameworks like ASP.NET.
+You can also receive an ID token if you request one and have the implicit grant enabled in your application registration. This behavior is sometimes referred to as the [*hybrid flow*](#request-an-id-token-as-well-or-hybrid-flow). It's used by frameworks like ASP.NET.
#### Error response
error=access_denied
| Parameter | Description |
|--|--|
-| `error` | An error code string that can be used to classify types of errors that occur, and can be used to react to errors. |
-| `error_description` | A specific error message that can help a developer identify the root cause of an authentication error. |
+| `error` | An error code string that can be used to classify types of errors, and to react to errors. This part of the error is provided so that the app can react appropriately to the error, but does not explain in depth why an error occurred. |
+| `error_description` | A specific error message that can help a developer identify the cause of an authentication error. This part of the error contains most of the useful information about _why_ the error occurred. |
#### Error codes for authorization endpoint errors
The following table describes the various error codes that can be returned in the `error` parameter of the error response.
| Error Code | Description | Client Action |
|-|-|--|
-| `invalid_request` | Protocol error, such as a missing required parameter. | Fix and resubmit the request. This is a development error typically caught during initial testing. |
+| `invalid_request` | Protocol error, such as a missing required parameter. | Fix and resubmit the request. This error is a development error typically caught during initial testing. |
| `unauthorized_client` | The client application isn't permitted to request an authorization code. | This error usually occurs when the client application isn't registered in Azure AD or isn't added to the user's Azure AD tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. |
-| `access_denied` | Resource owner denied consent | The client application can notify the user that it can't proceed unless the user consents. |
-| `unsupported_response_type` | The authorization server does not support the response type in the request. | Fix and resubmit the request. This is a development error typically caught during initial testing. When seen in the [hybrid flow](#request-an-id-token-as-well-hybrid-flow), signals that you must enable the ID token implicit grant setting on the client app registration. |
-| `server_error` | The server encountered an unexpected error.| Retry the request. These errors can result from temporary conditions. The client application might explain to the user that its response is delayed to a temporary error. |
+| `access_denied` | Resource owner denied consent. | The client application can notify the user that it can't continue unless the user consents. |
+| `unsupported_response_type` | The authorization server doesn't support the response type in the request. | Fix and resubmit the request. This error is a development error typically caught during initial testing. In the [hybrid flow](#request-an-id-token-as-well-or-hybrid-flow), this error signals that you must enable the ID token implicit grant setting on the client app registration. |
+| `server_error` | The server encountered an unexpected error.| Retry the request. These errors can result from temporary conditions. The client application might explain to the user that its response is delayed due to a temporary error. |
| `temporarily_unavailable` | The server is temporarily too busy to handle the request. | Retry the request. The client application might explain to the user that its response is delayed because of a temporary condition. |
-| `invalid_resource` | The target resource is invalid because it does not exist, Azure AD can't find it, or it's not correctly configured. | This error indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. |
-| `login_required` | Too many or no users found | The client requested silent authentication (`prompt=none`), but a single user could not found. This may mean there are multiple users active in the session, or no users. This takes into account the tenant chosen (for example, if there are two Azure AD accounts active and one Microsoft account, and `consumers` is chosen, silent authentication will work). |
-| `interaction_required` | The request requires user interaction. | An additional authentication step or consent is required. Retry the request without `prompt=none`. |
+| `invalid_resource` | The target resource is invalid because it does not exist, Azure AD can't find it, or it's not correctly configured. | This error indicates the resource, if it exists, hasn't been configured in the tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. |
+| `login_required` | Too many or no users found. | The client requested silent authentication (`prompt=none`), but a single user couldn't be found. This error may mean there are multiple users active in the session, or no users. This error takes into account the tenant chosen. For example, if there are two Azure AD accounts active and one Microsoft account, and `consumers` is chosen, silent authentication works. |
+| `interaction_required` | The request requires user interaction. | Another authentication step or consent is required. Retry the request without `prompt=none`. |
-### Request an ID token as well (hybrid flow)
+### Request an ID token as well or hybrid flow
-To learn who the user is before redeeming an authorization code, it's common for applications to also request an ID token when they request the authorization code. This is called the *hybrid flow* because it mixes the implicit grant with the authorization code flow. The hybrid flow is commonly used in web apps that want to render a page for a user without blocking on code redemption, notably [ASP.NET](quickstart-v2-aspnet-core-webapp.md). Both single-page apps and traditional web apps benefit from reduced latency in this model.
+To learn who the user is before redeeming an authorization code, it's common for applications to also request an ID token when they request the authorization code. This approach is called the *hybrid flow* because it mixes the implicit grant with the authorization code flow.
-The hybrid flow is the same as the authorization code flow described earlier but with three additions, all of which are required to request an ID token: new scopes, a new response_type, and a new `nonce` query parameter.
+The hybrid flow is commonly used in web apps to render a page for a user without blocking on code redemption, notably in [ASP.NET](quickstart-v2-aspnet-core-webapp.md). Both single-page apps and traditional web apps benefit from reduced latency in this model.
-```
+The hybrid flow is the same as the authorization code flow described earlier but with three additions. All of these additions are required to request an ID token: new scopes, a new `response_type`, and a new `nonce` query parameter.
+
+```http
// Line breaks for legibility only

https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| Updated Parameter | Required/optional | Description |
|--|-|--|
-|`response_type`| Required | The addition of `id_token` indicates to the server that the application would like an ID token in the response from the `/authorize` endpoint. |
-|`scope`| Required | For ID tokens, must be updated to include the ID token scopes - `openid`, and optionally `profile` and `email`. |
-|`nonce`| Required| A value included in the request, generated by the app, that will be included in the resulting id_token as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. |
-|`response_mode`| Recommended | Specifies the method that should be used to send the resulting token back to your app. Defaults to `query` for just an authorization code, but `fragment` if the request includes an id_token `response_type`. However, apps are recommended to use `form_post`, especially when using `http://localhost` as a redirect URI. |
+|`response_type`| required | The addition of `id_token` indicates to the server that the application would like an ID token in the response from the `/authorize` endpoint. |
+|`scope`| required | For ID tokens, this parameter must be updated to include the ID token scopes: `openid` and optionally `profile` and `email`. |
+|`nonce`| required| A value included in the request, generated by the app, that is included in the resulting `id_token` as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. |
+|`response_mode`| recommended | Specifies the method that should be used to send the resulting token back to your app. Default value is `query` for just an authorization code, but `fragment` if the request includes an `id_token` `response_type`. We recommend apps use `form_post`, especially when using `http://localhost` as a redirect URI. |
-The use of `fragment` as a response mode causes issues for web apps that read the code from the redirect, as browsers do not pass the fragment to the web server. In these situations, apps should use the `form_post` response mode to ensure that all data is sent to the server.
+The use of `fragment` as a response mode causes issues for web apps that read the code from the redirect. Browsers don't pass the fragment to the web server. In these situations, apps should use the `form_post` response mode to ensure that all data is sent to the server.
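+The `state` and `nonce` values are opaque to the identity platform; the app only needs them to be fresh and unguessable per request. One way to generate them with Python's standard library (the helper name is illustrative):

```python
import secrets

def new_request_values():
    # Fresh, unguessable values for each authorization request:
    # state round-trips to detect CSRF; nonce lands in the id_token to detect replay.
    return {"state": secrets.token_urlsafe(24), "nonce": secrets.token_urlsafe(24)}
```

Store both values (for example, in the user's session) so they can be checked when the response comes back.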
#### Successful response
-A successful response using `response_mode=fragment` looks like:
+This example shows a successful response using `response_mode=fragment`:
-```HTTP
+```http
GET https://login.microsoftonline.com/common/oauth2/nativeclient#
code=AwABAAAAvPM1KaPlrEqdFSBzjqfTGBCmLdgfSTLEMPGYuNHSUYBrq...
&id_token=eYj...
| Parameter | Description |
|--|--|
| `code` | The authorization code that the app requested. The app can use the authorization code to request an access token for the target resource. Authorization codes are short lived, typically expiring after about 10 minutes. |
-| `id_token` | An ID token for the user, issued via *implicit grant*. Contains a special `c_hash` claim that is the hash of the `code` in the same request. |
-| `state` | If a state parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |
+| `id_token` | An ID token for the user, issued by using the *implicit grant*. Contains a special `c_hash` claim that is the hash of the `code` in the same request. |
+| `state` | If a `state` parameter is included in the request, the same value should appear in the response. The app should verify that the state values in the request and response are identical. |
## Redeem a code for an access token
-All confidential clients have a choice of using client secrets (symmetric shared secrets generated by the Microsoft identity platform) and [certificate credentials](active-directory-certificate-credentials.md)(asymmetric keys uploaded by the developer). For best security, we recommend using certificate credentials. Public clients (native applications and single page apps) must not use secrets or certificates when redeeming an authorization code - always ensure that your redirect URIs correctly indicate the type of application and [are unique](reply-url.md#localhost-exceptions).
+All confidential clients have a choice of using client secrets or certificate credentials. Symmetric shared secrets are generated by the Microsoft identity platform. Certificate credentials are asymmetric keys uploaded by the developer. For more information, see [Microsoft identity platform application authentication certificate credentials](active-directory-certificate-credentials.md).
+
+For best security, we recommend using certificate credentials. Public clients, which include native applications and single page apps, must not use secrets or certificates when redeeming an authorization code. Always ensure that your redirect URIs correctly indicate the type of application and [are unique](reply-url.md#localhost-exceptions).
### Request an access token with a client_secret
-Now that you've acquired an authorization_code and have been granted permission by the user, you can redeem the `code` for an `access_token` to the desired resource. Do this by sending a `POST` request to the `/token` endpoint:
+Now that you've acquired an `authorization_code` and have been granted permission by the user, you can redeem the `code` for an `access_token` to the resource. Redeem the `code` by sending a `POST` request to the `/token` endpoint:
-```HTTP
+```http
// Line breaks for legibility only

POST /{tenant}/oauth2/v2.0/token HTTP/1.1
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| Parameter | Required/optional | Description |
|---|---|---|
-| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). |
-| `client_id` | required | The Application (client) ID that the [Azure portal ΓÇô App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page assigned to your app. |
-| `scope` | optional | A space-separated list of scopes. The scopes must all be from a single resource, along with OIDC scopes (`profile`, `openid`, `email`). For a more detailed explanation of scopes, refer to [permissions, consent, and scopes](v2-permissions-and-consent.md). This is a Microsoft extension to the authorization code flow, intended to allow apps to declare the resource they want the token for during token redemption.|
-| `code` | required | The authorization_code that you acquired in the first leg of the flow. |
-| `redirect_uri` | required | The same redirect_uri value that was used to acquire the authorization_code. |
-| `grant_type` | required | Must be `authorization_code` for the authorization code flow. |
-| `code_verifier` | recommended | The same code_verifier that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
-| `client_secret` | required for confidential web apps | The application secret that you created in the app registration portal for your app. You shouldn't use the application secret in a native app or single page app because client_secrets can't be reliably stored on devices or web pages. It's required for web apps and web APIs, which have the ability to store the client_secret securely on the server side. Like all parameters discussed here, the client secret must be URL-encoded before being sent, a step usually performed by the SDK. For more information on URI encoding, see the [URI Generic Syntax specification](https://tools.ietf.org/html/rfc3986#page-12). The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. |
+| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. Valid values are `common`, `organizations`, `consumers`, and tenant identifiers. For more information, see [Endpoints](active-directory-v2-protocols.md#endpoints). |
+| `client_id` | required | The **Application (client) ID** that the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page assigned to your app. |
+| `scope` | optional | A space-separated list of scopes. The scopes must all be from a single resource, along with OIDC scopes (`profile`, `openid`, `email`). For more information, see [Permissions and consent in the Microsoft identity platform](v2-permissions-and-consent.md). This parameter is a Microsoft extension to the authorization code flow, intended to allow apps to declare the resource they want the token for during token redemption.|
+| `code` | required | The `authorization_code` that you acquired in the first leg of the flow. |
+| `redirect_uri` | required | The same `redirect_uri` value that was used to acquire the `authorization_code`. |
+| `grant_type` | required | Must be `authorization_code` for the authorization code flow. |
+| `code_verifier` | recommended | The same `code_verifier` that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
+| `client_secret` | required for confidential web apps | The application secret that you created in the app registration portal for your app. Don't use the application secret in a native app or single page app because a `client_secret` can't be reliably stored on devices or web pages. It's required for web apps and web APIs, which can store the `client_secret` securely on the server side. Like all parameters here, the client secret must be URL-encoded before being sent. This step is usually done by the SDK. For more information on URI encoding, see the [URI Generic Syntax specification](https://tools.ietf.org/html/rfc3986#page-12). The Basic auth pattern of instead providing credentials in the Authorization header, per [RFC 6749](https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1) is also supported. |
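For illustration, the request described above can be assembled with nothing more than the Python standard library. This is a hedged sketch, not the recommended approach (use MSAL in production); the function name and default scope are our own, and `urlencode` performs the URL-encoding of the client secret noted in the table:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_token_request(tenant: str, client_id: str, client_secret: str,
                        code: str, redirect_uri: str, code_verifier: str,
                        scope: str = "openid offline_access") -> Request:
    """Build (but don't send) the POST to /token that redeems an authorization code."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    body = urlencode({
        "client_id": client_id,
        "client_secret": client_secret,   # confidential clients only
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": redirect_uri,
        "code_verifier": code_verifier,   # required if PKCE was used
        "scope": scope,
    }).encode("ascii")                    # urlencode URL-encodes every value
    return Request(url, data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})
```

Send the result with any HTTP client (for example, `urllib.request.urlopen`), then parse the JSON token response.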
### Request an access token with a certificate credential
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| Parameter | Required/optional | Description |
|---|---|---|
-| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). |
-| `client_id` | required | The Application (client) ID that the [Azure portal ΓÇô App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page assigned to your app. |
-| `scope` | optional | A space-separated list of scopes. The scopes must all be from a single resource, along with OIDC scopes (`profile`, `openid`, `email`). For a more detailed explanation of scopes, refer to [permissions, consent, and scopes](v2-permissions-and-consent.md). This is a Microsoft extension to the authorization code flow, intended to allow apps to declare the resource they want the token for during token redemption.|
-| `code` | required | The authorization_code that you acquired in the first leg of the flow. |
-| `redirect_uri` | required | The same redirect_uri value that was used to acquire the authorization_code. |
+| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. Valid values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [Endpoints](active-directory-v2-protocols.md#endpoints). |
+| `client_id` | required | The **Application (client) ID** that the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) page assigned to your app. |
+| `scope` | optional | A space-separated list of scopes. The scopes must all be from a single resource, along with OIDC scopes (`profile`, `openid`, `email`). For more information, see [Permissions and consent in the Microsoft identity platform](v2-permissions-and-consent.md). This parameter is a Microsoft extension to the authorization code flow. This extension allows apps to declare the resource they want the token for during token redemption.|
+| `code` | required | The `authorization_code` that you acquired in the first leg of the flow. |
+| `redirect_uri` | required | The same `redirect_uri` value that was used to acquire the `authorization_code`. |
| `grant_type` | required | Must be `authorization_code` for the authorization code flow. |
-| `code_verifier` | recommended | The same code_verifier that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
-| `client_assertion_type` | required for confidential web apps | The value must be set to `urn:ietf:params:oauth:client-assertion-type:jwt-bearer` in order to use a certificate credential. |
-| `client_assertion` | required for confidential web apps | An assertion (a JSON web token) that you need to create and sign with the certificate you registered as credentials for your application. Read about [certificate credentials](active-directory-certificate-credentials.md) to learn how to register your certificate and the format of the assertion.|
+| `code_verifier` | recommended | The same `code_verifier` that was used to obtain the `authorization_code`. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
+| `client_assertion_type` | required for confidential web apps | The value must be set to `urn:ietf:params:oauth:client-assertion-type:jwt-bearer` to use a certificate credential. |
+| `client_assertion` | required for confidential web apps | An assertion, which is a JSON web token (JWT), that you need to create and sign with the certificate you registered as credentials for your application. Read about [certificate credentials](active-directory-certificate-credentials.md) to learn how to register your certificate and the format of the assertion.|
-Notice that the parameters are same as in the case of the request by shared secret except that the `client_secret` parameter is replaced by two parameters: a `client_assertion_type` and `client_assertion`.
+The parameters are the same as in the request by shared secret, except that the `client_secret` parameter is replaced by two parameters: `client_assertion_type` and `client_assertion`.
### Successful response
-A successful token response will look like:
+This example shows a successful token response:
```json
{
A successful token response will look like:
| Parameter | Description |
|---|---|
| `access_token` | The requested access token. The app can use this token to authenticate to the secured resource, such as a web API. |
-| `token_type` | Indicates the token type value. The only type that Azure AD supports is Bearer |
-| `expires_in` | How long the access token is valid (in seconds). |
-| `scope` | The scopes that the access_token is valid for. Optional - this is non-standard, and if omitted the token will be for the scopes requested on the initial leg of the flow. |
-| `refresh_token` | An OAuth 2.0 refresh token. The app can use this token acquire additional access tokens after the current access token expires. Refresh_tokens are long-lived, and can be used to retain access to resources for extended periods of time. For more detail on refreshing an access token, refer to the [section below](#refresh-the-access-token). <br> **Note:** Only provided if `offline_access` scope was requested. |
-| `id_token` | A JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, and confidential clients can use this for authorization. For more information about id_tokens, see the [`id_token reference`](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested. |
+| `token_type` | Indicates the token type value. The only type that Azure AD supports is `Bearer`. |
+| `expires_in` | How long the access token is valid, in seconds. |
+| `scope` | The scopes that the `access_token` is valid for. Optional. This parameter is non-standard and, if omitted, the token is for the scopes requested on the initial leg of the flow. |
+| `refresh_token` | An OAuth 2.0 refresh token. The app can use this token to acquire other access tokens after the current access token expires. Refresh tokens are long-lived. They can maintain access to resources for extended periods. For more detail on refreshing an access token, refer to [Refresh the access token](#refresh-the-access-token) later in this article.<br> **Note:** Only provided if `offline_access` scope was requested. |
+| `id_token` | A JSON Web Token. The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, and confidential clients can use this token for authorization. For more information, see the [`id_token` reference](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested. |
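To illustrate what "decode the segments" means, here's a minimal Python sketch (a hypothetical helper of ours; it does not validate the signature, so don't use it to make authorization or security decisions):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the middle (payload) segment of a JWT without validating it.

    Each JWT segment is base64url-encoded with padding stripped; the
    padding must be restored before decoding.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped '=' padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))
```

A real app should verify the token's signature and claims with a vetted library before trusting any of these values.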
### Error response
-Error responses will look like:
+This example shows an error response:
```json
{
Error responses will look like:
| Parameter | Description |
|---|---|
-| `error` | An error code string that can be used to classify types of errors that occur, and can be used to react to errors. |
-| `error_description` | A specific error message that can help a developer identify the root cause of an authentication error. |
+| `error` | An error code string that can be used to classify types of errors, and to react to errors. |
+| `error_description` | A specific error message that can help a developer identify the cause of an authentication error. |
| `error_codes` | A list of STS-specific error codes that can help in diagnostics. |
| `timestamp` | The time at which the error occurred. |
| `trace_id` | A unique identifier for the request that can help in diagnostics. |
Error responses will look like:
| Error Code | Description | Client Action |
|---|---|---|
-| `invalid_request` | Protocol error, such as a missing required parameter. | Fix the request or app registration and resubmit the request |
-| `invalid_grant` | The authorization code or PKCE code verifier is invalid or has expired. | Try a new request to the `/authorize` endpoint and verify that the code_verifier parameter was correct. |
-| `unauthorized_client` | The authenticated client isn't authorized to use this authorization grant type. | This usually occurs when the client application isn't registered in Azure AD or isn't added to the user's Azure AD tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. |
+| `invalid_request` | Protocol error, such as a missing required parameter. | Fix the request or app registration and resubmit the request. |
+| `invalid_grant` | The authorization code or PKCE code verifier is invalid or has expired. | Try a new request to the `/authorize` endpoint and verify that the `code_verifier` parameter was correct. |
+| `unauthorized_client` | The authenticated client isn't authorized to use this authorization grant type. | This error usually occurs when the client application isn't registered in Azure AD or isn't added to the user's Azure AD tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. |
| `invalid_client` | Client authentication failed. | The client credentials aren't valid. To fix, the application administrator updates the credentials. |
-| `unsupported_grant_type` | The authorization server does not support the authorization grant type. | Change the grant type in the request. This type of error should occur only during development and be detected during initial testing. |
-| `invalid_resource` | The target resource is invalid because it does not exist, Azure AD can't find it, or it's not correctly configured. | This indicates the resource, if it exists, has not been configured in the tenant. The application can prompt the user with instruction for installing the application and adding it to Azure AD. |
-| `interaction_required` | Non-standard, as the OIDC specification calls for this only on the `/authorize` endpoint. The request requires user interaction. For example, an additional authentication step is required. | Retry the `/authorize` request with the same scopes. |
+| `unsupported_grant_type` | The authorization server doesn't support the authorization grant type. | Change the grant type in the request. This type of error should occur only during development and be detected during initial testing. |
+| `invalid_resource` | The target resource is invalid because it doesn't exist, Azure AD can't find it, or it's not correctly configured. | This code indicates the resource, if it exists, hasn't been configured in the tenant. The application can prompt the user with instructions for installing the application and adding it to Azure AD. |
+| `interaction_required` | Non-standard, as the OIDC specification calls for this code only on the `/authorize` endpoint. The request requires user interaction. For example, another authentication step is required. | Retry the `/authorize` request with the same scopes. |
| `temporarily_unavailable` | The server is temporarily too busy to handle the request. | Retry the request after a small delay. The client application might explain to the user that its response is delayed because of a temporary condition. |
-|`consent_required` | The request requires user consent. This error is non-standard, as it's usually only returned on the `/authorize` endpoint per OIDC specifications. Returned when a `scope` parameter was used on the code redemption flow that the client app does not have permission to request. | The client should send the user back to the `/authorize` endpoint with the correct scope in order to trigger consent. |
-|`invalid_scope` | The scope requested by the app is invalid. | Update the value of the scope parameter in the authentication request to a valid value. |
+|`consent_required` | The request requires user consent. This error is non-standard. It's usually only returned on the `/authorize` endpoint per OIDC specifications. Returned when a `scope` parameter was used on the code redemption flow that the client app doesn't have permission to request. | The client should send the user back to the `/authorize` endpoint with the correct scope to trigger consent. |
+|`invalid_scope` | The scope requested by the app is invalid. | Update the value of the `scope` parameter in the authentication request to a valid value. |
> [!NOTE]
-> Single page apps may receive an `invalid_request` error indicating that cross-origin token redemption is permitted only for the 'Single-Page Application' client-type. This indicates that the redirect URI used to request the token has not been marked as a `spa` redirect URI. Review the [application registration steps](#redirect-uri-setup-required-for-single-page-apps) on how to enable this flow.
+> Single page apps might receive an `invalid_request` error indicating that cross-origin token redemption is permitted only for the 'Single-Page Application' client-type. This error indicates that the redirect URI used to request the token hasn't been marked as a `spa` redirect URI. Review the [application registration steps](#redirect-uri-setup-required-for-single-page-apps) on how to enable this flow.
## Use the access token

Now that you've successfully acquired an `access_token`, you can use the token in requests to web APIs by including it in the `Authorization` header:
-```HTTP
+```http
GET /v1.0/me/messages
Host: https://graph.microsoft.com
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q...
Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZn
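The same request can be built programmatically. This is a sketch with a hypothetical helper name, using only the Python standard library; it constructs the request and leaves sending it to the caller:

```python
from urllib.request import Request

def build_graph_messages_request(access_token: str) -> Request:
    """Build a Microsoft Graph GET request carrying the access token as a Bearer token."""
    return Request(
        "https://graph.microsoft.com/v1.0/me/messages",
        headers={"Authorization": f"Bearer {access_token}"},
    )

# With a real token: urllib.request.urlopen(build_graph_messages_request(token))
```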
## Refresh the access token
-Access_tokens are short lived, and you must refresh them after they expire to continue accessing resources. You can do so by submitting another `POST` request to the `/token` endpoint, this time providing the `refresh_token` instead of the `code`. Refresh tokens are valid for all permissions that your client has already received consent for - thus, a refresh token issued on a request for `scope=mail.read` can be used to request a new access token for `scope=api://contoso.com/api/UseResource`.
+Access tokens are short lived. Refresh them after they expire to continue accessing resources. You can do so by submitting another `POST` request to the `/token` endpoint. Provide the `refresh_token` instead of the `code`. Refresh tokens are valid for all permissions that your client has already received consent for. For example, a refresh token issued on a request for `scope=mail.read` can be used to request a new access token for `scope=api://contoso.com/api/UseResource`.
-Refresh tokens for web apps and native apps do not have specified lifetimes. Typically, the lifetimes of refresh tokens are relatively long. However, in some cases, refresh tokens expire, are revoked, or lack sufficient privileges for the desired action. Your application needs to expect and handle [errors returned by the token issuance endpoint](#error-codes-for-token-endpoint-errors) correctly. Single page apps, however, get a token with a 24-hour lifetime, requiring a new authentication every day. This can be done silently in an iframe when 3rd party cookies are enabled, but must be done in a top-level frame (either full page navigation or a pop-up window) in browsers without 3rd party cookies such as Safari.
+Refresh tokens for web apps and native apps don't have specified lifetimes. Typically, the lifetimes of refresh tokens are relatively long. However, in some cases, refresh tokens expire, are revoked, or lack sufficient privileges for the action. Your application needs to expect and handle [errors returned by the token issuance endpoint](#error-codes-for-token-endpoint-errors). Single page apps get a token with a 24-hour lifetime, requiring a new authentication every day. This action can be done silently in an iframe when third-party cookies are enabled. It must be done in a top-level frame, either full page navigation or a pop-up window, in browsers without third-party cookies, such as Safari.
-Although refresh tokens aren't revoked when used to acquire new access tokens, you are expected to discard the old refresh token. The [OAuth 2.0 spec](https://tools.ietf.org/html/rfc6749#section-6) says: "The authorization server MAY issue a new refresh token, in which case the client MUST discard the old refresh token and replace it with the new refresh token. The authorization server MAY revoke the old refresh token after issuing a new refresh token to the client."
+Refresh tokens aren't revoked when used to acquire new access tokens. You're expected to discard the old refresh token. The [OAuth 2.0 spec](https://tools.ietf.org/html/rfc6749#section-6) says: "The authorization server MAY issue a new refresh token, in which case the client MUST discard the old refresh token and replace it with the new refresh token. The authorization server MAY revoke the old refresh token after issuing a new refresh token to the client."
>[!IMPORTANT]
-> For refresh tokens sent to a redirect URI registered as `spa`, the refresh token will expire after 24 hours. Additional refresh tokens acquired using the initial refresh token will carry over that expiration time, so apps must be prepared to re-run the authorization code flow using an interactive authentication to get a new refresh token every 24 hours. Users do not have to enter their credentials, and will usually not even see any UX, just a reload of your application - but the browser must visit the login page in a top level frame in order to see the login session. This is due to [privacy features in browsers that block 3rd party cookies](reference-third-party-cookies-spas.md).
-
-```HTTP
+> For refresh tokens sent to a redirect URI registered as `spa`, the refresh token expires after 24 hours. Additional refresh tokens acquired using the initial refresh token carry over that expiration time, so apps must be prepared to rerun the authorization code flow using an interactive authentication to get a new refresh token every 24 hours. Users don't have to enter their credentials, and usually don't even see any user experience, just a reload of your application. The browser must visit the login page in a top-level frame to see the login session. This behavior is due to [privacy features in browsers that block third-party cookies](reference-third-party-cookies-spas.md).
+```http
// Line breaks for legibility only

POST /{tenant}/oauth2/v2.0/token HTTP/1.1
client_id=535fb089-9ff3-47b6-9bfb-4f1264799865
| Parameter | Type | Description |
|---|---|---|
-| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. The allowed values are `common`, `organizations`, `consumers`, and tenant identifiers. For more detail, see [protocol basics](active-directory-v2-protocols.md#endpoints). |
+| `tenant` | required | The `{tenant}` value in the path of the request can be used to control who can sign into the application. Valid values are `common`, `organizations`, `consumers`, and tenant identifiers. For more information, see [Endpoints](active-directory-v2-protocols.md#endpoints). |
| `client_id` | required | The **Application (client) ID** that the [Azure portal - App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) experience assigned to your app. |
| `grant_type` | required | Must be `refresh_token` for this leg of the authorization code flow. |
-| `scope` | optional | A space-separated list of scopes. The scopes requested in this leg must be equivalent to or a subset of the scopes requested in the original authorization_code request leg. If the scopes specified in this request span multiple resource server, then the Microsoft identity platform will return a token for the resource specified in the first scope. For a more detailed explanation of scopes, refer to [permissions, consent, and scopes](v2-permissions-and-consent.md). |
-| `refresh_token` | required | The refresh_token that you acquired in the second leg of the flow. |
-| `client_secret` | required for web apps | The application secret that you created in the app registration portal for your app. It should not be used in a native app, because client_secrets can't be reliably stored on devices. It's required for web apps and web APIs, which have the ability to store the client_secret securely on the server side. This secret needs to be URL-Encoded. For more information, see the [URI Generic Syntax specification](https://tools.ietf.org/html/rfc3986#page-12). |
+| `scope` | optional | A space-separated list of scopes. The scopes requested in this leg must be equivalent to or a subset of the scopes requested in the original `authorization_code` request leg. If the scopes specified in this request span multiple resource servers, then the Microsoft identity platform returns a token for the resource specified in the first scope. For more information, see [Permissions and consent in the Microsoft identity platform](v2-permissions-and-consent.md). |
+| `refresh_token` | required | The `refresh_token` that you acquired in the second leg of the flow. |
+| `client_secret` | required for web apps | The application secret that you created in the app registration portal for your app. It shouldn't be used in a native app, because a `client_secret` can't be reliably stored on devices. It's required for web apps and web APIs, which can store the `client_secret` securely on the server side. This secret needs to be URL-Encoded. For more information, see the [URI Generic Syntax specification](https://tools.ietf.org/html/rfc3986#page-12). |
#### Successful response
-A successful token response will look like:
+This example shows a successful token response:
```json
{
A successful token response will look like:
| Parameter | Description |
|---|---|
-| `access_token` | The requested access token. The app can use this token to authenticate to the secured resource, such as a web API. |
-| `token_type` | Indicates the token type value. The only type that Azure AD supports is Bearer |
-| `expires_in` | How long the access token is valid (in seconds). |
-| `scope` | The scopes that the access_token is valid for. |
-| `refresh_token` | A new OAuth 2.0 refresh token. You should replace the old refresh token with this newly acquired refresh token to ensure your refresh tokens remain valid for as long as possible. <br> **Note:** Only provided if `offline_access` scope was requested.|
-| `id_token` | An unsigned JSON Web Token (JWT). The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it should not rely on them for any authorization or security boundaries. For more information about id_tokens, see the [`id_token reference`](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested. |
+| `access_token` | The requested access token. The app can use this token to authenticate to the secured resource, such as a web API. |
+| `token_type` | Indicates the token type value. The only type that Azure AD supports is `Bearer`. |
+| `expires_in` | How long the access token is valid, in seconds. |
+| `scope` | The scopes that the `access_token` is valid for. |
+| `refresh_token` | A new OAuth 2.0 refresh token. Replace the old refresh token with this newly acquired refresh token to ensure your refresh tokens remain valid for as long as possible. <br> **Note:** Only provided if `offline_access` scope was requested.|
+| `id_token` | An unsigned JSON Web Token. The app can decode the segments of this token to request information about the user who signed in. The app can cache the values and display them, but it shouldn't rely on them for any authorization or security boundaries. For more information about `id_token`, see [Microsoft identity platform ID tokens](id-tokens.md). <br> **Note:** Only provided if `openid` scope was requested. |
[!INCLUDE [remind-not-to-validate-access-tokens](includes/remind-not-to-validate-access-tokens.md)]
A successful token response will look like:
| Parameter | Description |
|---|---|
-| `error` | An error code string that can be used to classify types of errors that occur, and can be used to react to errors. |
+| `error` | An error code string that can be used to classify types of errors, and to react to errors. |
| `error_description` | A specific error message that can help a developer identify the root cause of an authentication error. |
| `error_codes` | A list of STS-specific error codes that can help in diagnostics. |
| `timestamp` | The time at which the error occurred. |
A successful token response will look like:
| `correlation_id` | A unique identifier for the request that can help in diagnostics across components. |

For a description of the error codes and the recommended client action, see [Error codes for token endpoint errors](#error-codes-for-token-endpoint-errors).
+## Next steps
+
+- Review the [MSAL JS samples](sample-v2-code.md) to get started coding.
+- Learn about [token exchange scenarios](scenario-token-exchange-saml-oauth.md).
active-directory V2 Protocols Oidc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/v2-protocols-oidc.md
When you have an authorization code and an ID token, you can sign the user in an
### Calling the UserInfo endpoint
-Review the [UserInfo documentation](userinfo.md#calling-the-api) to look over how the call the UserInfo endpoint with this token.
+Review the [UserInfo documentation](userinfo.md#calling-the-api) to look over how to call the UserInfo endpoint with this token.
## Send a sign-out request
active-directory Add User Without Invite https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/add-user-without-invite.md
You can now invite guest users by sending out a [direct link](redemption-experie
Before this new method was available, you could invite guest users without requiring the invitation email by adding an inviter (from your organization or from a partner organization) to the **Guest inviter** directory role, and then having the inviter add guest users to the directory, groups, or applications through the UI or through PowerShell. (If using PowerShell, you can suppress the invitation email altogether.) For example:

1. A user in the host organization (for example, WoodGrove) invites one user from the partner organization (for example, Sam@litware.com) as Guest.
-2. The administrator in the host organization [sets up policies](delegate-invitations.md) that allow Sam to identify and add other users from the partner organization (Litware). (Sam must be added to the **Guest inviter** role.)
+2. The administrator in the host organization [sets up policies](external-collaboration-settings-configure.md) that allow Sam to identify and add other users from the partner organization (Litware). (Sam must be added to the **Guest inviter** role.)
3. Now, Sam can add other users from Litware to the WoodGrove directory, groups, or applications without needing invitations to be redeemed. If Sam has the appropriate enumeration privileges in Litware, it happens automatically. This original method still works. However, there's a small difference in behavior. If you use PowerShell, you'll notice that an invited guest account now has a **PendingAcceptance** status instead of immediately showing **Accepted**. Although the status is pending, the guest user can still sign in and access the app without clicking an email invitation link. The pending status means that the user has not yet gone through the [consent experience](redemption-experience.md#consent-experience-for-the-guest), where they accept the privacy terms of the inviting organization. The guest user sees this consent screen when they sign in for the first time.
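For illustration, the PowerShell option mentioned above for suppressing the invitation email can be sketched with the AzureAD module's `New-AzureADMSInvitation` cmdlet. This is a minimal sketch, not the article's exact script; the redirect URL is a placeholder value:

```powershell
# Sign in with an account that has the Guest inviter role.
Connect-AzureAD

# Create the guest user without sending the invitation email
# (placeholder redirect URL).
New-AzureADMSInvitation `
    -InvitedUserEmailAddress "Sam@litware.com" `
    -InviteRedirectUrl "https://myapps.microsoft.com" `
    -SendInvitationMessage $false
```

Because `-SendInvitationMessage` is `$false`, the guest account is created in a pending state and no email is sent; the guest can still sign in directly as described above.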
If you invite a user to the directory, the guest user must access the resource t
- [What is Azure AD B2B collaboration?](what-is-b2b.md) - [B2B collaboration invitation redemption](redemption-experience.md)-- [Delegate invitations for Azure Active Directory B2B collaboration](delegate-invitations.md)
+- [Delegate invitations for Azure Active Directory B2B collaboration](external-collaboration-settings-configure.md)
- [How do information workers add B2B collaboration users?](add-users-information-worker.md)
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/add-users-administrator.md
After you add a guest user to the directory, you can either send the guest user
## Before you begin
-Make sure your organization's external collaboration settings are configured such that you're allowed to invite guests. By default, all users and admins can invite guests. But your organization's external collaboration policies might be configured to prevent certain types of users or admins from inviting guests. To find out how to view and set these policies, see [Enable B2B external collaboration and manage who can invite guests](delegate-invitations.md).
+Make sure your organization's external collaboration settings are configured such that you're allowed to invite guests. By default, all users and admins can invite guests. But your organization's external collaboration policies might be configured to prevent certain types of users or admins from inviting guests. To find out how to view and set these policies, see [Enable B2B external collaboration and manage who can invite guests](external-collaboration-settings-configure.md).
## Add guest users to the directory
active-directory Allow Deny List https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/allow-deny-list.md
You can use an allow list or a deny list to allow or block invitations to B2B us
- You can create either an allow list or a deny list. You can't set up both types of lists. By default, whatever domains are not in the allow list are on the deny list, and vice versa. - You can create only one policy per organization. You can update the policy to include more domains, or you can delete the policy to create a new one. -- The number of domains you can add to an allow list or deny list is limited only by the size of the policy. This limit applies to the number of characters, so you can have more shorter domains or fewer longer domains. The maximum size of the entire policy is 25 KB (25,000 characters), which includes the allow list or deny list and any other parameters configured for other features.
+- The number of domains you can add to an allow list or deny list is limited only by the size of the policy. This limit applies to the number of characters, so you can have a greater number of shorter domains or a smaller number of longer domains. The maximum size of the entire policy is 25 KB (25,000 characters), which includes the allow list or deny list and any other parameters configured for other features.
- This list works independently from OneDrive for Business and SharePoint Online allow/block lists. If you want to restrict individual file sharing in SharePoint Online, you need to set up an allow or deny list for OneDrive for Business and SharePoint Online. For more information, see [Restricted domains sharing in SharePoint Online and OneDrive for Business](https://support.office.com/article/restricted-domains-sharing-in-sharepoint-online-and-onedrive-for-business-5d7589cd-0997-4a00-a2ba-2320ec49c4e9). - The list does not apply to external users who have already redeemed the invitation. The list will be enforced after the list is set up. If a user invitation is in a pending state, and you set a policy that blocks their domain, the user's attempt to redeem the invitation will fail.
To add a deny list:
![Shows the deny option with added domains](./media/allow-deny-list/DenyListSettings.png)
-6. When you're done, click **Save**.
+6. When you're done, select **Save**.
After you set the policy, if you try to invite a user from a blocked domain, you receive a message saying that the domain of the user is currently blocked by your invitation policy. ### Add an allow list
-This is a more restrictive configuration, where you can set specific domains in the allow list and restrict invitations to any other organizations or domains that aren't mentioned.
+This is a more restrictive configuration, where you can set specific domains in the allow list and restrict invitations to any other organizations or domains that aren't mentioned.
If you want to use an allow list, make sure that you spend time to fully evaluate what your business needs are. If you make this policy too restrictive, your users may choose to send documents over email, or find other non-IT sanctioned ways of collaborating.
To add an allow list:
![Shows the allow option with added domains](./media/allow-deny-list/AllowListSettings.png)
-6. When you're done, click **Save**.
+6. When you're done, select **Save**.
After you set the policy, if you try to invite a user from a domain that's not on the allow list, you receive a message saying that the domain of the user is currently blocked by your invitation policy.
-### Switch from allow to deny list and vice versa
+### Switch from allow list to deny list and vice versa
If you switch from one policy to the other, this discards the existing policy configuration. Make sure to back up details of your configuration before you perform the switch.
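As a minimal sketch (assuming the AzureAD PowerShell module, and that the allow/deny list is stored in the `B2BManagementPolicy` policy type targeted by the removal command shown in this article), you can capture the current policy definition before switching:

```powershell
# Retrieve the existing B2B management policy.
$currentpolicy = Get-AzureADPolicy |
    Where-Object { $_.Type -eq 'B2BManagementPolicy' } |
    Select-Object -First 1

# Save its definition to a file as a backup before switching list types.
$currentpolicy.Definition | Out-File .\b2b-policy-backup.json
```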
Remove-AzureADPolicy -Id $currentpolicy.Id
## Next steps - For an overview of Azure AD B2B, see [What is Azure AD B2B collaboration?](what-is-b2b.md)-- For information about Conditional Access and B2B collaboration, see [Conditional Access for B2B collaboration users](conditional-access.md).
+- For information about Conditional Access and B2B collaboration, see [Conditional Access for B2B collaboration users](authentication-conditional-access.md).
active-directory Authentication Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/authentication-conditional-access.md
+
+ Title: Authentication and Conditional Access for B2B users - Azure AD
+description: Learn how to enforce multi-factor authentication policies for Azure Active Directory B2B users.
+++++ Last updated : 01/31/2022++++++++
+# Authentication and Conditional Access for External Identities
+
+When an external user accesses resources in your organization, the authentication flow is determined by the user's identity provider (an external Azure AD tenant, social identity provider, etc.), Conditional Access policies, and the [cross-tenant access settings](cross-tenant-access-overview.md) configured both in the user's home tenant and the tenant hosting resources.
+
+This article describes the authentication flow for external users who are accessing resources in your organization. Organizations can enforce multiple Conditional Access policies for their external users, which can be enforced at the tenant, app, or individual user level in the same way that they're enabled for full-time employees and members of the organization.
+
+## Authentication flow for external Azure AD users
+
+The following diagram illustrates the authentication flow when an Azure AD organization shares resources with users from other Azure AD organizations. This diagram shows how cross-tenant access settings work with Conditional Access policies, such as multi-factor authentication (MFA), to determine if the user can access resources.
+
+![Diagram illustrating the cross-tenant authentication process](media/authentication-conditional-access/cross-tenant-auth.png)
+
+|Step |Description |
+|||
+|**1** | A user from Fabrikam (the user's *home tenant*) initiates sign-in to a resource in Contoso (the *resource tenant*). |
+|**2** | During sign-in, the Azure AD security token service (STS) evaluates Contoso's Conditional Access policies. It also checks whether the Fabrikam user is allowed access by evaluating cross-tenant access settings (Fabrikam's outbound settings and Contoso's inbound settings). |
+|**3** | Azure AD checks Contoso's inbound trust settings to see if Contoso trusts MFA and device claims (device compliance, hybrid Azure AD joined status) from Fabrikam. If not, skip to step 6. |
+|**4** | If Contoso trusts MFA and device claims from Fabrikam, Azure AD checks the user's credentials for an indication the user has completed MFA. If Contoso trusts device information from Fabrikam, Azure AD uses the device ID to look up the device object in Fabrikam to determine its state (compliant or hybrid Azure AD joined). |
+|**5** | If MFA is required but not completed or if a device ID isn't provided, Azure AD issues MFA and device challenges in the user's home tenant as needed. When MFA and device requirements are satisfied in Fabrikam, the user is allowed access to the resource in Contoso. If the checks can't be satisfied, access is blocked. |
+|**6** | When no trust settings are configured and MFA is required, B2B collaboration users are prompted for MFA, which they need to satisfy in the resource tenant. If device compliance is required, access is blocked. |
+
+For more information, see the [Conditional Access for external users](#conditional-access-for-external-users) section.
+
+## Authentication flow for non-Azure AD external users
+
+When an Azure AD organization shares resources with external users who use an identity provider other than Azure AD, the authentication flow depends on whether the user is authenticating with an identity provider or with email one-time passcode authentication. In either case, the resource tenant identifies which authentication method to use, and then either redirects the user to their identity provider or issues a one-time passcode.
+
+### Example 1: Authentication flow and token for a non-Azure AD external user
+
+The following diagram illustrates the authentication flow when an external user signs in with an account from a non-Azure AD identity provider, such as Google, Facebook, or a federated SAML/WS-Fed identity provider.
+
+![image shows Authentication flow for B2B guest users from an external directory](media/authentication-conditional-access/authentication-flow-b2b-guests.png)
+
+| Step | Description |
+|--|--|
+| **1** | The B2B guest user requests access to a resource. The resource redirects the user to its resource tenant, a trusted IdP.|
+| **2** | The resource tenant identifies the user as external and redirects the user to the B2B guest user's IdP. The user performs primary authentication in the IdP.
+| **3** | Outbound cross-tenant access settings are evaluated. If the user is allowed outbound access, the B2B guest user's IdP issues a token to the user. The user is redirected back to the resource tenant with the token. The resource tenant validates the token and then evaluates the user against its Conditional Access policies. For example, the resource tenant could require the user to perform Azure Active Directory (AD) MFA.
+| **4** | Inbound cross-tenant access settings and Conditional Access policies are evaluated. If all policies are satisfied, the resource tenant issues its own token and redirects the user to its resource.
+
+### Example 2: Authentication flow and token for one-time passcode user
+
+The following diagram illustrates the flow when email one-time passcode authentication is enabled and the external user isn't authenticated through other means, such as Azure AD, Microsoft account (MSA), or social identity provider.
+
+![image shows Authentication flow for B2B guest users with one time passcode](media/authentication-conditional-access/authentication-flow-b2b-guests-otp.png)
+
+| Step | Description |
+|--|--|
+| **1** |The user requests access to a resource in another tenant. The resource redirects the user to its resource tenant, a trusted IdP.|
+| **2** | The resource tenant identifies the user as an [external email one-time passcode (OTP) user](./one-time-passcode.md) and sends an email with the OTP to the user.|
+| **3** | The user retrieves the OTP and submits the code. The resource tenant evaluates the user against its Conditional Access policies.
+| **4** | Once all Conditional Access policies are satisfied, the resource tenant issues a token and redirects the user to its resource. |
+
+## Conditional Access for external users
+
+Organizations can enforce Conditional Access policies for external B2B collaboration users in the same way that they're enabled for full-time employees and members of the organization. This section describes important considerations for applying Conditional Access to users outside of your organization.
+
+### Azure AD cross-tenant trust settings for MFA and device claims
+
+In an Azure AD cross-tenant scenario, the resource organization can create Conditional Access policies that require MFA or device compliance for all guest and external users. Generally, an external user accessing a resource is required to set up their Azure AD MFA with the resource tenant. However, Azure AD now offers the capability to trust MFA, compliant device claims, and [hybrid Azure AD joined device](../conditional-access/howto-conditional-access-policy-compliant-device.md) claims from external Azure AD organizations, making for a more streamlined sign-in experience for the external user. As the resource tenant, you can use cross-tenant access settings to trust the MFA and device claims from external Azure AD tenants. Trust settings can apply to all Azure AD organizations, or just selected Azure AD organizations.
+
+When trust settings are enabled, Azure AD will check a user's credentials during authentication for an MFA claim or a device ID to determine if the policies have already been met in their home tenant. If so, the external user will be granted seamless sign-on to your shared resource. Otherwise, an MFA or device challenge will be initiated in the user's home tenant. If trust settings aren't enabled, or if the user's credentials don't contain the required claims, the external user will be presented with an MFA or device challenge.
+
+For details, see [Configuring cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md). If no trust settings are configured, the flow is the same as the [MFA flow for non-Azure AD external users](#mfa-for-non-azure-ad-external-users).
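+For illustration only, inbound trust for a specific partner tenant can be expressed through the Microsoft Graph cross-tenant access policy. The fragment below is a sketch of the assumed resource shape (placeholder tenant ID; verify the property names against the current Graph reference for `crossTenantAccessPolicyConfigurationPartner`):
+
+```json
+{
+  "tenantId": "00000000-0000-0000-0000-000000000000",
+  "inboundTrust": {
+    "isMfaAccepted": true,
+    "isCompliantDeviceAccepted": true,
+    "isHybridAzureADJoinedDeviceAccepted": true
+  }
+}
+```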
+
+### MFA for non-Azure AD external users
+
+For non-Azure AD external users, the resource tenant is always responsible for MFA. The following is an example of a typical MFA flow. This scenario works for any identity, including a Microsoft Account (MSA) or social ID. This flow also applies for Azure AD external users when you haven't configured trust settings with their home Azure AD organization.
+
+1. An admin or information worker in a company named Fabrikam invites a user from another company named Contoso to use Fabrikam's app.
+
+2. Fabrikam's app is configured to require Azure AD MFA upon access.
+
+3. When the B2B collaboration user from Contoso attempts to access Fabrikam's app, they're asked to complete the Azure AD MFA challenge.
+
+4. The guest user can then set up their Azure AD MFA with Fabrikam and select their preferred verification options.
+
+Fabrikam must have sufficient Azure AD Premium licenses that support Azure AD MFA. The user from Contoso then consumes this license from Fabrikam. See the [billing model for Azure AD external identities](./external-identities-pricing.md) for information about B2B licensing.
+
+>[!NOTE]
+>MFA is completed in the resource tenant to ensure predictability. When the guest user signs in, they'll see the resource tenant sign-in page displayed in the background, and their own home tenant sign-in page and company logo in the foreground.
+
+### Azure AD MFA reset (proof up) for B2B collaboration users
+
+The following PowerShell cmdlets are available to *proof up* or request MFA registration from B2B collaboration users.
+
+1. Connect to Azure AD:
+
+ ```powershell
+ $cred = Get-Credential
+ Connect-MsolService -Credential $cred
+ ```
+
+2. Get all users with proof up methods:
+
+ ```powershell
+ Get-MsolUser | where { $_.StrongAuthenticationMethods} | select UserPrincipalName, @{n="Methods";e={($_.StrongAuthenticationMethods).MethodType}}
+ ```
+
+3. Reset the Azure AD MFA method for a specific user to require the user to set proof up methods again, for example:
+
+ ```powershell
+ Reset-MsolStrongAuthenticationMethodByUpn -UserPrincipalName gsamoogle_gmail.com#EXT#@WoodGroveAzureAD.onmicrosoft.com
+ ```
+
+### Device-based Conditional Access
+
+In Conditional Access, there's an option to require a user's [device to be compliant or hybrid Azure AD joined](../conditional-access/howto-conditional-access-policy-compliant-device.md). Because devices can only be managed by the home tenant, additional considerations must be made for external users. As the resource tenant, you can use cross-tenant access settings to trust device (compliant and hybrid Azure AD joined) claims.
+
+>[!Important]
+>
+>- Unless you're willing to trust device (compliant or hybrid Azure AD joined) claims for external users, we don't recommend a Conditional Access policy requiring a managed device.
+>- When guest users try to access a resource protected by Conditional Access, they can't register and enroll devices in your tenant and will be blocked from accessing your resources.
+
+### Mobile application management policies
+
+The Conditional Access grant controls, such as **Require approved client apps** and **Require app protection policies**, need the device to be registered in the resource tenant. These controls can only be applied to [iOS and Android devices](../conditional-access/concept-conditional-access-conditions.md#device-platforms). However, neither of these controls can be applied to B2B guest users, since the user's device can only be managed by their home tenant.
+
+>[!NOTE]
+>We don't recommend requiring an app protection policy for external users.
+
+### Location-based Conditional Access
+
+The [location-based policy](../conditional-access/concept-conditional-access-conditions.md#locations) based on IP ranges can be enforced if the inviting organization can create a trusted IP address range that defines their partner organizations.
+
+Policies can also be enforced based on **geographical locations**.
+
+### Risk-based Conditional Access
+
+The [Sign-in risk policy](../conditional-access/concept-conditional-access-conditions.md#sign-in-risk) is enforced if the B2B guest user satisfies the grant control. For example, an organization could require Azure AD Multi-Factor Authentication for medium or high sign-in risk. However, if a user hasn't previously registered for Azure AD Multi-Factor Authentication in the resource tenant, the user will be blocked. This is done to prevent malicious users from registering their own Azure AD Multi-Factor Authentication credentials in the event they compromise a legitimate user's password.
+
+The [User-risk policy](../conditional-access/concept-conditional-access-conditions.md#user-risk), however, can't be resolved in the resource tenant. For example, if you require a password change for high-risk guest users, they'll be blocked because of the inability to reset passwords in the resource directory.
+
+### Conditional Access client apps condition
+
+[Client apps conditions](../conditional-access/concept-conditional-access-conditions.md#client-apps) behave the same for B2B guest users as they do for any other type of user. For example, you could prevent guest users from using legacy authentication protocols.
+
+### Conditional Access session controls
+
+[Session controls](../conditional-access/concept-conditional-access-session.md) behave the same for B2B guest users as they do for any other type of user.
+
+## Next steps
+
+For more information, see the following articles on Azure AD B2B collaboration:
+
+- [What is Azure AD B2B collaboration?](./what-is-b2b.md)
+- [Identity Protection and B2B users](../identity-protection/concept-identity-protection-b2b.md)
+- [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/)
+- [Frequently Asked Questions (FAQs)](./faq.yml)
active-directory B2b Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/b2b-fundamentals.md
Previously updated : 10/21/2021 Last updated : 01/31/2022 -
This article contains recommendations and best practices for business-to-business (B2B) collaboration in Azure Active Directory (Azure AD). > [!IMPORTANT]
-> **Starting November 1, 2021**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. To minimize disruptions during the holidays and deployment lockdowns, the majority of tenants will see changes rolled out in January 2022. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
-
+> **As of November 1, 2021**, we began rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
## B2B recommendations+ | Recommendation | Comments | | | |
+| Carefully consider how you want to collaborate with external users and organizations | Azure AD gives you a flexible set of controls for managing collaboration with external users and organizations. You can allow or block all collaboration, or configure collaboration only for specific organizations, users, and apps. Before configuring settings for cross-tenant access and external collaboration, take a careful inventory of the organizations you work and partner with. Then determine if you want to enable [B2B collaboration](what-is-b2b.md) with other Azure AD tenants, and how you want to manage [B2B collaboration invitations](external-collaboration-settings-configure.md). |
| For an optimal sign-in experience, federate with identity providers | Whenever possible, federate directly with identity providers to allow invited users to sign in to your shared apps and resources without having to create Microsoft Accounts (MSAs) or Azure AD accounts. You can use the [Google federation feature](google-federation.md) to allow B2B guest users to sign in with their Google accounts. Or, you can use the [SAML/WS-Fed identity provider (preview) feature](direct-federation.md) to set up federation with any organization whose identity provider (IdP) supports the SAML 2.0 or WS-Fed protocol. | | Use the Email one-time passcode feature for B2B guests who can't authenticate by other means | The [Email one-time passcode](one-time-passcode.md) feature authenticates B2B guest users when they can't be authenticated through other means like Azure AD, a Microsoft account (MSA), or Google federation. When the guest user redeems an invitation or accesses a shared resource, they can request a temporary code, which is sent to their email address. Then they enter this code to continue signing in. | | Add company branding to your sign-in page | You can customize your sign-in page so it's more intuitive for your B2B guest users. See how to [add company branding to sign in and Access Panel pages](../fundamentals/customize-branding.md). | | Add your privacy statement to the B2B guest user redemption experience | You can add the URL of your organization's privacy statement to the first time invitation redemption process so that an invited user must consent to your privacy terms to continue. See [How-to: Add your organization's privacy info in Azure Active Directory](../fundamentals/active-directory-properties-area.md). | | Use the bulk invite (preview) feature to invite multiple B2B guest users at the same time | Invite multiple guest users to your organization at the same time by using the bulk invite preview feature in the Azure portal. 
This feature lets you upload a CSV file to create B2B guest users and send invitations in bulk. See [Tutorial for bulk inviting B2B users](tutorial-bulk-invite.md). |
-| Enforce Conditional Access policies for Azure Active Directory Multi-Factor Authentication (MFA) | We recommend enforcing MFA policies on the apps you want to share with partner B2B users. This way, MFA will be consistently enforced on the apps in your tenant regardless of whether the partner organization is using MFA. See [Conditional Access for B2B collaboration users](conditional-access.md). |
-| If you're enforcing device-based Conditional Access policies, use exclusion lists to allow access to B2B users | If device-based Conditional Access policies are enabled in your organization, B2B guest user devices will be blocked because they're not managed by your organization. You can create exclusion lists containing specific partner users to exclude them from the device-based Conditional Access policy. See [Conditional Access for B2B collaboration users](conditional-access.md). |
+| Enforce Conditional Access policies for Azure Active Directory Multi-Factor Authentication (MFA) | We recommend enforcing MFA policies on the apps you want to share with partner B2B users. This way, MFA will be consistently enforced on the apps in your tenant regardless of whether the partner organization is using MFA. See [Conditional Access for B2B collaboration users](authentication-conditional-access.md). |
+| If you're enforcing device-based Conditional Access policies, use exclusion lists to allow access to B2B users | If device-based Conditional Access policies are enabled in your organization, B2B guest user devices will be blocked because they're not managed by your organization. You can create exclusion lists containing specific partner users to exclude them from the device-based Conditional Access policy. See [Conditional Access for B2B collaboration users](authentication-conditional-access.md). |
| Use a tenant-specific URL when providing direct links to your B2B guest users | As an alternative to the invitation email, you can give a guest a direct link to your app or portal. This direct link must be tenant-specific, meaning it must include a tenant ID or verified domain so the guest can be authenticated in your tenant, where the shared app is located. See [Redemption experience for the guest user](redemption-experience.md). | | When developing an app, use UserType to determine guest user experience | If you're developing an application and you want to provide different experiences for tenant users and guest users, use the UserType property. The UserType claim isn't currently included in the token. Applications should use the Microsoft Graph API to query the directory for the user to get their UserType. | | Change the UserType property *only* if the user's relationship to the organization changes | Although it's possible to use PowerShell to convert the UserType property for a user from Member to Guest (and vice-versa), you should change this property only if the relationship of the user to your organization changes. See [Properties of a B2B guest user](user-properties.md).|
+| Find out if your environment will be affected by Azure AD directory limits | Azure AD B2B is subject to Azure AD service directory limits. For details about the number of directories a user can create and the number of directories to which a user or guest user can belong, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md).|
## Next steps
-[Manage B2B sharing](delegate-invitations.md)
+[Manage B2B sharing](external-collaboration-settings-configure.md)
active-directory B2b Government National Clouds https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/b2b-government-national-clouds.md
+
+ Title: Azure AD B2B in government and national clouds - Azure Active Directory
+description: Learn what features are available in Azure Active Directory B2B collaboration in US Government and national clouds
+++++ Last updated : 01/31/2022++++++++
+# Azure AD B2B in government and national clouds
+
+## National clouds
+[National clouds](../develop/authentication-national-cloud.md) are physically isolated instances of Azure. B2B collaboration is not supported across national cloud boundaries. For example, if your Azure tenant is in the public, global cloud, you can't invite a user whose account is in a national cloud. To collaborate with the user, ask them for another email address or create a member user account for them in your directory.
+
+## Azure US Government clouds
+Within the Azure US Government cloud, B2B collaboration is supported between tenants that are both within Azure US Government cloud and that both support B2B collaboration. Azure US Government tenants that support B2B collaboration can also collaborate with social users using Microsoft, Google accounts, or email one-time passcode accounts. If you invite a user outside of these groups (for example, if the user is in a tenant that isn't part of the Azure US Government cloud or doesn't yet support B2B collaboration), the invitation will fail or the user won't be able to redeem the invitation. For Microsoft accounts (MSAs), there are known limitations with accessing the Azure portal: newly invited MSA guests are unable to redeem direct link invitations to the Azure portal, and existing MSA guests are unable to sign in to the Azure portal. For details about other limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2).
+
+### How can I tell if B2B collaboration is available in my Azure US Government tenant?
+To find out if your Azure US Government cloud tenant supports B2B collaboration, do the following:
+
+1. In a browser, go to the following URL, substituting your tenant name for *&lt;tenantname&gt;*:
+
+ `https://login.microsoftonline.com/<tenantname>/v2.0/.well-known/openid-configuration`
+
+2. Find `"tenant_region_scope"` in the JSON response:
+
+   - If `"tenant_region_scope":"USGOV"` appears, B2B is supported.
+   - If `"tenant_region_scope":"USG"` appears, B2B is not supported.
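
The check in step 2 can also be scripted. The following is an illustrative sketch (not from the article): the helper name and the abridged sample JSON are assumptions, and Python is used in place of a browser or HTTP client for clarity.

```python
import json

# Hypothetical helper: decide B2B support from the openid-configuration JSON body.
def supports_b2b(openid_config_json: str) -> bool:
    config = json.loads(openid_config_json)
    # Per the values above: "USGOV" means supported, "USG" means not supported.
    return config.get("tenant_region_scope") == "USGOV"

# Abridged sample response body, made up for illustration.
sample = '{"issuer": "https://login.microsoftonline.us/{tenantid}/v2.0", "tenant_region_scope": "USGOV"}'
print(supports_b2b(sample))  # True
```

In practice you would fetch the JSON from the well-known endpoint shown above before running the check.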
+
+## Next steps
+
+See the following articles on Azure AD B2B collaboration:
+
+- [What is Azure AD B2B collaboration?](what-is-b2b.md)
+- [Delegate B2B collaboration invitations](external-collaboration-settings-configure.md)
active-directory Compare With B2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/compare-with-b2c.md
-- Title: Compare External Identities - Azure Active Directory | Microsoft Docs
-description: Azure AD External Identities allow people outside your organization to access your apps and resources using their own identity. Compare solutions for External Identities, including Azure Active Directory B2B collaboration and Azure AD B2C.
----- Previously updated : 07/13/2021---------
-# What are External Identities in Azure Active Directory?
-
-With External Identities in Azure AD, you can allow people outside your organization to access your apps and resources, while letting them sign in using whatever identity they prefer. Your partners, distributors, suppliers, vendors, and other guest users can "bring their own identities." Whether they have a corporate or government-issued digital identity, or an unmanaged social identity like Google or Facebook, they can use their own credentials to sign in. The external user's identity provider manages their identity, and you manage access to your apps with Azure AD to keep your resources protected.
-
-## External Identities scenarios
-
-Azure AD External Identities focuses less on a user's relationship to your organization and more on how the user wants to sign in to your apps and resources. Within this framework, Azure AD supports a variety of scenarios from business-to-business (B2B) collaboration to access management for consumer/customer- or citizen-facing applications (business-to-customer, or B2C).
-- **Share your apps and resources with external users (B2B collaboration)**. Invite external users into your own tenant as "guest" users that you can assign permissions to (for authorization) while letting them use their existing credentials (for authentication). Users sign in to the shared resources using a simple invitation and redemption process with their work, school, or other email account. You can also use [Azure AD entitlement management](../governance/entitlement-management-overview.md) to configure policies that [manage access for external users](../governance/entitlement-management-external-users.md#how-access-works-for-external-users). And now with the availability of [self-service sign-up user flows](self-service-sign-up-overview.md), you can allow external users to sign up for applications themselves. The experience can be customized to allow sign-up with a work, school, or social identity (like Google or Facebook). You can also collect information about the user during the sign-up process. For more information, see the [Azure AD B2B documentation](index.yml).
-
-- **Build user journeys with a white-label identity management solution for consumer- and customer-facing apps (Azure AD B2C)**. If you're a business or developer creating customer-facing apps, you can scale to millions of consumers, customers, or citizens by using Azure AD B2C. Developers can use Azure AD as the full-featured Customer Identity and Access Management (CIAM) system for their applications. Customers can sign in with an identity they already have established (like Facebook or Gmail). With Azure AD B2C, you can completely customize and control how customers sign up, sign in, and manage their profiles when using your applications. For more information, see the [Azure AD B2C documentation](../../active-directory-b2c/index.yml).
-
-## Compare External Identities solutions
-
-The following table gives a detailed comparison of the scenarios you can enable with Azure AD External Identities.
-
-| | External user collaboration (B2B) | Access to consumer/customer-facing apps (B2C) |
-| - | | |
-| **Primary scenario** | Collaboration using Microsoft applications (Microsoft 365, Teams, etc.) or your own applications (SaaS apps, custom-developed apps, etc.). | Identity and access management for modern SaaS or custom-developed applications (not first-party Microsoft apps). |
-| **Intended for** | Collaborating with business partners from external organizations like suppliers, partners, vendors. Users appear as guest users in your directory. These users may or may not have managed IT. | Customers of your product. These users are managed in a separate Azure AD directory. |
-| **Identity providers supported** | External users can collaborate using work accounts, school accounts, any email address, SAML and WS-Fed based identity providers, Gmail, and Facebook. | Consumer users with local application accounts (any email address or user name), various supported social identities, and users with corporate and government-issued identities via SAML/WS-Fed based identity provider federation. |
-| **External user management** | External users are managed in the same directory as employees, but are typically annotated as guest users. Guest users can be managed the same way as employees, added to the same groups, and so on. | External users are managed in the Azure AD B2C directory. They're managed separately from the organization's employee and partner directory (if any). |
-| **Single sign-on (SSO)** | SSO to all Azure AD-connected apps is supported. For example, you can provide access to Microsoft 365 or on-premises apps, and to other SaaS apps such as Salesforce or Workday. | SSO to customer owned apps within the Azure AD B2C tenants is supported. SSO to Microsoft 365 or to other Microsoft SaaS apps isn't supported. |
-| **Security policy and compliance** | Managed by the host/inviting organization (for example, with [Conditional Access policies](conditional-access.md)). | Managed by the organization via Conditional Access and Identity Protection. |
-| **Branding** | Host/inviting organization's brand is used. | Fully customizable branding per application or organization. |
-| **Billing model** | [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) based on monthly active users (MAU). <br>(See also: [B2B setup details](external-identities-pricing.md)) | [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) based on monthly active users (MAU). <br>(See also: [B2C setup details](../../active-directory-b2c/billing.md)) |
-| **More information** | [Blog post](https://blogs.technet.microsoft.com/enterprisemobility/2017/02/01/azure-ad-b2b-new-updates-make-cross-business-collab-easy/), [Documentation](what-is-b2b.md) | [Supported Azure AD features](../../active-directory-b2c/supported-azure-ad-features.md), [Product page](https://azure.microsoft.com/services/active-directory-b2c/), [Documentation](../../active-directory-b2c/index.yml) |
-
-Secure and manage customers and partners beyond your organizational boundaries with Azure AD External Identities.
-
-## About multitenant applications
-
-If you're providing an app as a service and you don't want to manage your customers' user accounts, a multitenant app is likely the right choice for you. When you develop applications intended for other Azure AD tenants, you can target users from a single organization (single tenant) or users from any organization that already has an Azure AD tenant (multitenant applications). App registrations in Azure AD are single tenant by default, but you can make your registration multitenant. You register the multitenant application once in your own Azure AD tenant; after that, any Azure AD user from any organization can use the application without additional work on your part. For more information, see [Manage identity in multitenant applications](/azure/architecture/multitenant-identity/), [How-to Guide](../develop/howto-convert-app-to-be-multi-tenant.md).
-
-## Next steps
-- [What is Azure AD B2B collaboration?](what-is-b2b.md)
-- [About Azure AD B2C](../../active-directory-b2c/overview.md)
active-directory Conditional Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/conditional-access.md
- Title: Conditional Access for B2B collaboration users - Azure AD
-description: Learn how to enforce multi-factor authentication policies for Azure Active Directory B2B users.
----- Previously updated : 11/30/2021---------
-# Conditional Access for B2B collaboration users
-
-This article describes how organizations can scope Conditional Access (CA) policies for B2B guest users to access their resources.
->[!NOTE]
->The authentication and authorization flow for guest users differs slightly from the flow for the identity provider's (IdP's) existing users.
-
-## Authentication flow for B2B guest users from an external directory
-
-The following diagram illustrates the flow:
-![image shows Authentication flow for B2B guest users from an external directory](./media/conditional-access-b2b/authentication-flow-b2b-guests.png)
-
-| Step | Description |
-|--|--|
-| 1. | The B2B guest user requests access to a resource. The resource redirects the user to its resource tenant, a trusted IdP.|
-| 2. | The resource tenant identifies the user as external and redirects the user to the B2B guest user's IdP. The user performs primary authentication in the IdP.
-| 3. | The B2B guest user's IdP issues a token to the user. The user is redirected back to the resource tenant with the token. The resource tenant validates the token and then evaluates the user against its CA policies. For example, the resource tenant could require the user to perform Azure Active Directory (AD) Multi-Factor Authentication.
-| 4. | Once all resource tenant CA policies are satisfied, the resource tenant issues its own token and redirects the user to its resource.
-
-## Authentication flow for B2B guest users with one time passcode
-
-The following diagram illustrates the flow:
-![image shows Authentication flow for B2B guest users with one time passcode](./media/conditional-access-b2b/authentication-flow-b2b-guests-otp.png)
-
-| Step | Description |
-|--|--|
-| 1. |The user requests access to a resource in another tenant. The resource redirects the user to its resource tenant, a trusted IdP.|
-| 2. | The resource tenant identifies the user as an [external email one-time passcode (OTP) user](./one-time-passcode.md) and sends an email with the OTP to the user.|
-| 3. | The user retrieves the OTP and submits the code. The resource tenant evaluates the user against its CA policies.
-| 4. | Once all CA policies are satisfied, the resource tenant issues a token and redirects the user to its resource. |
-
->[!NOTE]
->If the user is from an external resource tenant, it is not possible for the B2B guest user's IdP CA policies to also be evaluated. As of today, only the resource tenant's CA policies apply to its guests.
-
-## Azure AD Multi-Factor Authentication for B2B users
-
-Organizations can enforce multiple Azure AD Multi-Factor Authentication policies for their B2B guest users. These policies can be enforced at the tenant, app, or individual user level in the same way that they're enabled for full-time employees and members of the organization.
-The resource tenant is always responsible for Azure AD Multi-Factor Authentication for users, even if the guest user's organization has Multi-Factor Authentication capabilities. Here's an example:
-
-1. An admin or information worker in a company named Fabrikam invites a user from another company named Contoso to use their application Woodgrove.
-
-2. The Woodgrove app in Fabrikam is configured to require Azure AD Multi-Factor Authentication on access.
-
-3. When the B2B guest user from Contoso attempts to access Woodgrove in the Fabrikam tenant, they're asked to complete the Azure AD Multi-Factor Authentication challenge.
-
-4. The guest user can then set up their Azure AD Multi-Factor Authentication with Fabrikam and select their verification options.
-
-5. This scenario works for any identity: an Azure AD account or a personal Microsoft account (MSA). For example, the user in Contoso might authenticate using a social ID.
-
-6. Fabrikam must have sufficient premium Azure AD licenses that support Azure AD Multi-Factor Authentication. The user from Contoso then consumes this license from Fabrikam. See [billing model for Azure AD external identities](./external-identities-pricing.md) for information on the B2B licensing.
-
->[!NOTE]
->Azure AD Multi-Factor Authentication is done in the resource tenant to ensure predictability. When the guest user signs in, they'll see the resource tenant sign-in page displayed in the background, and their own home tenant sign-in page and company logo in the foreground.
-
-### Azure AD Multi-Factor Authentication reset for B2B users
-
-The following PowerShell cmdlets are available to proof up B2B guest users:
-
-1. Connect to Azure AD
-
- ```
- $cred = Get-Credential
- Connect-MsolService -Credential $cred
- ```
-2. Get all users with proof up methods
-
- ```
- Get-MsolUser | where { $_.StrongAuthenticationMethods} | select UserPrincipalName, @{n="Methods";e={($_.StrongAuthenticationMethods).MethodType}}
- ```
-
-3. Reset the Azure AD Multi-Factor Authentication method for a specific user to require the B2B collaboration user to set proof-up methods again.
- Here is an example:
-
- ```
-    Reset-MsolStrongAuthenticationMethodByUpn -UserPrincipalName gsamoogle_gmail.com#EXT#@WoodGroveAzureAD.onmicrosoft.com
- ```
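
The guest UPN in the example above follows a recognizable pattern. The sketch below is an illustration of that naming convention only (the helper name is made up, and this is not an official API): the `@` in the invited email becomes `_`, and `#EXT#@` plus the host tenant's domain is appended.

```python
# Illustrative sketch of the B2B guest UPN pattern seen in the example above.
def guest_upn(email: str, host_tenant: str) -> str:
    # Replace '@' in the invited email with '_', then append the #EXT# marker
    # and the host tenant's domain.
    return email.replace("@", "_") + "#EXT#@" + host_tenant

print(guest_upn("gsamoogle@gmail.com", "WoodGroveAzureAD.onmicrosoft.com"))
# gsamoogle_gmail.com#EXT#@WoodGroveAzureAD.onmicrosoft.com
```

This is useful when scripting resets for many guests, since the UPN you pass to the cmdlet is derived from the invited email address.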
-
-## Conditional Access for B2B users
-
-There are various factors that influence CA policies for B2B guest users.
-
-### Device-based Conditional Access
-
-In CA, there's an option to require a user's [device to be Compliant or Hybrid Azure AD joined](../conditional-access/concept-conditional-access-conditions.md#device-state-preview). B2B guest users can only satisfy compliance if the resource tenant can manage their device. Devices cannot be managed by more than one organization at a time. B2B guest users can't satisfy the Hybrid Azure AD join because they don't have an on-premises AD account.
-
->[!Note]
->- It is not recommended to require a managed device for external users.
->- When guest users try to access a resource protected by Conditional Access, they'll no longer be asked to re-register their devices in your tenant. Previously, guest users would be able to start the re-registration process. However, this would remove their existing device registration, and they'd be unable to complete registration. Now, they'll see a Conditional Access blocking page to prevent them from trying to re-register their devices.
-
-### Mobile application management policies
-
-The CA grant controls such as **Require approved client apps** and **Require app protection policies** need the device to be registered in the tenant. These controls can only be applied to [iOS and Android devices](../conditional-access/concept-conditional-access-conditions.md#device-platforms). However, neither of these controls can be applied to B2B guest users if the user's device is already being managed by another organization. A mobile device cannot be registered in more than one tenant at a time. If the mobile device is managed by another organization, the user will be blocked.
-
->[!NOTE]
->It is not recommended to require an app protection policy for external users.
-
-### Location-based Conditional Access
-
-The [location-based policy](../conditional-access/concept-conditional-access-conditions.md#locations) based on IP ranges can be enforced if the inviting organization can create a trusted IP address range that defines their partner organizations.
-
-Policies can also be enforced based on **geographical locations**.
-
-### Risk-based Conditional Access
-
-The [Sign-in risk policy](../conditional-access/concept-conditional-access-conditions.md#sign-in-risk) is enforced if the B2B guest user satisfies the grant control. For example, an organization could require Azure AD Multi-Factor Authentication for medium or high sign-in risk. However, if a user hasn't previously registered for Azure AD Multi-Factor Authentication in the resource tenant, the user will be blocked. This is done to prevent malicious users from registering their own Azure AD Multi-Factor Authentication credentials in the event they compromise a legitimate user's password.
-
-The [User-risk policy](../conditional-access/concept-conditional-access-conditions.md#user-risk) however cannot be resolved in the resource tenant. For example, if you require a password change for high-risk guest users, they'll be blocked because of the inability to reset passwords in the resource directory.
-
-### Conditional Access client apps condition
-
-[Client apps conditions](../conditional-access/concept-conditional-access-conditions.md#client-apps) behave the same for B2B guest users as they do for any other type of user. For example, you could prevent guest users from using legacy authentication protocols.
-
-### Conditional Access session controls
-
-[Session controls](../conditional-access/concept-conditional-access-session.md) behave the same for B2B guest users as they do for any other type of user.
-
-## Next steps
-
-For more information, see the following articles on Azure AD B2B collaboration:
-- [What is Azure AD B2B collaboration?](./what-is-b2b.md)
-- [Identity Protection and B2B users](../identity-protection/concept-identity-protection-b2b.md)
-- [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/)
-- [Frequently Asked Questions (FAQs)](./faq.yml)
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/cross-tenant-access-overview.md
+
+ Title: Cross-tenant access overview - Azure AD
+description: Get an overview of cross-tenant access in Azure AD External Identities. Learn how to manage your B2B collaboration with other Azure AD organizations through this overview of cross-tenant access settings.
++++ Last updated : 01/31/2022++++++++
+# Overview: Cross-tenant access with Azure AD External Identities (Preview)
+
+> [!NOTE]
+> Cross-tenant access settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Azure AD organizations can use External Identities cross-tenant access settings to manage how they collaborate with other Azure AD organizations through B2B collaboration. [Cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) give you granular control over how external Azure AD organizations collaborate with you (inbound access) and how your users collaborate with external Azure AD organizations (outbound access). These settings also let you trust multi-factor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations.
+
+This article describes cross-tenant access settings that are used to manage B2B collaboration with external Azure AD organizations. For B2B collaboration with non-Azure AD identities (for example, social identities or non-IT managed external accounts), use external collaboration settings. External collaboration settings include options for restricting guest user access, specifying who can invite guests, and allowing or blocking domains.
+
+![Overview diagram of cross-tenant access settings](media/cross-tenant-access-overview/cross-tenant-access-settings-overview.png)
+
+## Manage external access with inbound and outbound settings
+
+B2B collaboration is enabled by default, but comprehensive admin settings let you control your B2B collaboration with external partners and organizations.
+
+- **Outbound access settings** control whether your users can access resources in an external organization. You can apply these settings to everyone, or you can specify individual users, groups, and applications.
+
+- **Inbound access settings** control whether users from external Azure AD organizations can access resources in your organization. You can apply these settings to everyone, or you can specify individual users, groups, and applications.
+
+- **Trust settings** (inbound) determine whether your Conditional Access policies will trust the multi-factor authentication (MFA), compliant device, and [hybrid Azure AD joined device](../devices/concept-azure-ad-join-hybrid.md) claims from an external organization if their users have already satisfied these requirements in their home tenants. For example, when you configure your trust settings to trust MFA, your MFA policies are still applied to external users, but users who have already completed MFA in their home tenants won't have to complete MFA again in your tenant.
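
The trust-settings behavior described above can be sketched as a small decision function. This is a hypothetical model of the evaluation, not the actual Conditional Access engine; the function and parameter names are assumptions made for illustration.

```python
# Sketch (assumption): whether an external user gets a second MFA prompt.
def mfa_required(policy_requires_mfa: bool, trust_home_mfa: bool, home_mfa_claim: bool) -> bool:
    if not policy_requires_mfa:
        return False
    # If the resource tenant trusts MFA claims from the user's home tenant and
    # the user already satisfied MFA there, no second prompt is needed.
    return not (trust_home_mfa and home_mfa_claim)

print(mfa_required(True, True, True))   # False: home-tenant MFA is trusted
print(mfa_required(True, False, True))  # True: claim present but not trusted
```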
+
+## Default settings
+
+The default cross-tenant access settings apply to all Azure AD organizations external to your tenant, except those for which you've configured organizational settings. You can change your default settings, but the initial default settings for B2B collaboration are as follows:
+
+- All your internal users are enabled for B2B collaboration by default. This means your users can invite external guests to access your resources and they can be invited to external organizations as guests. MFA and device claims from other Azure AD organizations aren't trusted.
+
+- No organizations are added to your Organizational settings by default. This means all external Azure AD organizations are enabled for B2B collaboration with your organization.
+
+## Organizational settings
+
+You can configure organization-specific settings by adding an organization and modifying the inbound and outbound settings for that organization. Organizational settings take precedence over default settings.
+
+- For B2B collaboration with other Azure AD organizations, you can use cross-tenant access settings to manage inbound and outbound B2B collaboration and scope access to specific users, groups, and applications. You can set a default configuration that applies to all external organizations, and then create individual, organization-specific settings as needed. Using cross-tenant access settings, you can also trust multi-factor authentication (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+
+- You can use external collaboration settings to limit who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory.
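
The precedence rule (organizational settings override defaults) can be sketched as follows. The dictionary shapes, setting names, and tenant domains below are hypothetical, chosen only to illustrate the fallback behavior.

```python
# Sketch (assumption): organizational settings take precedence over defaults.
defaults = {"inbound_b2b": "allow", "outbound_b2b": "allow"}
org_settings = {"fabrikam.onmicrosoft.com": {"inbound_b2b": "block"}}

def effective_setting(tenant: str, key: str) -> str:
    # Fall back to the default only when no org-specific value exists.
    return org_settings.get(tenant, {}).get(key, defaults[key])

print(effective_setting("fabrikam.onmicrosoft.com", "inbound_b2b"))  # block
print(effective_setting("contoso.onmicrosoft.com", "inbound_b2b"))   # allow
```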
+
+## Important considerations
+
+> [!CAUTION]
+> Changing the default inbound or outbound settings to Block access could block existing business-critical access to apps in your organization or partner organizations. Be sure to use the tools described in this article and consult with your business stakeholders to identify the required access.
+
+- Cross-tenant access settings are used to manage B2B collaboration with other Azure AD organizations. For non-Azure AD identities (for example, social identities or non-IT managed external accounts), use [external collaboration settings](external-collaboration-settings-configure.md). External collaboration settings include options for restricting guest user access, specifying who can invite guests, and allowing or blocking domains.
+
+- If you want to apply access settings to specific users, groups, or applications in an external organization, you'll need to contact the organization for information before configuring your settings. Obtain their user object IDs, group object IDs, or application IDs (*client app IDs* or *resource app IDs*) so you can target your settings correctly.
+
+ > [!TIP]
+ > You might be able to find the application IDs for apps in external organizations by checking your sign-in logs. See the [Identify inbound and outbound sign-ins](#identify-inbound-and-outbound-sign-ins) section.
+
+- The access settings you configure for users and groups must match the access settings for applications. Conflicting settings aren't allowed, and you'll see warning messages if you try to configure them.
+
+ - **Example 1**: If you block inbound B2B collaboration for all external users and groups, access to all your applications must also be blocked.
+
+   - **Example 2**: If you allow outbound B2B collaboration for all your users (or specific users or groups), you'll be prevented from blocking all access to external applications; access to at least one application must be allowed.
+
+- If you block access to all apps by default, users will be unable to read emails encrypted with Microsoft Rights Management Service (also known as Office 365 Message Encryption or OME). To avoid this issue, we recommend configuring your outbound settings to allow your users to access this app ID: 00000012-0000-0000-c000-000000000000. If this is the only application you allow, access to all other apps will be blocked by default.
+
+- To configure cross-tenant access settings in the Azure portal, you'll need an account with a Global administrator or Security administrator role.
+
+- To configure trust settings or apply access settings to specific users, groups, or applications, you'll need an Azure AD Premium P1 license.
+
+## Identify inbound and outbound sign-ins
+
+Several tools are available to help you identify the access your users and partners need before you set inbound and outbound access settings. To ensure you don't remove access that your users and partners need, you can examine current sign-in behavior. Taking this preliminary step will help prevent loss of desired access for your end users and partner users. However, in some cases these logs are only retained for 30 days, so we strongly recommend you speak with your business stakeholders to ensure required access isn't lost.
+
+### Sign-In Logs
+
+To determine your users' access to external Azure AD organizations in the last 30 days, run the following PowerShell script:
+
+```powershell
+Get-MgAuditLogsSignIn `
+-Filter "ResourceTenantID ne 'your tenant id'" `
+-All:$True | `
+group ResourceTenantId,AppDisplayName,UserPrincipalName | `
+select count, @{n='Ext TenantID/App User Pair';e={$_.name}}
+```
+
+The output is a list of outbound sign-ins initiated by your users to apps in external tenants, for example:
+
+```powershell
+Count Ext TenantID/App User Pair
+-- --
+ 6 45fc4ed2-8f2b-42c1-b98c-b254d552f4a7, ADIbizaUX, a@b.com
+ 6 45fc4ed2-8f2b-42c1-b98c-b254d552f4a7, Azure Portal, a@b.com
+ 6 45fc4ed2-8f2b-42c1-b98c-b254d552f4a7, Access Panel, a@b.com
+ 6 45fc4ed2-8f2b-42c1-b98c-b254d552f4a7, MS-PIM, a@b.com
+ 6 45fc4ed2-8f2b-42c1-b98c-b254d552f4a7, AAD ID Gov, a@b.com
+ 6 45fc4ed2-8f2b-42c1-b98c-b254d552f4a7, Access Panel, a@b.com
+```
+
+For the most up-to-date PowerShell script, see the [cross-tenant user sign-in activity script](https://aka.ms/cross-tenant-signins-ps).
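
For readers who export sign-in logs elsewhere, the same grouping the PowerShell script performs can be sketched language-agnostically; here in Python, with made-up records whose field names mirror the output above.

```python
from collections import Counter

# Sketch: count sign-ins by (resource tenant, app, user), as the script above does.
signins = [
    {"ResourceTenantId": "45fc4ed2", "AppDisplayName": "Azure Portal", "UserPrincipalName": "a@b.com"},
    {"ResourceTenantId": "45fc4ed2", "AppDisplayName": "Azure Portal", "UserPrincipalName": "a@b.com"},
    {"ResourceTenantId": "45fc4ed2", "AppDisplayName": "MS-PIM", "UserPrincipalName": "a@b.com"},
]
counts = Counter(
    (s["ResourceTenantId"], s["AppDisplayName"], s["UserPrincipalName"]) for s in signins
)
for key, count in counts.most_common():
    print(count, ", ".join(key))
```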
+
+### Azure Monitor
+
+If your organization subscribes to the Azure Monitor service, you can use the **Cross-tenant access activity** workbook (available in the Monitoring workbooks gallery in the Azure portal) to visually explore inbound and outbound sign-ins for longer time periods.
+
+### Security Information and Event Management (SIEM) Systems
+
+If your organization exports sign-in logs to a Security Information and Event Management (SIEM) system, you can retrieve required information from your SIEM system.
+
+## Next steps
+
+[Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md)
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
+
+ Title: Configure B2B collaboration cross-tenant access - Azure AD
+description: Use cross-tenant collaboration settings to manage how you collaborate with other Azure AD organizations. Learn how to configure outbound access to external organizations and inbound access from external Azure AD for B2B collaboration.
++++ Last updated : 01/31/2022++++++++
+# Configure cross-tenant access settings for B2B collaboration (Preview)
+
+> [!NOTE]
+> Cross-tenant access settings are preview features of Azure Active Directory. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+Use External Identities cross-tenant access settings to manage how you collaborate with other Azure AD organizations through B2B collaboration. These settings determine both the level of *inbound* access users in external Azure AD organizations have to your resources, as well as the level of *outbound* access your users have to external organizations. They also let you trust multi-factor authentication (MFA) and device claims ([compliant claims and hybrid Azure AD joined claims](../conditional-access/howto-conditional-access-policy-compliant-device.md)) from other Azure AD organizations. For details and planning considerations, see [Cross-tenant access in Azure AD External Identities](cross-tenant-access-overview.md).
+
+## Before you begin
+
+ > [!CAUTION]
+ > Changing the default inbound or outbound settings to **Block access** could block existing business-critical access to apps in your organization or partner organizations. Be sure to use the tools described in [Cross-tenant access in Azure AD External Identities](cross-tenant-access-overview.md) and consult with your business stakeholders to identify the required access.
+
+- Review the [Important considerations](cross-tenant-access-overview.md#important-considerations) section in the [cross-tenant access overview](cross-tenant-access-overview.md) before configuring your cross-tenant access settings.
+- Use the tools and follow the recommendations in [Identify inbound and outbound sign-ins](cross-tenant-access-overview.md#identify-inbound-and-outbound-sign-ins) to understand which external Azure AD organizations and resources users are currently accessing.
+- Decide on the default level of access you want to apply to all external Azure AD organizations.
+- Identify any Azure AD organizations that will need customized settings so you can configure **Organizational settings** for them.
+- If you want to apply access settings to specific users, groups, or applications in an external organization, you'll need to contact the organization for information before configuring your settings. Obtain their user object IDs, group object IDs, or application IDs (*client app IDs* or *resource app IDs*) so you can target your settings correctly.
+
+## Configure default settings
+
+ Default cross-tenant access settings apply to all external tenants for which you haven't created organization-specific customized settings. If you want to modify the Azure AD-provided default settings, follow these steps.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
+1. Select the **Default settings** tab and review the summary page.
+
+ ![Screenshot showing the Cross-tenant access settings Default settings tab](media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-defaults.png)
+
+1. To change the settings, select the **Edit inbound defaults** link or the **Edit outbound defaults** link.
+
+ ![Screenshot showing edit buttons for Default settings](media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-defaults-edit.png)
+
+1. Modify the default settings by following the detailed steps in these sections:
+
+ - [Modify inbound access settings](#modify-inbound-access-settings)
+ - [Modify outbound access settings](#modify-outbound-access-settings)
+
+## Add an organization
+
+Follow these steps to configure customized settings for specific organizations.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+1. Select **External Identities**, and then select **Cross-tenant access settings (preview)**.
+1. Select **Organizational settings**.
+1. Select **Add organization**.
+1. On the **Add organization** pane, type the full domain name (or tenant ID) for the organization.
+
+ ![Screenshot showing adding an organization](media/cross-tenant-access-settings-b2b-collaboration/cross-tenant-add-organization.png)
+
+1. Select the organization in the search results, and then select **Add**.
+1. The organization appears in the **Organizational settings** list. At this point, all access settings for this organization are inherited from your default settings. To change the settings for this organization, select the **Inherited from default** link under the **Inbound access** or **Outbound access** column.
+
+ ![Screenshot showing an organization added with default settings](media/cross-tenant-access-settings-b2b-collaboration/org-specific-settings-inherited.png)
+
+1. Modify the organization's settings by following the detailed steps in these sections:
+
+ - [Modify inbound access settings](#modify-inbound-access-settings)
+ - [Modify outbound access settings](#modify-outbound-access-settings)
+
+## Modify inbound access settings
+
+With inbound settings, you select which external users and groups will be able to access the internal applications you choose. Whether you're configuring default settings or organization-specific settings, the steps for changing inbound cross-tenant access settings are the same. As described in this section, you'll navigate to either the **Default** tab or an organization on the **Organizational settings** tab, and then make your changes.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+
+1. Select **External Identities** > **Cross-tenant access settings (preview)**.
+
+1. Navigate to the settings you want to modify:
+ - **Default settings**: To modify default inbound settings, select the **Default settings** tab, and then under **Inbound access settings**, select **Edit inbound defaults**.
+ - **Organizational settings**: To modify settings for a specific organization, select the **Organizational settings** tab, find the organization in the list (or [add one](#add-an-organization)), and then select the link in the **Inbound access** column.
+
+1. Follow the detailed steps for the inbound settings you want to change:
+
+ - [To change inbound B2B collaboration settings](#to-change-inbound-b2b-collaboration-settings)
+ - [To change inbound trust settings for accepting MFA and device claims](#to-change-inbound-trust-settings-for-mfa-and-device-claims)
+
+### To change inbound B2B collaboration settings
+
+1. Select the **B2B collaboration** tab.
+
+1. (This step applies to **Organizational settings** only.) If you're configuring inbound access settings for a specific organization, select one of the following:
+
+ - **Default settings**: Select this option if you want the organization to use the default inbound settings (as configured on the **Default** settings tab). If customized settings were already configured for this organization, you'll need to select **Yes** to confirm that you want all settings to be replaced by the default settings. Then select **Save**, and skip the rest of the steps in this procedure.
+
+ - **Customize settings**: Select this option if you want to customize the settings for this organization, which will be enforced for this organization instead of the default settings. Continue with the rest of the steps in this procedure.
+
+1. Select **External users and groups**.
+
+1. Under **Access status**, select one of the following:
+
+ - **Allow access**: Allows the users and groups specified under **Target** to be invited for B2B collaboration.
+ - **Block access**: Blocks the users and groups specified under **Target** from being invited to B2B collaboration.
+
+ ![Screenshot showing selecting the user access status for B2B collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-access.png)
+
+1. Under **Target**, select one of the following:
+
+ - **All external users and groups**: Applies the action you chose under **Access status** to all users and groups from external Azure AD organizations.
+ - **Select external users and groups** (requires an Azure AD premium subscription): Lets you apply the action you chose under **Access status** to specific users and groups within the external organization.
+
+ > [!NOTE]
+ > If you block access for all external users and groups, you also need to block access to all your internal applications (on the **Applications** tab).
+
+ ![Screenshot showing selecting the target users and groups](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-target.png)
+
+1. If you chose **Select external users and groups**, do the following for each user or group you want to add:
+
+ - Select **Add external users and groups**.
+ - In the **Add other users and groups** pane, in the search box, type the user object ID or group object ID you obtained from your partner organization.
+ - In the menu next to the search box, choose either **user** or **group**.
+ - Select **Add**.
+
+ ![Screenshot showing adding users and groups](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-add.png)
+
+1. When you're done adding users and groups, select **Submit**.
+
+ ![Screenshot showing submitting users and groups](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-external-users-groups-submit.png)
+
+1. Select the **Applications** tab.
+
+1. Under **Access status**, select one of the following:
+
+ - **Allow access**: Allows the applications specified under **Target** to be accessed by B2B collaboration users.
+ - **Block access**: Blocks the applications specified under **Target** from being accessed by B2B collaboration users.
+
+ ![Screenshot showing applications access status](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-applications-access.png)
+
+1. Under **Target**, select one of the following:
+
+ - **All applications**: Applies the action you chose under **Access status** to all of your applications.
+ - **Select applications** (requires an Azure AD premium subscription): Lets you apply the action you chose under **Access status** to specific applications in your organization.
+
+ > [!NOTE]
+ > If you block access to all applications, you also need to block access for all external users and groups (on the **External users and groups** tab).
+
+ ![Screenshot showing target applications](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-applications-target.png)
+
+1. If you chose **Select applications**, do the following for each application you want to add:
+
+ - Select **Add Microsoft applications** or **Add other applications**.
+ - In the **Select** pane, type the application name or the application ID (either the *client app ID* or the *resource app ID*) in the search box. Then select the application in the search results. Repeat for each application you want to add.
+ - When you're done selecting applications, choose **Select**.
+
+ ![Screenshot showing selecting applications](media/cross-tenant-access-settings-b2b-collaboration/generic-inbound-applications-add.png)
+
+1. Select **Save**.
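These portal selections correspond to the cross-tenant access policy that Microsoft Graph exposes. The sketch below is a hedged illustration of how an inbound B2B collaboration configuration might be shaped — the property names (`usersAndGroups`, `applications`, `accessType`, `targets`) reflect my understanding of that API and should be verified against the current Graph reference, and the group ID is a placeholder:

```python
def b2b_collaboration_setting(access_type: str, targets: list) -> dict:
    """Build one half (users-and-groups or applications) of a B2B
    collaboration setting, mirroring the Access status + Target choices
    made in the portal."""
    if access_type not in ("allowed", "blocked"):
        raise ValueError("access_type must be 'allowed' or 'blocked'")
    return {"accessType": access_type, "targets": targets}

# Example: allow only one partner group (object ID obtained from the
# partner organization), and allow access to all of your applications.
inbound = {
    "usersAndGroups": b2b_collaboration_setting(
        "allowed",
        [{"target": "11112222-aaaa-bbbb-cccc-333344445555", "targetType": "group"}],
    ),
    "applications": b2b_collaboration_setting(
        "allowed",
        [{"target": "AllApplications", "targetType": "application"}],
    ),
}
```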
+
+### To change inbound trust settings for MFA and device claims
+
+1. Select the **Trust settings** tab.
+
+1. (This step applies to **Organizational settings** only.) If you're configuring settings for an organization, select one of the following:
+
+ - **Default settings**: The organization will use the settings configured on the **Default** settings tab. If customized settings were already configured for this organization, you'll need to select **Yes** to confirm that you want all settings to be replaced by the default settings. Then select **Save**, and skip the rest of the steps in this procedure.
+
+ - **Customize settings**: You can customize the settings for this organization, which will be enforced for this organization instead of the default settings. Continue with the rest of the steps in this procedure.
+
+1. Select one or more of the following options:
+
+   - **Trust multi-factor authentication from Azure AD tenants**: Select this checkbox to allow your Conditional Access policies to trust MFA claims from external organizations. During authentication, Azure AD checks the user's credentials for a claim indicating that the user has already completed MFA. If the claim isn't present, an MFA challenge is initiated in the user's home tenant.
+
+ - **Trust compliant devices**: Allows your Conditional Access policies to trust compliant device claims from an external organization when their users access your resources.
+
+ - **Trust hybrid Azure AD joined devices**: Allows your Conditional Access policies to trust hybrid Azure AD joined device claims from an external organization when their users access your resources.
+
+ ![Screenshot showing trust settings](media/cross-tenant-access-settings-b2b-collaboration/inbound-trust-settings.png)
+
+1. Select **Save**.
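The three checkboxes above map onto an inbound trust object in the Graph cross-tenant access policy. As a minimal sketch — the property names below are my assumption about that API and should be checked against the current Graph reference before use:

```python
def inbound_trust(mfa: bool, compliant: bool, hybrid_joined: bool) -> dict:
    """Map the three trust-settings checkboxes onto an inbound trust
    object. Property names are assumed from the Microsoft Graph
    cross-tenant access policy API; verify against the reference."""
    return {
        "isMfaAccepted": mfa,
        "isCompliantDeviceAccepted": compliant,
        "isHybridAzureADJoinedDeviceAccepted": hybrid_joined,
    }

# Trust MFA claims from the partner, but keep device-claim trust off.
print(inbound_trust(True, False, False))
```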
+
+## Modify outbound access settings
+
+With outbound settings, you select which of your users and groups will be able to access the external applications you choose. Whether you're configuring default settings or organization-specific settings, the steps for changing outbound cross-tenant access settings are the same. As described in this section, you'll navigate to either the **Default** tab or an organization on the **Organizational settings** tab, and then make your changes.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
+
+1. Select **External Identities**, and then select **Cross-tenant access settings (Preview)**.
+
+1. Navigate to the settings you want to modify:
+
+ - To modify default outbound settings, select the **Default settings** tab, and then under **Outbound access settings**, select **Edit outbound defaults**.
+
+ - To modify settings for a specific organization, select the **Organizational settings** tab, find the organization in the list (or [add one](#add-an-organization)) and then select the link in the **Outbound access** column.
+
+1. Select the **B2B collaboration** tab.
+
+1. (This step applies to **Organizational settings** only.) If you're configuring settings for an organization, select one of the following:
+
+ - **Default settings**: The organization will use the settings configured on the **Default** settings tab. If customized settings were already configured for this organization, you'll need to select **Yes** to confirm that you want all settings to be replaced by the default settings. Then select **Save**, and skip the rest of the steps in this procedure.
+
+ - **Customize settings**: You can customize the settings for this organization, which will be enforced for this organization instead of the default settings. Continue with the rest of the steps in this procedure.
+
+1. Select **Users and groups**.
+
+1. Under **Access status**, select one of the following:
+
+ - **Allow access**: Allows your users and groups specified under **Target** to be invited to external organizations for B2B collaboration.
+ - **Block access**: Blocks your users and groups specified under **Target** from being invited to B2B collaboration. If you block access for all users and groups, this will also block all external applications from being accessed via B2B collaboration.
+
+ ![Screenshot showing users and groups access status for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-external-users-groups-access.png)
+
+1. Under **Target**, select one of the following:
+
+ - **All \<your organization\> users**: Applies the action you chose under **Access status** to all your users and groups.
+ - **Select \<your organization\> users and groups** (requires an Azure AD premium subscription): Lets you apply the action you chose under **Access status** to specific users and groups.
+
+ > [!NOTE]
+ > If you block access for all of your users and groups, you also need to block access to all external applications (on the **External applications** tab).
+
+ ![Screenshot showing selecting the target users for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-external-users-groups-target.png)
+
+1. If you chose **Select \<your organization\> users and groups**, do the following for each user or group you want to add:
+
+ - Select **Add \<your organization\> users and groups**.
+ - In the **Select** pane, type the user name or group name in the search box.
+ - Select the user or group in the search results.
+ - When you're done selecting the users and groups you want to add, choose **Select**.
+
+1. Select the **External applications** tab.
+
+1. Under **Access status**, select one of the following:
+
+ - **Allow access**: Allows the external applications specified under **Target** to be accessed by your users via B2B collaboration.
+ - **Block access**: Blocks the external applications specified under **Target** from being accessed by your users via B2B collaboration.
+
+ ![Screenshot showing applications access status for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-applications-access.png)
+
+1. Under **Target**, select one of the following:
+
+ - **All external applications**: Applies the action you chose under **Access status** to all external applications.
+   - **Select external applications**: Lets you apply the action you chose under **Access status** to specific external applications.
+
+ > [!NOTE]
+ > If you block access to all external applications, you also need to block access for all of your users and groups (on the **Users and groups** tab).
+
+ ![Screenshot showing application targets for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/generic-outbound-applications-target.png)
+
+1. If you chose **Select external applications**, do the following for each application you want to add:
+
+ - Select **Add Microsoft applications** or **Add other applications**.
+ - In the search box, type the application name or the application ID (either the *client app ID* or the *resource app ID*). Then select the application in the search results. Repeat for each application you want to add.
+ - When you're done selecting applications, choose **Select**.
+
+ ![Screenshot showing selecting applications for b2b collaboration](media/cross-tenant-access-settings-b2b-collaboration/outbound-b2b-collaboration-add-apps.png)
+
+1. Select **Save**.
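Across both the inbound and outbound procedures, each tab combines an **Access status** (allow or block) with a **Target** list. The helper below is purely illustrative — it isn't part of any Azure AD API — but shows how those two choices combine into a per-principal decision (`"all"` stands in for the portal's "All users"/"All applications" options):

```python
def is_access_allowed(access_type: str, targets: set, principal: str) -> bool:
    """Illustrative only: evaluate an Access status ('allowed' or
    'blocked') against a Target set for one user, group, or app."""
    in_scope = "all" in targets or principal in targets
    if access_type == "allowed":
        return in_scope       # listed principals allowed, everything else not
    return not in_scope       # listed principals blocked, everything else allowed

# Block one specific group; other groups remain unaffected by this setting.
print(is_access_allowed("blocked", {"group-123"}, "group-123"))
print(is_access_allowed("blocked", {"group-123"}, "group-456"))
```

This also makes the notes above concrete: blocking "all" users while leaving applications open (or vice versa) produces inconsistent decisions, which is why the portal asks you to block both sides together.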
+
+## Next steps
+
+See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.
active-directory Current Limitations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/current-limitations.md
Previously updated : 05/29/2019 Last updated : 01/31/2022
Azure Active Directory (Azure AD) B2B collaboration is currently subject to the limitations described in this article.

## Possible double multi-factor authentication
-With Azure AD B2B, you can enforce multi-factor authentication at the resource organization (the inviting organization). The reasons for this approach are detailed in [Conditional Access for B2B collaboration users](conditional-access.md). If a partner already has multi-factor authentication set up and enforced, their users might have to perform the authentication once in their home organization and then again in yours.
+With Azure AD B2B, you can enforce multi-factor authentication at the resource organization (the inviting organization). The reasons for this approach are detailed in [Conditional Access for B2B collaboration users](authentication-conditional-access.md). If a partner already has multi-factor authentication set up and enforced, their users might have to perform the authentication once in their home organization and then again in yours.
## Instant-on

In the B2B collaboration flows, we add users to the directory and dynamically update them during invitation redemption, app assignment, and so on. The updates and writes ordinarily happen in one directory instance and must be replicated across all instances. Replication is completed once all instances are updated. Sometimes when the object is written or updated in one instance and the call to retrieve this object goes to another instance, replication latencies can occur. If that happens, refresh or retry. If you're writing an app using our API, retries with some back-off are a good, defensive practice to alleviate this issue.
In the B2B collaboration flows, we add users to the directory and dynamically up
## Azure AD directories

Azure AD B2B is subject to Azure AD service directory limits. For details about the number of directories a user can create and the number of directories to which a user or guest user can belong, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md).
-## National clouds
-[National clouds](../develop/authentication-national-cloud.md) are physically isolated instances of Azure. B2B collaboration is not supported across national cloud boundaries. For example, if your Azure tenant is in the public, global cloud, you can't invite a user whose account is in a national cloud. To collaborate with the user, ask them for another email address or create a member user account for them in your directory.
-
-## Azure US Government clouds
-Within the Azure US Government cloud, B2B collaboration is supported between tenants that are both within Azure US Government cloud and that both support B2B collaboration. Azure US Government tenants that support B2B collaboration can also collaborate with social users using Microsoft, Google accounts, or email one-time passcode accounts. If you invite a user outside of these groups (for example, if the user is in a tenant that isn't part of the Azure US Government cloud or doesn't yet support B2B collaboration), the invitation will fail or the user won't be able to redeem the invitation. For Microsoft accounts (MSAs), there are known limitations with accessing the Azure portal: newly invited MSA guests are unable to redeem direct link invitations to the Azure portal, and existing MSA guests are unable to sign in to the Azure portal. For details about other limitations, see [Azure Active Directory Premium P1 and P2 Variations](../../azure-government/compare-azure-government-global-azure.md#azure-active-directory-premium-p1-and-p2).
-
-### How can I tell if B2B collaboration is available in my Azure US Government tenant?
-To find out if your Azure US Government cloud tenant supports B2B collaboration, do the following:
-
-1. In a browser, go to the following URL, substituting your tenant name for *&lt;tenantname&gt;*:
-
- `https://login.microsoftonline.com/<tenantname>/v2.0/.well-known/openid-configuration`
-
-2. Find `"tenant_region_scope"` in the JSON response:
-
-   - If `"tenant_region_scope":"USGOV"` appears, B2B is supported.
- - If `"tenant_region_scope":"USG"` appears, B2B is not supported.
-
## Next steps

See the following articles on Azure AD B2B collaboration:
+- [Azure AD B2B in government and national clouds](b2b-government-national-clouds.md)
- [What is Azure AD B2B collaboration?](what-is-b2b.md)
-- [Delegate B2B collaboration invitations](delegate-invitations.md)
+- [Delegate B2B collaboration invitations](external-collaboration-settings-configure.md)
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/external-collaboration-settings-configure.md
+
+ Title: Enable B2B external collaboration settings - Azure AD
+description: Learn how to enable Active Directory B2B external collaboration and manage who can invite guest users. Use the Guest Inviter role to delegate invitations.
+ Last updated : 01/31/2022
+# Configure external collaboration settings
+
+External collaboration settings let you specify what roles in your organization can invite external users for B2B collaboration. These settings also include options for [allowing or blocking specific domains](allow-deny-list.md), and options for restricting what external guest users can see in your Azure AD directory. The following options are available:
+
+- **Determine guest user access**: Azure AD allows you to restrict what external guest users can see in your Azure AD directory. For example, you can limit guest users' view of group memberships, or allow guests to view only their own profile information.
+
+- **Specify who can invite guests**: By default, all users in your organization, including B2B collaboration guest users, can invite external users to B2B collaboration. If you want to limit the ability to send invitations, you can turn invitations on or off for everyone, or limit invitations to certain roles.
+
+- **Enable guest self-service sign-up via user flows**: For applications you build, you can create user flows that allow a user to sign up for an app and create a new guest account. You can enable the feature in your external collaboration settings, and then [add a self-service sign-up user flow to your app](self-service-sign-up-user-flow.md).
+
+- **Allow or block domains**: You can use collaboration restrictions to allow or deny invitations to the domains you specify. For details, see [Allow or block domains](allow-deny-list.md).
+
+For B2B collaboration with other Azure AD organizations, you should also review your [cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md) to ensure that your inbound and outbound B2B collaboration settings are configured as intended, and to scope access to specific users, groups, and applications.
+
+## Configure external collaboration settings
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account and open the **Azure Active Directory** service.
+1. Select **External Identities** > **External collaboration settings**.
+
+1. Under **Guest user access**, choose the level of access you want guest users to have:
+
+ - **Guest users have the same access as members (most inclusive)**: This option gives guests the same access to Azure AD resources and directory data as member users.
+
+ - **Guest users have limited access to properties and memberships of directory objects**: (Default) This setting blocks guests from certain directory tasks, like enumerating users, groups, or other directory resources. Guests can see membership of all non-hidden groups.
+
+ - **Guest user access is restricted to properties and memberships of their own directory objects (most restrictive)**: With this setting, guests can access only their own profiles. Guests are not allowed to see other users' profiles, groups, or group memberships.
+
+1. Under **Guest invite settings**, choose the appropriate settings:
+
+ ![Guest invite settings](./media/external-collaboration-settings-configure/guest-invite-settings.png)
+
+   - **Anyone in the organization can invite guest users including guests and non-admins (most inclusive)**: To allow anyone in the organization, including guests and non-admins, to invite guest users, select this radio button.
+ - **Member users and users assigned to specific admin roles can invite guest users including guests with member permissions**: To allow member users and users who have specific administrator roles to invite guests, select this radio button.
+ - **Only users assigned to specific admin roles can invite guest users**: To allow only those users with administrator roles to invite guests, select this radio button. The administrator roles include [Global Administrator](../roles/permissions-reference.md#global-administrator), [User Administrator](../roles/permissions-reference.md#user-administrator), and [Guest Inviter](../roles/permissions-reference.md#guest-inviter).
+ - **No one in the organization can invite guest users including admins (most restrictive)**: To deny everyone in the organization from inviting guests, select this radio button.
+ > [!NOTE]
+ > If **Members can invite** is set to **No** and **Admins and users in the guest inviter role can invite** is set to **Yes**, users in the **Guest Inviter** role will still be able to invite guests.
+
+1. Under **Enable guest self-service sign up via user flows**, select **Yes** if you want to be able to create user flows that let users sign up for apps. For more information about this setting, see [Add a self-service sign-up user flow to an app](self-service-sign-up-user-flow.md).
+
+ ![Self-service sign up via user flows setting](./media/external-collaboration-settings-configure/self-service-sign-up-setting.png)
+
+1. Under **Collaboration restrictions**, you can choose whether to allow or deny invitations to the domains you specify and enter specific domain names in the text boxes. For multiple domains, enter each domain on a new line. For more information, see [Allow or block invitations to B2B users from specific organizations](allow-deny-list.md).
+
+ ![Collaboration restrictions settings](./media/external-collaboration-settings-configure/collaboration-restrictions.png)
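The collaboration restrictions you configure act as either a domain allowlist or a domain blocklist (the portal offers one mode or the other, not both). The sketch below is purely illustrative of those semantics — it isn't how Azure AD implements the check, and the domain names are placeholders:

```python
def invitation_permitted(invitee_email: str, allow=None, block=None) -> bool:
    """Illustrative sketch of collaboration-restrictions semantics:
    an allowlist permits only listed domains; a blocklist denies only
    listed domains; no restriction permits everything."""
    domain = invitee_email.rsplit("@", 1)[-1].lower()
    if allow is not None:
        return domain in allow
    if block is not None:
        return domain not in block
    return True  # no restriction configured

# Deny invitations to one domain; other domains remain permitted.
print(invitation_permitted("partner@fabrikam.com", block={"gmail.com"}))  # True
print(invitation_permitted("someone@gmail.com", block={"gmail.com"}))     # False
```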
+## Assign the Guest Inviter role to a user
+
+With the Guest Inviter role, you can give individual users the ability to invite guests without assigning them a Global Administrator or another admin role. Assign the Guest Inviter role to individuals. Then make sure you set **Admins and users in the guest inviter role can invite** to **Yes**.
+
+Here's an example that shows how to use PowerShell to add a user to the Guest Inviter role:
+
+```powershell
+Add-MsolRoleMember -RoleObjectId 95e79109-95c0-4d8e-aee3-d01accf2d47b -RoleMemberEmailAddress <RoleMemberEmailAddress>
+```
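The `Add-MsolRoleMember` cmdlet comes from the older MSOnline PowerShell module. If you work with Microsoft Graph instead, a directory role member is added by POSTing a `$ref` link to the user's directory object. The sketch below only builds the request URL and body — sending it requires an authenticated Graph client, the user object ID is a placeholder, and whether the role ID from the cmdlet above can be used directly should be verified against the Graph reference:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def add_role_member_request(role_id: str, user_object_id: str) -> tuple:
    """Build the (url, body) pair for adding a member to a directory
    role via Microsoft Graph. Sending the request is out of scope here."""
    url = f"{GRAPH}/directoryRoles/{role_id}/members/$ref"
    body = {"@odata.id": f"{GRAPH}/directoryObjects/{user_object_id}"}
    return url, body

# Guest Inviter role ID from the cmdlet above; user ID is a placeholder.
url, body = add_role_member_request(
    "95e79109-95c0-4d8e-aee3-d01accf2d47b",
    "00000000-0000-0000-0000-000000000000",
)
```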
+
+## Next steps
+
+See the following articles on Azure AD B2B collaboration:
+
+- [What is Azure AD B2B collaboration?](what-is-b2b.md)
+- [Add B2B collaboration guest users without an invitation](add-user-without-invite.md)
+- [Adding a B2B collaboration user to a role](./add-users-administrator.md)
active-directory External Identities Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/external-identities-overview.md
+
+ Title: External Identities in Azure Active Directory | Microsoft Docs
+description: Azure AD External Identities allow you to collaborate with or publish apps to people outside your organization. Compare solutions for External Identities, including Azure AD B2B collaboration and Azure AD B2C.
+ Last updated : 01/31/2022
+# External Identities in Azure Active Directory
+
+Azure AD External Identities refers to all the ways you can securely interact with users outside of your organization. If you want to collaborate with partners, distributors, suppliers, or vendors, you can share your resources and define how your internal users can access external organizations. If you're a developer creating consumer-facing apps, you can manage your customers' identity experiences.
+
+With External Identities, external users can "bring their own identities." Whether they have a corporate or government-issued digital identity, or an unmanaged social identity like Google or Facebook, they can use their own credentials to sign in. The external user's identity provider manages their identity, and you manage access to your apps with Azure AD or Azure AD B2C to keep your resources protected.
+
+The following capabilities make up External Identities:
+
+- **B2B collaboration** - Collaborate with external users by letting them use their preferred identity to sign in to your Microsoft applications or other enterprise applications (SaaS apps, custom-developed apps, etc.). B2B collaboration users are represented in your directory, typically as guest users.
+
+- **Azure AD B2C** - Publish modern SaaS apps or custom-developed apps (excluding Microsoft apps) to consumers and customers, while using Azure AD B2C for identity and access management.
+
+Depending on how you want to interact with external organizations and the types of resources you need to share, you can use a combination of these capabilities.
+
+![External Identities overview diagram](media/external-identities-overview/external-identities-b2b-overview.png)
+
+## B2B collaboration
+
+With B2B collaboration, you can invite anyone to sign in to your Azure AD organization using their own credentials so they can access the apps and resources you want to share with them. Use B2B collaboration when you need to let external users access your Office 365 apps, software-as-a-service (SaaS) apps, and line-of-business applications, especially when the partner doesn't use Azure AD. There are no credentials associated with B2B collaboration users. Instead, they authenticate with their home organization or identity provider, and then your organization checks the guest user's eligibility for B2B collaboration.
+
+There are various ways to add external users to your organization for B2B collaboration:
+
+- Invite users to B2B collaboration using their Azure AD accounts, Microsoft accounts, or social identities that you enable, such as Google. An admin can use the Azure portal or PowerShell to invite users to B2B collaboration. The user signs into the shared resources using a simple redemption process with their work, school, or other email account.
+
+- Use self-service sign-up user flows to let external users sign up for applications themselves. The experience can be customized to allow sign-up with a work, school, or social identity (like Google or Facebook). You can also collect information about the user during the sign-up process.
+
+- Use [Azure AD entitlement management](../governance/entitlement-management-overview.md), an identity governance feature that lets you manage [identity and access for external users at scale](../governance/entitlement-management-external-users.md#how-access-works-for-external-users) by automating access request workflows, access assignments, reviews, and expiration.
+
+A user object is created for the B2B collaboration user in the same directory as your employees. This user object can be managed like other user objects in your directory, added to groups, and so on. You can assign permissions to the user object (for authorization) while letting them use their existing credentials (for authentication).
+
+You can use [cross-tenant access settings](cross-tenant-access-overview.md) to manage B2B collaboration with other Azure AD organizations. For B2B collaboration with non-Azure AD external users and organizations, use [external collaboration settings](external-collaboration-settings-configure.md).
+
+Learn more about [B2B collaboration in Azure AD](what-is-b2b.md).
+
+## Azure AD B2C
+
+Azure AD B2C is a Customer Identity and Access Management (CIAM) solution that lets you build user journeys for consumer- and customer-facing apps. If you're a business or individual developer creating customer-facing apps, you can scale to millions of consumers, customers, or citizens by using Azure AD B2C. Developers can use Azure AD B2C as the full-featured CIAM system for their applications.
+
+With Azure AD B2C, customers can sign in with an identity they've already established (like Facebook or Gmail). You can completely customize and control how customers sign up, sign in, and manage their profiles when using your applications. For more information, see the Azure AD B2C documentation.
+
+Learn more about [Azure AD B2C](../../active-directory-b2c/index.yml).
+
+## Comparing External Identities feature sets
+
+The following table gives a detailed comparison of the scenarios you can enable with Azure AD External Identities. In the B2B scenarios, an external user is anyone who is not homed in your Azure AD organization.
+
+| | B2B collaboration | Azure AD B2C |
+| - | | |
+| **Primary scenario** | Collaborate with external users by letting them use their preferred identity to sign in to resources in your Azure AD organization. Provides access to Microsoft applications or your own applications (SaaS apps, custom-developed apps, etc.). <br><br> *Example:* Invite an external user to sign in to your Microsoft apps or become a guest member in Teams. | Publish apps to consumers and customers using Azure AD B2C for identity experiences. Provides identity and access management for modern SaaS or custom-developed applications (not first-party Microsoft apps). |
+| **Intended for** | Collaborating with business partners from external organizations such as suppliers, partners, and vendors. These users may or may not have Azure AD or managed IT. | Customers of your product. These users are managed in a separate Azure AD B2C directory. |
+| **User management** | B2B collaboration users are managed in the same directory as employees but are typically annotated as guest users. Guest users can be managed the same way as employees, added to the same groups, and so on. Cross-tenant access settings can be used to determine which users have access to B2B collaboration. | User objects are created for consumer users in your Azure AD B2C directory. They're managed separately from the organization's employee and partner directory (if any). |
+| **Identity providers supported** | External users can collaborate using work accounts, school accounts, any email address, SAML and WS-Fed based identity providers, and social identity providers like Gmail and Facebook. | Consumer users with local application accounts (any email address, user name, or phone number), Azure AD, various supported social identities, and users with corporate and government-issued identities via SAML/WS-Fed-based identity provider federation. |
+| **Single sign-on (SSO)** | SSO to all Azure AD-connected apps is supported. For example, you can provide access to Microsoft 365 or on-premises apps, and to other SaaS apps such as Salesforce or Workday. | SSO to customer-owned apps within the Azure AD B2C tenant is supported. SSO to Microsoft 365 or to other Microsoft SaaS apps isn't supported. |
+| **Licensing and billing** | Based on monthly active users (MAU), including B2B collaboration and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for B2B](external-identities-pricing.md). | Based on monthly active users (MAU), including B2B collaboration and Azure AD B2C users. Learn more about [External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/) and [billing setup for Azure AD B2C](../../active-directory-b2c/billing.md). |
+| **Security policy and compliance** | Managed by the host/inviting organization (for example, with [Conditional Access policies](authentication-conditional-access.md) and cross-tenant access settings). | Managed by the organization via Conditional Access and Identity Protection. |
+| **Branding** | Host/inviting organization's brand is used. | Fully customizable branding per application or organization. |
+| **More information** | [Blog post](https://blogs.technet.microsoft.com/enterprisemobility/2017/02/01/azure-ad-b2b-new-updates-make-cross-business-collab-easy/), [Documentation](what-is-b2b.md) | [Product page](https://azure.microsoft.com/services/active-directory-b2c/), [Documentation](../../active-directory-b2c/index.yml) |
+
+## Managing External Identities features
+
+Azure AD B2B collaboration is a feature of Azure AD, and it's managed in the Azure portal through the Azure Active Directory service. To control inbound and outbound collaboration with other Azure AD organizations, you can use *cross-tenant access settings*. To control inbound collaboration with other non-Azure AD organizations, you can use *external collaboration settings*.
+
+### Cross-tenant access settings (Preview)
+
+Cross-tenant access settings let you manage B2B collaboration with other Azure AD organizations. You can determine how other Azure AD organizations collaborate with you (inbound access), and how your users collaborate with other Azure AD organizations (outbound access). Granular controls let you determine the people, groups, and apps, both in your organization and in external Azure AD organizations, that can participate in B2B collaboration. You can also trust multi-factor authentication (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+
+- **Default cross-tenant access settings** determine your baseline inbound and outbound settings for B2B collaboration. Initially, your default settings are configured to allow all inbound and outbound B2B collaboration with other Azure AD organizations. You can change these initial settings to create your own default configuration.
+
+- **Organization-specific access settings** let you configure customized settings for individual Azure AD organizations. Once you add an organization and customize your cross-tenant access settings with this organization, these settings will take precedence over your defaults. For example, you could enable B2B collaboration with all external organizations by default, but disable this feature only for Fabrikam.
+
+For more information, see [Cross-tenant access in Azure AD External Identities](cross-tenant-access-overview.md).
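+
+As an illustration, the default settings described above can also be read and updated programmatically through the Microsoft Graph cross-tenant access API. The following is a hedged sketch rather than a definitive reference: the endpoint and the `inboundTrust` property names assume the Graph `crossTenantAccessPolicy` resource and may differ between the beta and v1.0 API versions.
+
+```http
+GET https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/default
+
+PATCH https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/default
+Content-Type: application/json
+
+{
+  "inboundTrust": {
+    "isMfaAccepted": true,
+    "isCompliantDeviceAccepted": true,
+    "isHybridAzureADJoinedDeviceAccepted": true
+  }
+}
+```
+
+The `PATCH` here would accept MFA and device claims from all other Azure AD organizations by default; organization-specific settings added later take precedence over these defaults.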
+
+### External collaboration settings
+
+External collaboration settings determine whether your users can send B2B collaboration invitations to external users and the level of access guest users have to your directory. With these settings, you can:
+
+- **Determine guest user permissions**. Azure AD allows you to restrict what external guest users can see in your Azure AD directory. For example, you can limit guest users' view of group memberships, or allow guests to view only their own profile information.
+
+- **Specify who can invite guests**. By default, all users in your organization, including B2B collaboration guest users, can invite external users to B2B collaboration. If you want to limit the ability to send invitations, you can turn invitations on or off for everyone, or limit invitations to certain roles.
+
+- **Allow or block domains**. Choose whether to allow or deny invitations to the domains you specify. For details, see [Allow or block domains](allow-deny-list.md).
+
+For more information, see how to [configure B2B external collaboration settings](external-collaboration-settings-configure.md).
+
+### How external collaboration and cross-tenant access settings work together
+
+External collaboration settings work at the invitation level, whereas cross-tenant access settings work at the authentication level.
+
+Cross-tenant access settings and external collaboration settings are used to manage two different aspects of B2B collaboration. Cross-tenant access settings control whether users can authenticate with external Azure AD tenants, and they apply to both inbound and outbound B2B collaboration. By contrast, external collaboration settings control which of your users are allowed to send B2B collaboration invitations to external users from any organization.
+
+When you're considering B2B collaboration with a specific external Azure AD organization, you'll want to assess whether your cross-tenant access settings allow B2B collaboration with that organization, and whether your external collaboration settings allow your users to send invitations to that organization's domain. Here are some examples:
+
+- **Example 1**: You've previously added `adatum.com` (an Azure AD organization) to the list of blocked domains in your external collaboration settings, but your cross-tenant access settings enable B2B collaboration for all Azure AD organizations. In this case, the most restrictive setting applies. Your external collaboration settings will prevent your users from sending invitations to users at `adatum.com`.
+
+- **Example 2**: You allow B2B collaboration with Fabrikam in your cross-tenant access settings, but then you add `fabrikam.com` to your blocked domains in your external collaboration settings. Your users won't be able to invite new Fabrikam guest users, but existing Fabrikam guests will be able to continue using B2B collaboration.
+
+### Azure Active Directory B2C management
+
+Azure AD B2C is a separate consumer-based directory that you manage in the Azure portal through the Azure AD B2C service. Each Azure AD B2C tenant is separate and distinct from other Azure Active Directory and Azure AD B2C tenants. The Azure AD B2C portal experience is similar to Azure AD, but there are key differences, such as the ability to customize your user journeys using the Identity Experience Framework.
+
+For details about configuring and managing Azure AD B2C, see the [Azure AD B2C documentation](../../active-directory-b2c/index.yml).
+
+## Related Azure AD technologies
+
+There are several Azure AD technologies that are related to collaboration with external users and organizations. As you design your External Identities collaboration model, consider these additional features.
+
+### Azure AD entitlement management for B2B guest user sign-up
+
+As an inviting organization, you might not know ahead of time who the individual external collaborators are who need access to your resources. You need a way for users from partner companies to sign themselves up with policies that you control. If you want to enable users from other organizations to request access, and upon approval be provisioned with guest accounts and assigned to groups, apps, and SharePoint Online sites, you can use [Azure AD entitlement management](../governance/entitlement-management-overview.md) to configure policies that [manage access for external users](../governance/entitlement-management-external-users.md#how-access-works-for-external-users).
+
+### Azure AD Microsoft Graph API for B2B collaboration
+
+Microsoft Graph APIs are available for creating and managing External Identities features.
+
+- **Cross-tenant access settings API**: The Microsoft Graph cross-tenant access API lets you programmatically create the same B2B collaboration policies that are configurable in the Azure portal. Using the API, you can set up policies for inbound and outbound collaboration to allow or block features for everyone by default and limit access to specific organizations, groups, users, and applications. The API also allows you to accept MFA and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+
+- **B2B collaboration invitation manager**: The [Microsoft Graph invitation manager API](/graph/api/resources/invitation) is available for building your own onboarding experiences for B2B guest users. You can use the [create invitation API](/graph/api/invitation-post?tabs=http) to automatically send a customized invitation email directly to the B2B user, for example. Or your app can use the inviteRedeemUrl returned in the creation response to craft your own invitation (through your communication mechanism of choice) to the invited user.
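+
+As a hedged illustration of the invitation flow, a minimal create-invitation call might look like the following; the email address and redirect URL are placeholder values, and the response includes the `inviteRedeemUrl` mentioned above:
+
+```http
+POST https://graph.microsoft.com/v1.0/invitations
+Content-Type: application/json
+
+{
+  "invitedUserEmailAddress": "guest@fabrikam.com",
+  "inviteRedirectUrl": "https://myapps.microsoft.com",
+  "sendInvitationMessage": false
+}
+```
+
+Setting `sendInvitationMessage` to `false` suppresses the default email so your app can deliver the redemption link through its own communication channel.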
+
+### Conditional Access
+
+Organizations can enforce Conditional Access policies for external B2B collaboration users in the same way that they're enabled for full-time employees and members of the organization. For Azure AD cross-tenant scenarios, if your Conditional Access policies require MFA or device compliance, you can now trust MFA and device compliance claims from an external user's home organization. When trust settings are enabled, during authentication, Azure AD will check a user's credentials for an MFA claim or a device ID to determine if the policies have already been met. If so, the external user will be granted seamless sign-on to your shared resource. Otherwise, an MFA or device challenge will be initiated in the user's home tenant. Learn more about the [authentication flow and Conditional Access for external users](authentication-conditional-access.md).
+
+### Multitenant applications
+
+If you offer a Software as a Service (SaaS) application to many organizations, you can configure your application to accept sign-ins from any Azure Active Directory (Azure AD) tenant. This configuration is called making your application multi-tenant. Users in any Azure AD tenant will be able to sign in to your application after consenting to use their account with your application. See how to [enable multitenant sign-ins](../develop/howto-convert-app-to-be-multi-tenant.md).
+
+## Next steps
+
+- [What is Azure AD B2B collaboration?](what-is-b2b.md)
+- [About Azure AD B2C](../../active-directory-b2c/overview.md)
active-directory Hybrid Cloud To On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md
The following diagram provides a high-level overview of how Azure AD Application
You can manage the on-premises B2B user objects through lifecycle management policies. For example: -- You can set up multi-factor authentication (MFA) policies for the Guest user so that MFA is used during Application Proxy authentication. For more information, see [Conditional Access for B2B collaboration users](conditional-access.md).
+- You can set up multi-factor authentication (MFA) policies for the Guest user so that MFA is used during Application Proxy authentication. For more information, see [Conditional Access for B2B collaboration users](authentication-conditional-access.md).
- Any sponsorships, access reviews, account verifications, etc. that are performed on the cloud B2B user applies to the on-premises users. For example, if the cloud user is deleted through your lifecycle management policies, the on-premises user is also deleted by MIM Sync or through Azure AD Connect sync. For more information, see [Manage guest access with Azure AD access reviews](../governance/manage-guest-access-with-access-reviews.md).
+### Create B2B guest user objects through an Azure AD B2B script
+
+You can use an [Azure AD B2B sample script](https://github.com/Azure-Samples/B2B-to-AD-Sync) to create shadow Azure AD accounts synced from Azure AD B2B accounts. You can then use the shadow accounts for on-premises apps that use KCD.
+ ### Create B2B guest user objects through MIM For information about how to use MIM 2016 Service Pack 1 and the MIM management agent for Microsoft Graph to create the guest user objects in the on-premises directory, see [Azure AD business-to-business (B2B) collaboration with Microsoft Identity Manager (MIM) 2016 SP1 with Azure Application Proxy](/microsoft-identity-manager/microsoft-identity-manager-2016-graph-b2b-scenario).
active-directory O365 External User https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/o365-external-user.md
You can enable this feature by using the setting 'ShowPeoplePickerSuggestionsFor
* [What is Azure AD B2B collaboration?](what-is-b2b.md) * [Adding a B2B collaboration user to a role](./add-users-administrator.md)
-* [Delegate B2B collaboration invitations](delegate-invitations.md)
+* [Delegate B2B collaboration invitations](external-collaboration-settings-configure.md)
* [Dynamic groups and B2B collaboration](use-dynamic-groups.md) * [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)
active-directory One Time Passcode https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/one-time-passcode.md
You can enable this feature at any time in the Azure portal by configuring the E
> [!IMPORTANT] >
-> - **Starting November 1, 2021**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. To minimize disruptions during the holidays and deployment lockdowns, the majority of tenants will see changes rolled out in January 2022. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+> - **As of November 1, 2021**, we began rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft no longer creates new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. To minimize disruptions during the holidays and deployment lockdowns, the majority of tenants will see changes rolled out in January 2022. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
> - Email one-time passcode settings have moved in the Azure portal from **External collaboration settings** to **All identity providers**. > [!NOTE]
Guest user teri@gmail.com is invited to Fabrikam, which does not have Google fed
## Disable email one-time passcode
-Starting November 1, 2021, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. At that time, Microsoft will no longer support the redemption of invitations by creating unmanaged ("viral" or "just-in-time") Azure AD accounts and tenants for B2B collaboration scenarios. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, you have the option of disabling this feature if you choose not to use it.
+Starting November 1, 2021, we began rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. Microsoft no longer supports the redemption of invitations by creating unmanaged ("viral" or "just-in-time") Azure AD accounts and tenants for B2B collaboration scenarios. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, you have the option of disabling this feature if you choose not to use it.
> [!NOTE] >
To enable the email one-time passcode feature in Azure US Government cloud:
4. Select **Email one-time passcode**, and then select **Yes**. 5. Select **Save**.
-For more information about current limitations, see [Azure US Government clouds](current-limitations.md#azure-us-government-clouds).
+For more information about current limitations, see [Azure AD B2B in government and national clouds](b2b-government-national-clouds.md).
## Frequently asked questions
When we support the ability to disable Microsoft Account in the Identity provide
**Does this change include SharePoint and OneDrive integration with Azure AD B2B?**
-No, the global rollout of the change to enable email one-time passcode by default that begins on November 1, 2021 doesn't include SharePoint and OneDrive integration with Azure AD B2B. To learn how to enable integration so that collaboration on SharePoint and OneDrive uses B2B capabilities, or how to disable this integration, see [SharePoint and OneDrive Integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration).
+No, the global rollout of the change to enable email one-time passcode by default that began on November 1, 2021 doesn't include SharePoint and OneDrive integration with Azure AD B2B. To learn how to enable integration so that collaboration on SharePoint and OneDrive uses B2B capabilities, or how to disable this integration, see [SharePoint and OneDrive Integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration).
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/redemption-experience.md
When you add a guest user to your directory, the guest user account has a consen
> [!IMPORTANT] > - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up for their custom or line-of-business applications, authentication with Google identities won't work until authentications are moved to system web-views. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support). > - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
-> - **Starting November 1, 2021**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. To minimize disruptions during the holidays and deployment lockdowns, the majority of tenants will see changes rolled out in January 2022. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+> - **As of November 1, 2021**, we began rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft no longer creates new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. To minimize disruptions during the holidays and deployment lockdowns, the majority of tenants will see changes rolled out in January 2022. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
## Redemption and sign-in through a common endpoint
If you see an error that requires admin consent while accessing an application,
- [Add Azure Active Directory B2B collaboration users in the Azure portal](add-users-administrator.md) - [How do information workers add B2B collaboration users to Azure Active Directory?](add-users-information-worker.md) - [Add Azure Active Directory B2B collaboration users by using PowerShell](customize-invitation-api.md#powershell)-- [Leave an organization as a guest user](leave-the-organization.md)
+- [Leave an organization as a guest user](leave-the-organization.md)
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/troubleshoot.md
Previously updated : 10/21/2021 Last updated : 01/31/2022 tags: active-directory
Here are some remedies for common problems with Azure Active Directory (Azure AD
> > - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up for their custom or line-of-business applications, authentication with Google identities won't work until authentications are moved to system web-views. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support). > - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
- > - **Starting November 1, 2021**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. To minimize disruptions during the holidays and deployment lockdowns, the majority of tenants will see changes rolled out in January 2022. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+ > - **As of November 1, 2021**, we began rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft no longer creates new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. To minimize disruptions during the holidays and deployment lockdowns, the majority of tenants will see changes rolled out in January 2022. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+
+## An error similar to "Failure to update policy due to object limit" appears while configuring cross-tenant access settings
+
+While configuring [cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md), if you receive an error that says "Failure to update policy due to object limit," you've reached the policy object limit of 25 KB. We're working toward increasing this limit. To calculate how close the current policy is to this limit, do the following:
+
+1. Open Microsoft Graph Explorer and run the following:
+
+ `GET https://graph.microsoft.com/beta/policies/crosstenantaccesspolicy`
+
+1. Copy the entire JSON response and save it as a text file, for example `policyobject.txt`.
+
+1. Open PowerShell and run the following script, substituting the file path in the first line with the location of your text file:
+
+```powershell
+# Read the saved policy JSON as a single raw string
+$policy = Get-Content "C:\policyobject.txt" -Raw
+$maxSize = 1024*25
+$size = [System.Text.Encoding]::UTF8.GetByteCount($policy)
+Write-Host "Remaining bytes available in policy object"
+$maxSize - $size
+Write-Host "Is the current policy within limits?"
+if ($size -le $maxSize) { "valid" } else { "invalid" }
+```
+
+## Users can no longer read email encrypted with Microsoft Rights Management Service (OME)
+
+When [configuring cross-tenant access settings](cross-tenant-access-settings-b2b-collaboration.md), if you block access to all apps by default, users will be unable to read emails encrypted with Microsoft Rights Management Service (also known as OME). To avoid this issue, we recommend configuring your outbound settings to allow your users to access this app ID: 00000012-0000-0000-c000-000000000000. If this is the only application you allow, access to all other apps will be blocked by default.
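+
+As a sketch under stated assumptions (the property and value names follow the cross-tenant access policy schema for B2B collaboration settings and may vary by API version), the outbound policy fragment that allows only this app might look like:
+
+```json
+{
+  "b2bCollaborationOutbound": {
+    "applications": {
+      "accessType": "allowed",
+      "targets": [
+        { "target": "00000012-0000-0000-c000-000000000000", "targetType": "application" }
+      ]
+    }
+  }
+}
+```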
## I've added an external user but do not see them in my Global Address Book or in the people picker
Let's say you inadvertently invite a guest user with an email address that match
## Next steps
-[Get support for B2B collaboration](../fundamentals/active-directory-troubleshooting-support-howto.md)
+[Get support for B2B collaboration](../fundamentals/active-directory-troubleshooting-support-howto.md)
active-directory Use Dynamic Groups https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/use-dynamic-groups.md
The following image shows the rule syntax for a dynamic group modified to includ
- [B2B collaboration user properties](user-properties.md) - [Adding a B2B collaboration user to a role](./add-users-administrator.md)-- [Conditional Access for B2B collaboration users](conditional-access.md)
+- [Conditional Access for B2B collaboration users](authentication-conditional-access.md)
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/user-properties.md
Previously updated : 01/04/2022 Last updated : 01/31/2022 - # Properties of an Azure Active Directory B2B collaboration user
-This article describes the properties and states of an invited Azure Active Directory B2B (Azure AD B2B) collaboration user object both before and after invitation redemption. An Azure AD B2B collaboration user is an external user, typically from a partner organization, that you invite to sign into your Azure AD organization using their own credentials. This B2B collaboration user (also generally referred to as a *guest user*) can then access the apps and resources you want to share with them. A user object is created for the B2B collaboration user in the same directory as your employees. B2B collaboration user objects have limited privileges in your directory by default, and they can be managed like employees, added to groups, and so on.
+B2B collaboration is a capability of Azure AD External Identities that lets you collaborate with users and partners outside of your organization. With B2B collaboration, an external user is invited to sign in to your Azure AD organization using their own credentials. This B2B collaboration user can then access the apps and resources you want to share with them. A user object is created for the B2B collaboration user in the same directory as your employees. B2B collaboration user objects have limited privileges in your directory by default, and they can be managed like employees, added to groups, and so on. This article discusses the properties of this user object and ways to manage it.
-Depending on the inviting organization's needs, an Azure AD B2B collaboration user can be in one of the following account states:
+The following table describes B2B collaboration users based on how they authenticate (internally or externally) and their relationship to your organization (guest or member).
-- State 1: Homed in an external instance of Azure AD and represented as a guest user in the inviting organization. In this case, the B2B user signs in by using an Azure AD account that belongs to the invited tenant. If the partner organization doesn't use Azure AD, the guest user in Azure AD is still created. The requirements are that they redeem their invitation and Azure AD verifies their email address. This arrangement is also called a just-in-time (JIT) tenancy, a "viral" tenancy, or an unmanaged Azure AD tenancy.
+![Diagram showing B2B collaboration users](media/user-properties/table-user-properties.png)
- > [!IMPORTANT]
- > **Starting November 1, 2021**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. To minimize disruptions during the holidays and deployment lockdowns, the majority of tenants will see changes rolled out in January 2022. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+- **External guest**: Most users who are commonly considered external users or guests fall into this category. This B2B collaboration user has an account in an external Azure AD organization or an external identity provider (such as a social identity), and they have guest-level permissions in the resource organization. The user object created in the resource Azure AD directory has a UserType of Guest.
+- **External member**: This B2B collaboration user has an account in an external Azure AD organization or an external identity provider (such as a social identity) and member-level access to resources in your organization. This scenario is common in organizations consisting of multiple tenants, where users are considered part of the larger organization and need member-level access to resources in the organization's other tenants. The user object created in the resource Azure AD directory has a UserType of Member.
+- **Internal guest**: Before Azure AD B2B collaboration was available, it was common to collaborate with distributors, suppliers, vendors, and others by setting up internal credentials for them and designating them as guests by setting the user object UserType to Guest. If you have internal guest users like these, you can invite them to use B2B collaboration instead so they can use their own credentials, allowing their external identity provider to manage authentication and their account lifecycle.
+- **Internal member**: These users are generally considered employees of your organization. The user authenticates internally via Azure AD, and the user object created in the resource Azure AD directory has a UserType of Member.
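The four categories above reduce to a simple lookup on two facts: where the user authenticates and their UserType. As an illustrative sketch only (these category names are descriptive labels from this article, not Azure AD API values):

```python
def classify_b2b_user(authenticates_externally: bool, user_type: str) -> str:
    """Map where a user authenticates plus their UserType to the
    category described above. Illustrative helper, not an Azure AD API."""
    if user_type not in ("Guest", "Member"):
        raise ValueError(f"unknown UserType: {user_type}")
    scope = "External" if authenticates_externally else "Internal"
    return f"{scope} {user_type.lower()}"
```

For example, a partner with their own Azure AD account and guest-level permissions maps to "External guest", while an employee maps to "Internal member".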
-- State 2: Homed in a Microsoft or other account and represented as a guest user in the host organization. In this case, the guest user signs in with a Microsoft account or a social account (google.com or similar). The invited user's identity is created as a Microsoft account in the inviting organizationΓÇÖs directory during offer redemption.
+> [!IMPORTANT]
+> **As of November 1, 2021**, we began rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft stopped creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
-- State 3: Homed in the host organization's on-premises Active Directory and synced with the host organization's Azure AD. You can use Azure AD Connect to sync the partner accounts to the cloud as Azure AD B2B users with UserType = Guest. See [Grant locally-managed partner accounts access to cloud resources](hybrid-on-premises-to-cloud.md).--- State 4: Homed in the host organization's Azure AD with UserType = Guest and credentials that the host organization manages.-
- ![Diagram depicting the four user states](media/user-properties/redemption-diagram.png)
+## Invitation redemption
Now, let's see what an Azure AD B2B collaboration user looks like in Azure AD.

### Before invitation redemption
-State 1 and State 2 accounts are the result of inviting guest users to collaborate by using the guest users' own credentials. When the invitation is initially sent to the guest user, an account is created in your tenant. This account doesnΓÇÖt have any credentials associated with it because authentication is performed by the guest user's identity provider.
-
-The **Issuer** property for the guest user account in your directory is set to the host's organization domain until the guest redeems their invitation. In the portal, the **Invitation accepted** property in the invited userΓÇÖs Azure AD portal profile will be set to `No` and querying for **externalUserState** using the Microsoft Graph API will return `Pending Acceptance`.
+B2B collaboration user accounts are the result of inviting guest users to collaborate by using the guest users' own credentials. When the invitation is initially sent to the guest user, an account is created in your tenant. This account doesn't have any credentials associated with it because authentication is performed by the guest user's identity provider. The **Issuer** property for the guest user account in your directory is set to the host's organization domain until the guest redeems their invitation. In the portal, the **Invitation accepted** property in the invited user's Azure AD portal profile will be set to **No**, and querying for `externalUserState` using the Microsoft Graph API will return `PendingAcceptance`.
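As a minimal sketch, pending invitations can be found by filtering on `externalUserState` with the Microsoft Graph API. The `/users` endpoint and the property are real Graph API shapes; this helper only builds the request URL and omits authentication entirely:

```python
from urllib.parse import quote

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def pending_guests_url() -> str:
    """Build a Graph query URL for guests who haven't redeemed
    their invitation yet. externalUserState is 'PendingAcceptance'
    until redemption, then 'Accepted'. Authentication not shown."""
    filter_expr = "externalUserState eq 'PendingAcceptance'"
    return f"{GRAPH_BASE}/users?$filter={quote(filter_expr)}"
```

You would send this URL as a GET request with a bearer token obtained for Microsoft Graph.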
-![Screenshot showing user properties before offer redemption](media/user-properties/before-redemption.png)
+![Screenshot of user profile before redemption](media/user-properties/before-redemption.png)
### After invitation redemption
-After the guest user accepts the invitation, the **Issuer** property is updated based on the guest userΓÇÖs identity provider.
+After the B2B collaboration user accepts the invitation, the **Issuer** property is updated based on the userΓÇÖs identity provider.
-For guest users in State 1, the **issuer** is **External Azure AD**.
+If the B2B collaboration user is using credentials from another Azure AD organization, the **Issuer** is **External Azure AD**.
-![State 1 guest user after offer redemption](media/user-properties/after-redemption-state-1.png)
+![Screenshot of user profile after redemption](media/user-properties/after-redemption-state-1.png)
-For guest users in State 2, the **issuer** is **Microsoft Account**.
+If the B2B collaboration user is using a Microsoft account or credentials from another external identity provider, the **Issuer** reflects the identity provider, for example **Microsoft Account**, **google.com**, or **facebook.com**.
-![State 2 guest user after offer redemption](media/user-properties/after-redemption-state-2.png)
+![Screenshot of user profile showing an external identity provider](media/user-properties/after-redemption-state-2.png)
-For guest users in State 3 and State 4, the **issuer** property is set to the hostΓÇÖs organization domain. The **Directory synced** property in the Azure portal or **onPremisesSyncEnabled** in Microsoft Graph can be used to distinguish between state 3 and 4, yes indicating that the user is homed in the hostΓÇÖs on premises Active Directory.
+For external users who are using internal credentials, the **Issuer** property is set to the host's organization domain. The **Directory synced** property is **Yes** if the account is homed in the organization's on-premises Active Directory and synced with Azure AD, or **No** if the account is a cloud-only Azure AD account. The directory sync information is also available via the `onPremisesSyncEnabled` property in Microsoft Graph.
## Key properties of the Azure AD B2B collaboration user
-### UserType
+
+### User Principal Name
+
+The user principal name for a B2B collaboration user object contains an #EXT# identifier.
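For illustration, the #EXT# user principal name follows a predictable shape: the `@` in the invited address becomes `_` and the host tenant's domain is appended. The tenant domains below are hypothetical examples; this is a sketch of the naming convention, not an Azure AD API:

```python
def b2b_upn(invited_email: str, host_domain: str) -> str:
    """Sketch of the alternate-ID UPN Azure AD assigns a B2B
    collaboration user: '@' in the invited address becomes '_',
    then '#EXT#@' plus the host tenant domain is appended."""
    local = invited_email.replace("@", "_")
    return f"{local}#EXT#@{host_domain}"
```

So inviting `alice@fabrikam.com` into the hypothetical `contoso.onmicrosoft.com` tenant would yield `alice_fabrikam.com#EXT#@contoso.onmicrosoft.com`.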
+### User type
+ This property indicates the relationship of the user to the host tenancy. This property can have two values:-- Member: This value indicates an employee of the host organization and a user in the organization's payroll. For example, this user expects to have access to internal-only sites. This user is not considered an external collaborator. -- Guest: This value indicates a user who isn't considered internal to the company, such as an external collaborator, partner, or customer. Such a user isn't expected to receive a CEO's internal memo or receive company benefits, for example.
+- **Member**: This value indicates an employee of the host organization and a user in the organization's payroll. For example, this user expects to have access to internal-only sites. This user isn't considered an external collaborator.
- > [!NOTE]
- > The UserType has no relation to how the user signs in, the directory role of the user, and so on. This property simply indicates the user's relationship to the host organization and allows the organization to enforce policies that depend on this property.
+- **Guest**: This value indicates a user who isn't considered internal to the company, such as an external collaborator, partner, or customer. Such a user isn't expected to receive a CEO's internal memo or receive company benefits, for example.
-For pricing related details please reference [Azure Active Directory pricing](https://azure.microsoft.com/pricing/details/active-directory).
+> [!NOTE]
+> The UserType has no relation to how the user signs in, the directory role of the user, and so on. This property simply indicates the user's relationship to the host organization and allows the organization to enforce policies that depend on this property.
### Issuer
-This property indicates the user's primary identity provider. A user can have several identity providers which can be viewed by selecting issuer in the user's profile or by querying the property via Microsoft Graph API.
+
+This property indicates the user's primary identity provider. A user can have several identity providers, which can be viewed by selecting the issuer in the user's profile or by querying the `identities` property via the Microsoft Graph API.
+
+> [!NOTE]
> Issuer and UserType are independent properties. A value of issuer does not imply a particular value for UserType.
Issuer property value | Sign-in state
- | -
-External Azure AD organization | This user is homed in an external organization and authenticates by using an Azure AD account that belongs to the other organization. This type of sign-in corresponds to State 1.
-Microsoft account | This user is homed in a Microsoft account and authenticates by using a Microsoft account. This type of sign-in corresponds to State 2.
-{HostΓÇÖs domain} | This user authenticates by using an Azure AD account that belongs to this organization. This type of sign-in corresponds to State 4.
-google.com | This user has a Gmail account and has signed up by using self-service to the other organization. This type of sign-in corresponds to State 2.
-facebook.com | This user has a Facebook account and has signed up by using self-service to the other organization. This type of sign-in corresponds to State 2.
-mail | This user has an email address that does not match with verified Azure AD or SAML/WS-Fed domains, and is not a Gmail address or a Microsoft account. This type of sign-in corresponds to State 4.
-phone | This user has an email address that does not match a verified Azure AD domain or a SAML/WS-Fed domain, and is not a Gmail address or Microsoft account. This type of sign-in corresponds to State 4.
-{issuer URI} | This user is homed in an external organization that does not use Azure Active Directory as their identity provider, but instead uses a SAML/WS-Fed based identity providers. The issuer URI is shown when the issuer field is clicked. This type of sign-in corresponds to State 2.
-
-### Directory synced (or 'onPremisesSyncEnabled' in MS Graph)
+External Azure AD | This user is homed in an external organization and authenticates by using an Azure AD account that belongs to the other organization.
+Microsoft account | This user is homed in a Microsoft account and authenticates by using a Microsoft account.
+{host's domain} | This user authenticates by using an Azure AD account that belongs to this organization.
+google.com | This user has a Gmail account and has signed up by using self-service to the other organization.
+facebook.com | This user has a Facebook account and has signed up by using self-service to the other organization.
+mail | This user has an email address that doesn't match a verified Azure AD or SAML/WS-Fed domain, and isn't a Gmail address or a Microsoft account.
+phone | This user has an email address that doesn't match a verified Azure AD domain or a SAML/WS-Fed domain, and is not a Gmail address or Microsoft account.
+{issuer URI} | This user is homed in an external organization that doesn't use Azure Active Directory as their identity provider, but instead uses a SAML/WS-Fed-based identity provider. The issuer URI is shown when the issuer field is clicked.
-This property indicates if the user is being synced with on-premises Active Directory and is authenticated on-premises. If the value of this property is 'yes', this corresponds to state 3.
+### Directory synced
- > [!NOTE]
- > Issuer and UserType are independent properties. A value of issuer does not imply a particular value for UserType.
The **Directory synced** property indicates whether the user is being synced with on-premises Active Directory and is authenticated on-premises. This property is **Yes** if the account is homed in the organization's on-premises Active Directory and synced with Azure AD, or **No** if the account is a cloud-only Azure AD account. In Microsoft Graph, the Directory synced property corresponds to `onPremisesSyncEnabled`.
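Taken together, **Issuer** and `onPremisesSyncEnabled` tell you where an account is homed. A hedged sketch of that interpretation (the returned strings are descriptive labels from this article, not Azure AD API values; Graph returns `null` for `onPremisesSyncEnabled` on cloud-only accounts):

```python
def describe_account(issuer, host_domain, on_premises_sync_enabled):
    """Interpret Issuer plus the onPremisesSyncEnabled Graph
    property as described above. Illustrative only."""
    if issuer != host_domain:
        # Anything other than the host's domain means an external
        # identity provider authenticates this user.
        return "external identity, issued by " + issuer
    if on_premises_sync_enabled:
        return "internal identity, synced from on-premises Active Directory"
    return "internal cloud-only identity"
```

For example, an issuer of **External Azure AD** indicates an externally authenticated user, while the host's own domain plus a directory-synced flag indicates an on-premises account synced to the cloud.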
## Can Azure AD B2B users be added as members instead of guests?
-Typically, an Azure AD B2B user and guest user are synonymous. Therefore, an Azure AD B2B collaboration user is added as a user with UserType = Guest by default. However, in some cases, the partner organization is a member of a larger organization to which the host organization also belongs. If so, the host organization might want to treat users in the partner organization as members instead of guests. Use the Azure AD B2B Invitation Manager APIs to add or invite a user from the partner organization to the host organization as a member.
+Typically, an Azure AD B2B user and guest user are synonymous. Therefore, an Azure AD B2B collaboration user is added as a user with **UserType** set to **Guest** by default. However, in some cases, the partner organization is a member of a larger organization to which the host organization also belongs. If so, the host organization might want to treat users in the partner organization as members instead of guests. Use the Azure AD B2B Invitation Manager APIs to add or invite a user from the partner organization to the host organization as a member.
## Filter for guest users in the directory

![Screenshot showing the filter for guest users](media/user-properties/filter-guest-users.png)

## Convert UserType
-It's possible to convert UserType from Member to Guest and vice-versa by using PowerShell. However, the UserType property represents the user's relationship to the organization. Therefore, you should change this property only if the relationship of the user to the organization changes. If the relationship of the user changes, should the user principal name (UPN) change? Should the user continue to have access to the same resources? Should a mailbox be assigned?
+
+It's possible to convert UserType from Member to Guest and vice-versa by editing the user's profile in the Azure portal or by using PowerShell. However, the UserType property represents the user's relationship to the organization. Therefore, you should change this property only if the relationship of the user to the organization changes. If the relationship of the user changes, should the user principal name (UPN) change? Should the user continue to have access to the same resources? Should a mailbox be assigned?
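Besides the portal and PowerShell, the conversion can be done with a Microsoft Graph PATCH request. The `PATCH /users/{id}` endpoint and the `userType` property are real Graph API shapes; this sketch only builds the request and omits authentication, and the object ID you pass in is whatever your tenant uses:

```python
import json

def usertype_patch(user_id, new_type):
    """Build the Graph URL and JSON body to change a user's
    UserType. Sketch only; sending the request and acquiring a
    token are omitted."""
    if new_type not in ("Member", "Guest"):
        raise ValueError("UserType must be 'Member' or 'Guest'")
    url = f"https://graph.microsoft.com/v1.0/users/{user_id}"
    body = json.dumps({"userType": new_type})
    return url, body
```

You would send the returned body with the `Content-Type: application/json` header and an appropriately privileged bearer token.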
## Remove guest user limitations

There may be cases where you want to give your guest users higher privileges. You can add a guest user to any role, and you can even turn off the default guest user restrictions in the directory so that a guest user has the same permissions as a member user.
If a guest user accepts your invitation and they subsequently change their email
* [What is Azure AD B2B collaboration?](what-is-b2b.md) * [B2B collaboration user tokens](user-token.md)
-* [B2B collaboration user claims mapping](claims-mapping.md)
+* [B2B collaboration user claims mapping](claims-mapping.md)
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/what-is-b2b.md
Title: What is B2B collaboration in Azure Active Directory?
+ Title: B2B collaboration overview - Azure AD
description: Azure Active Directory B2B collaboration supports guest user access so you can securely share resources and collaborate with external partners. Previously updated : 10/21/2021 Last updated : 01/31/2022
-# What is guest user access in Azure Active Directory B2B?
+# B2B collaboration overview
+
+Azure Active Directory (Azure AD) B2B collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. With B2B collaboration, you can securely share your company's applications and services with guest users from any other organization, while maintaining control over your own corporate data. Work safely and securely with external partners, large or small, even if they don't have Azure AD or an IT department.
+
+![Diagram illustrating B2B collaboration](media/what-is-b2b/b2b-collaboration-overview.png)
+
+A simple invitation and redemption process lets partners use their own credentials to access your company's resources. You can also enable self-service sign-up user flows to let external users sign up for apps or resources themselves. Once the external user has redeemed their invitation or completed sign-up, they're represented in your directory as a [user object](user-properties.md). B2B collaboration user objects are typically given a user type of "guest" and can be identified by the #EXT# extension in their user principal name.
+
+Developers can use Azure AD business-to-business APIs to customize the invitation process or write applications like self-service sign-up portals. For licensing and pricing information related to guest users, refer to [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).
-Azure Active Directory (Azure AD) business-to-business (B2B) collaboration is a feature within External Identities that lets you invite guest users to collaborate with your organization. With B2B collaboration, you can securely share your company's applications and services with guest users from any other organization, while maintaining control over your own corporate data. Work safely and securely with external partners, large or small, even if they don't have Azure AD or an IT department. A simple invitation and redemption process lets partners use their own credentials to access your company's resources. Developers can use Azure AD business-to-business APIs to customize the invitation process or write applications like self-service sign-up portals. For licensing and pricing information related to guest users, refer to [Azure Active Directory External Identities pricing](https://azure.microsoft.com/pricing/details/active-directory/external-identities/).
> [!IMPORTANT]
->
-> - **Starting July 12, 2021**, if Azure AD B2B customers set up new Google integrations for use with self-service sign-up for their custom or line-of-business applications, authentication with Google identities wonΓÇÖt work until authentications are moved to system web-views. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
-> - **Starting September 30, 2021**, Google is [deprecating embedded web-view sign-in support](https://developers.googleblog.com/2016/08/modernizing-oauth-interactions-in-native-apps.html). If your apps authenticate users with an embedded web-view and you're using Google federation with [Azure AD B2C](../../active-directory-b2c/identity-provider-google.md) or Azure AD B2B for [external user invitations](google-federation.md) or [self-service sign-up](identity-providers.md), Google Gmail users won't be able to authenticate. [Learn more](google-federation.md#deprecation-of-web-view-sign-in-support).
-> - **Starting November 1, 2021**, we'll begin rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. As part of this change, Microsoft will stop creating new, unmanaged ("viral") Azure AD accounts and tenants during B2B collaboration invitation redemption. To minimize disruptions during the holidays and deployment lockdowns, the majority of tenants will see changes rolled out in January 2022. We're enabling the email one-time passcode feature because it provides a seamless fallback authentication method for your guest users. However, if you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
+> **As of November 1, 2021**, Microsoft no longer supports the redemption of invitations by creating unmanaged Azure AD accounts and tenants for B2B collaboration scenarios. At that time, we began rolling out a change to turn on the email one-time passcode feature for all existing tenants and enable it by default for new tenants. If you don't want to allow this feature to turn on automatically, you can [disable it](one-time-passcode.md#disable-email-one-time-passcode).
## Collaborate with any partner using their identities With Azure AD B2B, the partner uses their own identity management solution, so there is no external administrative overhead for your organization. Guest users sign in to your apps and services with their own work, school, or social identities. -- The partner uses their own identities and credentials; Azure AD is not required.
+- The partner uses their own identities and credentials, whether or not they have an Azure AD account.
- You don't need to manage external accounts or passwords.-- You don't need to sync accounts or manage account lifecycles.
+- You don't need to sync accounts or manage account lifecycles.
+
+## Manage external access with inbound and outbound settings
+
+B2B collaboration is enabled by default, but comprehensive admin settings let you control your B2B collaboration with external partners and organizations:
+
+- For B2B collaboration with other Azure AD organizations, you can use [cross-tenant access settings](cross-tenant-access-overview.md) to manage inbound and outbound B2B collaboration and scope access to specific users, groups, and applications. You can set a default configuration that applies to all external organizations, and then create individual, organization-specific settings as needed. Using cross-tenant access settings, you can also trust multi-factor authentication (MFA) and device claims (compliant claims and hybrid Azure AD joined claims) from other Azure AD organizations.
+
+- You can use [external collaboration settings](external-collaboration-settings-configure.md) to limit who can invite external users, allow or block B2B specific domains, and set restrictions on guest user access to your directory.
## Easily invite guest users from the Azure AD portal
As an administrator, you can easily add guest users to your organization in the
![Screenshot showing the Review permissions page](media/what-is-b2b/consentscreen.png)
+## Allow self-service sign-up
+
+With a self-service sign-up user flow, you can create a sign-up experience for external users who want to access your apps. As part of the sign-up flow, you can provide options for different social or enterprise identity providers, and collect information about the user. Learn about [self-service sign-up and how to set it up](self-service-sign-up-overview.md).
+
+You can also use [API connectors](api-connectors-overview.md) to integrate your self-service sign-up user flows with external cloud systems. You can connect with custom approval workflows, perform identity verification, validate user-provided information, and more.
+
+![Screenshot showing the user flows page](media/what-is-b2b/self-service-sign-up-user-flow-overview.png)
## Use policies to securely share your apps and services
-You can use authorization policies to protect your corporate content. Conditional Access policies, such as multi-factor authentication, can be enforced:
+You can use authentication and authorization policies to protect your corporate content. Conditional Access policies, such as multi-factor authentication, can be enforced:
- At the tenant level. - At the application level.
Azure AD supports external identity providers like Facebook, Microsoft accounts,
![Screenshot showing the Identity providers page](media/what-is-b2b/identity-providers.png) -
-## Create a self-service sign-up user flow
-
-With a self-service sign-up user flow, you can create a sign-up experience for external users who want to access your apps. As part of the sign-up flow, you can provide options for different social or enterprise identity providers, and collect information about the user. Learn about [self-service sign-up and how to set it up](self-service-sign-up-overview.md).
-
-You can also use [API connectors](api-connectors-overview.md) to integrate your self-service sign-up user flows with external cloud systems. You can connect with custom approval workflows, perform identity verification, validate user-provided information, and more.
-
-![Screenshot showing the user flows page](media/what-is-b2b/self-service-sign-up-user-flow-overview.png)
-<!--TODO: Add screenshot with API connectors -->
- ## Next steps - [External Identities pricing](external-identities-pricing.md) - [Add B2B collaboration guest users in the portal](add-users-administrator.md)-- [Understand the invitation redemption process](redemption-experience.md)
+- [Understand the invitation redemption process](redemption-experience.md)
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/external-identities/whats-new-docs.md
Welcome to what's new in Azure Active Directory external identities documentatio
- [Tutorial: Enforce multi-factor authentication for B2B guest users](b2b-tutorial-require-mfa.md) - [Grant B2B users in Azure AD access to your on-premises applications](hybrid-cloud-to-on-premises.md) - [Azure Active Directory external identities: What's new](whats-new-docs.md)-- [Conditional Access for B2B collaboration users](conditional-access.md)
+- [Conditional Access for B2B collaboration users](authentication-conditional-access.md)
## October 2021
Welcome to what's new in Azure Active Directory external identities documentatio
### Updated articles - [Identity Providers for External Identities](identity-providers.md)-- [Enable B2B external collaboration and manage who can invite guests](delegate-invitations.md)
+- [Enable B2B external collaboration and manage who can invite guests](external-collaboration-settings-configure.md)
- [Properties of an Azure Active Directory B2B collaboration user](user-properties.md) - [Add Google as an identity provider for B2B guest users](google-federation.md) - [Azure Active Directory (Azure AD) identity provider for External Identities](azure-ad-account.md)
Welcome to what's new in Azure Active Directory external identities documentatio
- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md) - [Add an API connector to a user flow](self-service-sign-up-add-api-connector.md) - [Add a custom approval workflow to self-service sign-up](self-service-sign-up-add-approvals.md)-- [What are External Identities in Azure Active Directory?](compare-with-b2c.md)
+- [What are External Identities in Azure Active Directory?](external-identities-overview.md)
- [Billing model for Azure AD External Identities](external-identities-pricing.md) - [Dynamic groups and Azure Active Directory B2B collaboration](use-dynamic-groups.md) - [What is guest user access in Azure Active Directory B2B?](what-is-b2b.md) - [Use API connectors to customize and extend self-service sign-up](api-connectors-overview.md) - [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md) - [The elements of the B2B collaboration invitation email - Azure Active Directory](invitation-email-elements.md)-- [Conditional Access for B2B collaboration users](conditional-access.md)
+- [Conditional Access for B2B collaboration users](authentication-conditional-access.md)
## June 2021
Welcome to what's new in Azure Active Directory external identities documentatio
- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md) - [Properties of an Azure Active Directory B2B collaboration user](user-properties.md) - [What is guest user access in Azure Active Directory B2B?](what-is-b2b.md)-- [Enable B2B external collaboration and manage who can invite guests](delegate-invitations.md)
+- [Enable B2B external collaboration and manage who can invite guests](external-collaboration-settings-configure.md)
- [Billing model for Azure AD External Identities](external-identities-pricing.md) - [Example: Configure SAML/WS-Fed IdP federation with Active Directory Federation Services (AD FS) (preview)](direct-federation-adfs.md) - [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)
Welcome to what's new in Azure Active Directory external identities documentatio
- [The elements of the B2B collaboration invitation email - Azure Active Directory](invitation-email-elements.md) - [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md) - [Quickstart: Add a guest user with PowerShell](b2b-quickstart-invite-powershell.md)-- [Conditional Access for B2B collaboration users](conditional-access.md)
+- [Conditional Access for B2B collaboration users](authentication-conditional-access.md)
## March 2021
Welcome to what's new in Azure Active Directory external identities documentatio
### Updated articles - [Azure Active Directory B2B best practices](b2b-fundamentals.md)-- [Enable B2B external collaboration and manage who can invite guests](delegate-invitations.md)
+- [Enable B2B external collaboration and manage who can invite guests](external-collaboration-settings-configure.md)
- [Azure Active Directory B2B collaboration FAQs](faq.yml) - [Email one-time passcode authentication](one-time-passcode.md) - [Azure Active Directory B2B collaboration invitation redemption](redemption-experience.md)
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/5-secure-access-b2b.md
Getting your collaboration under control is key to securing external access to y
* [Understood how groups and security work together](4-secure-access-groups.md)
-Once you've done those things, you're ready to move into controlled collaboration. This article will guide you to move all your external collaboration into [Azure Active Directory B2B collaboration](../external-identities/what-is-b2b.md) (Azure AD B2B). Azure AD B2B is a feature of [Azure AD External Identities](../external-identities/compare-with-b2c.md).
+Once you've done those things, you're ready to move into controlled collaboration. This article will guide you to move all your external collaboration into [Azure Active Directory B2B collaboration](../external-identities/what-is-b2b.md) (Azure AD B2B). Azure AD B2B is a feature of [Azure AD External Identities](../external-identities/external-identities-overview.md).
## Control who your organization collaborates with
When you enable Azure AD B2B, you enable the ability to invite guest users via d
Determine who can invite guest users to access resources.
-* The most restrictive setting is to allow only administrators and those users granted the [guest inviter role](../external-identities/delegate-invitations.md) to invite guests.
+* The most restrictive setting is to allow only administrators and those users granted the [guest inviter role](../external-identities/external-collaboration-settings-configure.md) to invite guests.
* If your security requirements allow it, we recommend allowing all users with a userType of Member to invite guests.
active-directory Active Directory Compare Azure Ad To Ad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-compare-azure-ad-to-ad.md
Most IT administrators are familiar with Active Directory Domain Services concepts.
| Traditional and legacy apps| Most on-premises apps use LDAP, Windows-Integrated Authentication (NTLM and Kerberos), or Header-based authentication to control access to users.| Azure AD can provide access to these types of on-premises apps using [Azure AD application proxy](../app-proxy/application-proxy.md) agents running on-premises. Using this method Azure AD can authenticate Active Directory users on-premises using Kerberos while you migrate or need to coexist with legacy apps. | | SaaS apps|Active Directory doesn't support SaaS apps natively and requires federation system, such as AD FS.|SaaS apps supporting OAuth2, SAML, and WS-\* authentication can be integrated to use Azure AD for authentication. | | Line of business (LOB) apps with modern authentication|Organizations can use AD FS with Active Directory to support LOB apps requiring modern authentication.| LOB apps requiring modern authentication can be configured to use Azure AD for authentication. |
-| Mid-tier/Daemon services|Services running in on-premises environments normally use AD service accounts or group Managed Service Accounts (gMSA) to run. These apps will then inherit the permissions of the service account.| Azure AD provides [managed identities](../managed-identities-azure-resources/index.yml) to run other workloads in the cloud. The lifecycle of these identities is managed by Azure AD and is tied to the resource provider can't be used for other purposes to gain backdoor access.|
+| Mid-tier/Daemon services|Services running in on-premises environments normally use AD service accounts or group Managed Service Accounts (gMSA) to run. These apps will then inherit the permissions of the service account.| Azure AD provides [managed identities](../managed-identities-azure-resources/index.yml) to run other workloads in the cloud. The lifecycle of these identities is managed by Azure AD and is tied to the resource provider and it can't be used for other purposes to gain backdoor access.|
| **Devices**||| | Mobile|Active Directory doesn't natively support mobile devices without third-party solutions.| Microsoft's mobile device management solution, Microsoft Intune, is integrated with Azure AD. Microsoft Intune provides device state information to the identity system to evaluate during authentication. | | Windows desktops|Active Directory provides the ability to domain join Windows devices to manage them using Group Policy, System Center Configuration Manager, or other third-party solutions.|Windows devices can be [joined to Azure AD](../devices/index.yml). Conditional access can check if a device is Azure AD joined as part of the authentication process. Windows devices can also be managed with [Microsoft Intune](/intune/what-is-intune). In this case, conditional access will consider whether a device is compliant (for example, up-to-date security patches and virus signatures) before allowing access to the apps.|
active-directory Active Directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md
Below are the user and group settings that can be locked down if there isn't an
#### User settings -- **External Users** - external collaboration can happen organically in the enterprise with services like Teams, Power BI, SharePoint Online, and Azure Information Protection. If you have explicit constraints to control user-initiated external collaboration, it is recommended you enable external users by using [Azure AD Entitlement management](../governance/entitlement-management-overview.md) or a controlled operation such as through your help desk. If you don't want to allow organic external collaboration for services, you can [block members from inviting external users completely](../external-identities/delegate-invitations.md). Alternatively, you can also [allow or block specific domains](../external-identities/allow-deny-list.md) in external user invitations.
+- **External Users** - external collaboration can happen organically in the enterprise with services like Teams, Power BI, SharePoint Online, and Azure Information Protection. If you have explicit constraints to control user-initiated external collaboration, it is recommended you enable external users by using [Azure AD Entitlement management](../governance/entitlement-management-overview.md) or a controlled operation such as through your help desk. If you don't want to allow organic external collaboration for services, you can [block members from inviting external users completely](../external-identities/external-collaboration-settings-configure.md). Alternatively, you can also [allow or block specific domains](../external-identities/allow-deny-list.md) in external user invitations.
- **App Registrations** - when App registrations are enabled, end users can onboard applications themselves and grant access to their data. A typical example of App registration is users enabling Outlook plug-ins, or voice assistants such as Alexa and Siri to read their email and calendar or send emails on their behalf. If the customer decides to turn off App registration, the InfoSec and IAM teams must be involved in the management of exceptions (app registrations that are needed based on business requirements), as they would need to register the applications with an admin account, and most likely require designing a process to operationalize the process. - **Administration Portal** - organizations can lock down the Azure AD blade in the Azure portal so that non-administrators can't access Azure AD management in the Azure portal and get confused. Go to the user settings in the Azure AD management portal to restrict access:
active-directory Custom Security Attributes Add https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/custom-security-attributes-add.md
An attribute set is a collection of related attributes. All custom security attr
Select **Yes** to require that this custom security attribute be assigned values from a predefined values list. Select **No** to allow this custom security attribute to be assigned user-defined values or potentially predefined values.
- You can only add the predefined values after you add the custom security attribute by using the Edit attribute page. For more information, see [Edit a custom security attribute](#edit-a-custom-security-attribute).
+1. If **Only allow predefined values to be assigned** is **Yes**, click **Add value** to add predefined values.
-1. When finished, click **Add**.
+ An active value is available for assignment to objects. A value that is not active is defined, but not yet available for assignment.
+
+ ![Screenshot of New attribute pane with Add predefined value pane in Azure portal.](./media/custom-security-attributes-add/attribute-new-value-add.png)
+1. When finished, click **Save**.
The new custom security attribute appears in the list of custom security attributes.
Once you add a new custom security attribute, you can later edit some of the pro
1. If **Only allow predefined values to be assigned** is **Yes**, click **Add value** to add predefined values. Click an existing predefined value to change the **Is active?** setting.
- An active value is available for assignment to objects. A value that is not active is defined, but not yet available for assignment.
- ![Screenshot of Add predefined value pane in Azure portal.](./media/custom-security-attributes-add/attribute-predefined-value-add.png) ## Deactivate a custom security attribute
POST https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitio
} ```
+#### Add a custom security attribute with a list of predefined values
+
+Use the [Create customSecurityAttributeDefinition](/graph/api/directory-post-customsecurityattributedefinitions) API to add a new custom security attribute definition with a list of predefined values.
+
+- Attribute set: `Engineering`
+- Attribute: `Project`
+- Attribute data type: Collection of Strings
+- Predefined values: `Alpine`, `Baker`, `Cascade`
+
+```http
+POST https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions
+{
+ "attributeSet": "Engineering",
+ "description": "Active projects for user",
+ "isCollection": true,
+ "isSearchable": true,
+ "name": "Project",
+ "status": "Available",
+ "type": "String",
+ "usePreDefinedValuesOnly": true,
+ "allowedValues": [
+ {
+ "id": "Alpine",
+ "isActive": true
+ },
+ {
+ "id": "Baker",
+ "isActive": true
+ },
+ {
+ "id": "Cascade",
+ "isActive": true
+ }
+ ]
+}
+```
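As a quick sanity check (a sketch, not part of the committed change), the new definition can be read back by its ID, which combines the attribute set and attribute name as `Engineering_Project`, matching the ID format used by the update examples in this article. Predefined values can be listed through the `allowedValues` navigation:

```http
GET https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions/Engineering_Project

GET https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions/Engineering_Project/allowedValues
```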
+ #### Update a custom security attribute Use the [Update customSecurityAttributeDefinition](/graph/api/customsecurityattributedefinition-update) API to update a custom security attribute definition.
PATCH https://graph.microsoft.com/beta/directory/customSecurityAttributeDefiniti
} ```
+#### Update the predefined values for a custom security attribute
+
+Use the [Update customSecurityAttributeDefinition](/graph/api/customsecurityattributedefinition-update) API to update the predefined values for a custom security attribute definition.
+
+- Attribute set: `Engineering`
+- Attribute: `Project`
+- Attribute data type: Collection of Strings
+- Update predefined value: `Baker`
+- New predefined value: `Skagit`
+
+> [!NOTE]
+> For this request, you must add the **OData-Version** header and assign it the value `4.01`.
+
+```http
+PATCH https://graph.microsoft.com/beta/directory/customSecurityAttributeDefinitions/Engineering_Project
+{
+ "allowedValues@delta": [
+ {
+ "id": "Baker",
+ "isActive": false
+ },
+ {
+ "id": "Skagit",
+ "isActive": true
+ }
+ ]
+}
+```
+ #### Deactivate a custom security attribute Use the [Update customSecurityAttributeDefinition](/graph/api/customsecurityattributedefinition-update) API to deactivate a custom security attribute definition.
PATCH https://graph.microsoft.com/beta/directory/customSecurityAttributeDefiniti
No, you can't delete custom security attribute definitions. You can only [deactivate custom security attribute definitions](#deactivate-a-custom-security-attribute). Once you deactivate a custom security attribute, it can no longer be applied to the Azure AD objects. Custom security attribute assignments for the deactivated custom security attribute definition are not automatically removed. There is no limit to the number of deactivated custom security attributes. You can have 500 active custom security attribute definitions per tenant with 100 allowed predefined values per custom security attribute definition.
-**Can you add predefined values when you add a new custom security attribute?**
-
-Currently, you can only add predefined values after you defined the custom security attribute by using the [Edit attribute page](#edit-a-custom-security-attribute).
- ## Next steps - [Manage access to custom security attributes in Azure AD](custom-security-attributes-manage.md)
active-directory Custom Security Attributes Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/custom-security-attributes-overview.md
Previously updated : 01/14/2022 Last updated : 02/04/2022
If you use the Microsoft Graph API, you can use [Graph Explorer](/graph/graph-ex
Here are some of the known issues with custom security attributes: -- You can only add the predefined values after you add the custom security attribute by using the **Edit attribute** page. - Users with attribute set-level role assignments can see other attribute sets and custom security attribute definitions. - Global Administrators can read audit logs for custom security attribute definitions and assignments. - If you have an Azure AD Premium P2 license, you can't add eligible role assignments at attribute set scope. - If you have an Azure AD Premium P2 license, the **Assigned roles** page for a user does not list permanent role assignments at attribute set scope. The role assignments exist, but aren't listed.-- If you use the Microsoft Graph API, delegated and application permissions are available to both read and write (*CustomSecAttributeAssignment.ReadWrite.All* and *CustomSecAttributeDefinition.ReadWrite.All*). However, read-only permissions currently are not available. Depending on whether you have an Azure AD Premium P1 or P2 license, here are the role assignment tasks that are currently supported for custom security attribute roles:
active-directory Multi Tenant Common Considerations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/multi-tenant-common-considerations.md
Additionally, while the following CA conditions can be used, be aware of the pos
## Other access control considerations Some additional considerations when configuring access control.
-* Define [access control policies](../external-identities/conditional-access.md) to control access to resources.
+* Define [access control policies](../external-identities/authentication-conditional-access.md) to control access to resources.
* Design CA policies with guest users in mind. * Create policies specifically for guest users. * If your organization is using the [All Users] condition in your existing CA policy, this policy will affect guest users because [Guest] users are in scope of [All Users].
active-directory Multi Tenant User Management Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/multi-tenant-user-management-introduction.md
The following links provide additional information you can visit to find out mor
| [B2B and Office 365 external sharing](../external-identities/o365-external-user.md)| Explains the similarities and differences among sharing resources through B2B, Office 365, and SharePoint/OneDrive.| | [Properties on an Azure AD B2B collaboration user](../external-identities/user-properties.md)| Describes the properties and states of the B2B guest user object in Azure Active Directory (Azure AD). The description provides details before and after invitation redemption.| | [B2B user tokens](../external-identities/user-token.md)| Provides examples of the bearer tokens for a B2B guest user.|
-| [Conditional access for B2B](../external-identities/conditional-access.md)| Describes how conditional access and MFA work for guest users.|
+| [Conditional access for B2B](../external-identities/authentication-conditional-access.md)| Describes how conditional access and MFA work for guest users.|
| **How-to articles**| | | [Use PowerShell to bulk invite Azure AD B2B collaboration users](../external-identities/bulk-invite-powershell.md)| Learn how to use PowerShell to send bulk invitations to external users.| | [Enforce multifactor authentication for B2B guest users](../external-identities/b2b-tutorial-require-mfa.md)|Use conditional access and MFA policies to enforce tenant, app, or individual guest user authentication levels. |
active-directory Multi Tenant User Management Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/multi-tenant-user-management-scenarios.md
There are many ways end users can get invited to access resource tenant resource
* [Entitlement Management](../governance/entitlement-management-overview.md): Enables admins or resource owners to tie resources, allowed external organizations, guest user expiration, and access policies together in access packages. Access packages can be published to enable self-service sign-up for resource access by guest users.
-* [Azure portal ](../external-identities/add-users-administrator.md) End users given the [Guest Inviter role](../external-identities/delegate-invitations.md) can sign in to the Azure portal and invite guest users from the Users menu in Azure Active Directory.
+* [Azure portal](../external-identities/add-users-administrator.md): End users given the [Guest Inviter role](../external-identities/external-collaboration-settings-configure.md) can sign in to the Azure portal and invite guest users from the Users menu in Azure Active Directory.
-* [Programmatic (PowerShell, Graph API)](../external-identities/customize-invitation-api.md) End users given the [Guest Inviter role](../external-identities/delegate-invitations.md) can invite guest users via PowerShell or Graph API.
+* [Programmatic (PowerShell, Graph API)](../external-identities/customize-invitation-api.md): End users given the [Guest Inviter role](../external-identities/external-collaboration-settings-configure.md) can invite guest users via PowerShell or Graph API.
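As a hedged sketch of the programmatic option, a guest invitation can be created with the Microsoft Graph invitation API; the email address and redirect URL below are placeholder values, not part of the original article:

```http
POST https://graph.microsoft.com/v1.0/invitations
{
  "invitedUserEmailAddress": "guest@fabrikam.com",
  "inviteRedirectUrl": "https://myapps.microsoft.com",
  "sendInvitationMessage": true
}
```

The caller must hold the Guest Inviter role (or higher) in the resource tenant for the request to succeed.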
### Redeem invitations
active-directory Resilience B2b Authentication https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/resilience-b2b-authentication.md
# Build resilience in external user authentication
-[Azure Active Directory B2B collaboration](../external-identities/what-is-b2b.md) (Azure AD B2B) is a feature of [External Identities](../external-identities/delegate-invitations.md) that enables collaboration with other organizations and individuals. It enables the secure onboarding of guest users into your Azure AD tenant without having to manage their credentials. External users bring their identity and credentials with them from an external identity provider (IdP), so they don't have to remember a new credential.
+[Azure Active Directory B2B collaboration](../external-identities/what-is-b2b.md) (Azure AD B2B) is a feature of [External Identities](../external-identities/external-collaboration-settings-configure.md) that enables collaboration with other organizations and individuals. It enables the secure onboarding of guest users into your Azure AD tenant without having to manage their credentials. External users bring their identity and credentials with them from an external identity provider (IdP), so they don't have to remember a new credential.
## Ways to authenticate external users
active-directory Users Default Permissions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/users-default-permissions.md
You can restrict default permissions for guest users in the following ways.
Permission | Setting explanation - | **Guest user access restrictions** | Setting this option to **Guest users have the same access as members** grants all member user permissions to guest users by default.<p>Setting this option to **Guest user access is restricted to properties and memberships of their own directory objects** restricts guest access to only their own user profile by default. Access to other users is no longer allowed, even when they're searching by user principal name, object ID, or display name. Access to group information, including group memberships, is also no longer allowed.<p>This setting does not prevent access to joined groups in some Microsoft 365 services like Microsoft Teams. To learn more, see [Microsoft Teams guest access](/MicrosoftTeams/guest-access).<p>Guest users can still be added to administrator roles regardless of this permission setting.
-**Guests can invite** | Setting this option to **Yes** allows guests to invite other guests. To learn more, see [Delegate invitations for B2B collaboration](../external-identities/delegate-invitations.md#configure-b2b-external-collaboration-settings).
-**Members can invite** | Setting this option to **Yes** allows non-admin members of your directory to invite guests. To learn more, see [Delegate invitations for B2B collaboration](../external-identities/delegate-invitations.md#configure-b2b-external-collaboration-settings).
-**Admins and users in the guest inviter role can invite** | Setting this option to **Yes** allows admins and users in the guest inviter role to invite guests. When you set this option to **Yes**, users in the guest inviter role will still be able to invite guests, regardless of the **Members can invite** setting. To learn more, see [Delegate invitations for B2B collaboration](../external-identities/delegate-invitations.md#assign-the-guest-inviter-role-to-a-user).
+**Guests can invite** | Setting this option to **Yes** allows guests to invite other guests. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
+**Members can invite** | Setting this option to **Yes** allows non-admin members of your directory to invite guests. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
+**Admins and users in the guest inviter role can invite** | Setting this option to **Yes** allows admins and users in the guest inviter role to invite guests. When you set this option to **Yes**, users in the guest inviter role will still be able to invite guests, regardless of the **Members can invite** setting. To learn more, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
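The invite settings in the table above correspond to the `allowInvitesFrom` property of the Microsoft Graph `authorizationPolicy` resource. As an illustrative sketch (the value shown is one of the documented enum names, chosen here as an example), restricting invitations to admins and guest inviters could look like this:

```http
PATCH https://graph.microsoft.com/v1.0/policies/authorizationPolicy
{
  "allowInvitesFrom": "adminsAndGuestInviters"
}
```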
## Object ownership
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/fundamentals/whats-new-archive.md
Some SAML applications require SPNameQualifier to be returned in the assertion s
**Product capability:** B2B/B2C
-Azure Government tenants using the B2B collaboration features can now invite users that have a Microsoft or Google account. To find out if your tenant can use these capabilities, follow the instructions at [How can I tell if B2B collaboration is available in my Azure US Government tenant?](../external-identities/current-limitations.md#how-can-i-tell-if-b2b-collaboration-is-available-in-my-azure-us-government-tenant)
+Azure Government tenants using the B2B collaboration features can now invite users that have a Microsoft or Google account. To find out if your tenant can use these capabilities, follow the instructions at [How can I tell if B2B collaboration is available in my Azure US Government tenant?](../external-identities/b2b-government-national-clouds.md#how-can-i-tell-if-b2b-collaboration-is-available-in-my-azure-us-government-tenant).
For more information about the apps, see [SaaS application integration with Azur
**Service category:** B2B **Product capability:** B2B/B2C
-The Azure AD B2B collaboration features are now available between some Azure Government tenants. To find out if your tenant is able to use these capabilities, follow the instructions at [How can I tell if B2B collaboration is available in my Azure US Government tenant?](../external-identities/current-limitations.md#how-can-i-tell-if-b2b-collaboration-is-available-in-my-azure-us-government-tenant).
+The Azure AD B2B collaboration features are now available between some Azure Government tenants. To find out if your tenant is able to use these capabilities, follow the instructions at [How can I tell if B2B collaboration is available in my Azure US Government tenant?](../external-identities/b2b-government-national-clouds.md#how-can-i-tell-if-b2b-collaboration-is-available-in-my-azure-us-government-tenant).
active-directory Entitlement Management Access Package Request Policy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-access-package-request-policy.md
Follow these steps if you want to allow users in your directory to be able to re
## For users not in your directory
 **Users not in your directory** refers to users who are in another Azure AD directory or domain. These users may not have yet been invited into your directory. Azure AD directories must be configured to allow invitations in **Collaboration restrictions**. For more information, see [Enable B2B external collaboration and manage who can invite guests](../external-identities/delegate-invitations.md).
+ **Users not in your directory** refers to users who are in another Azure AD directory or domain. These users may not have yet been invited into your directory. Azure AD directories must be configured to allow invitations in **Collaboration restrictions**. For more information, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
> [!NOTE] > A guest user account will be created for a user not yet in your directory whose request is approved or auto-approved. The guest will be invited, but will not receive an invite email. Instead, they will receive an email when their access package assignment is delivered. By default, later when that guest user no longer has any access package assignments, because their last assignment has expired or been cancelled, that guest user account will be blocked from sign in and subsequently deleted. If you want to have guest users remain in your directory indefinitely, even if they have no access package assignments, you can change the settings for your entitlement management configuration. For more information about the guest user object, see [Properties of an Azure Active Directory B2B collaboration user](../external-identities/user-properties.md).
active-directory Entitlement Management External Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/governance/entitlement-management-external-users.md
To ensure people outside of your organization can request access packages and ge
- If you are using the B2B allow list, you must make sure all the domains of all the organizations you want to partner with using entitlement management are added to the list. Alternatively, if you are using the B2B deny list, you must make sure no domain of any organization you want to partner with is present on that list. - If you create an entitlement management policy for **All users** (All connected organizations + any new external users), and a user doesn't belong to a connected organization in your directory, a connected organization will automatically be created for them when they request the package. Any B2B allow or deny list settings you have will take precedence. Therefore, be sure to add the domains you intend to include in this policy to your allow list if you are using one, and exclude them from your deny list if you are using a deny list. - If you want to create an entitlement management policy that includes **All users** (All connected organizations + any new external users), you must first enable email one-time passcode authentication for your directory. For more information, see [Email one-time passcode authentication](../external-identities/one-time-passcode.md).-- For more information about Azure AD B2B external collaboration settings, see [Enable B2B external collaboration and manage who can invite guests](../external-identities/delegate-invitations.md).
+- For more information about Azure AD B2B external collaboration settings, see [Configure external collaboration settings](../external-identities/external-collaboration-settings-configure.md).
![Azure AD external collaboration settings](./media/entitlement-management-external-users/collaboration-settings.png)
active-directory Cloud Governed Management For On Premises https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/cloud-governed-management-for-on-premises.md
Azure AD Premium also includes Microsoft Identity Manager, which can import reco
Business-to-business collaboration increasingly requires granting access to people outside your organization. [Azure AD B2B](/azure/active-directory/b2b/) collaboration enables organizations to securely share their applications and services with guest users and external partners while maintaining control over their own corporate data.
-Azure AD can [automatically create accounts in AD for guest users](../external-identities/hybrid-cloud-to-on-premises.md) as needed, enabling business guests to access on-premises AD-integrated applications without needing another password. Organizations can set up [multi-factor authentication (MFA) policies for guest user](../external-identities/conditional-access.md)s so MFA checks are done during application proxy authentication. Also, any [access reviews](../governance/manage-guest-access-with-access-reviews.md) that are done on cloud B2B users apply to on-premises users. For example, if the cloud user is deleted through lifecycle management policies, the on-premises user is also deleted.
+Azure AD can [automatically create accounts in AD for guest users](../external-identities/hybrid-cloud-to-on-premises.md) as needed, enabling business guests to access on-premises AD-integrated applications without needing another password. Organizations can set up [multi-factor authentication (MFA) policies for guest users](../external-identities/authentication-conditional-access.md) so MFA checks are done during application proxy authentication. Also, any [access reviews](../governance/manage-guest-access-with-access-reviews.md) that are done on cloud B2B users apply to on-premises users. For example, if the cloud user is deleted through lifecycle management policies, the on-premises user is also deleted.
**Credential management for Active Directory accounts** Azure AD's self-service password reset allows users who have forgotten their passwords to be reauthenticated and reset their passwords, with the changed passwords [written to on-premises Active Directory](../authentication/concept-sspr-writeback.md). The password reset process can also use the on-premises Active Directory password policies: When a user resets their password, it's checked to ensure it meets the on-premises Active Directory policy before committing it to that directory. The self-service password reset [deployment plan](../authentication/howto-sspr-deployment.md) outlines best practices to roll out self-service password reset to users via web and Windows-integrated experiences.
active-directory F5 Big Ip Headers Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-headers-easy-button.md
The SHA solution for this scenario is made up of:
SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
-![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-header/sp-initiated-flow.png)
+ ![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-header/sp-initiated-flow.png)
| Steps| Description | | - |-|
With the **Easy Button**, admins no longer go back and forth between Azure AD an
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform.
+Before a client or service can access Microsoft Graph, it must be [trusted by the Microsoft identity platform](/develop/quickstart-register-app).
-The Easy Button client must also be registered as a client in Azure AD, before it is allowed to establish a trust relationship between each SAML SP instance of a BIG-IP published applications, and the IdP.
+The Easy Button client must also be registered in Azure AD before it can establish a trust between each SAML SP instance of a BIG-IP published application and Azure AD as the SAML IdP.
1. Sign in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrator rights 2. From the left navigation pane, select the **Azure Active Directory** service
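Once the Easy Button client is registered, its credentials are exchanged for a Microsoft Graph access token via the standard OAuth2 client credentials grant. A minimal sketch of the request the BIG-IP would build against the Microsoft identity platform v2.0 token endpoint (the tenant, client ID, and secret shown are placeholder values):

```python
def client_credentials_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the token request for the OAuth2 client credentials grant.

    The v2.0 token endpoint issues a Microsoft Graph access token when
    the registered client presents its ID and secret with the .default
    scope (which resolves to the app's consented Graph permissions).
    """
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    data = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
    }
    return url, data

# Placeholder values -- substitute your own tenant and app registration details.
url, data = client_credentials_request("contoso.onmicrosoft.com", "app-id", "app-secret")
```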
You can now access the Easy Button functionality that provides quick configurati
The **Easy Button** template will display the sequence of steps required to publish your application.
-![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png)
-
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png)
### Configuration Properties
-These are general and service account properties. The **Configuration Properties tab** creates up a new application config and SSO object that will be managed through the BIG-IPΓÇÖs Guided Configuration UI. The configuration can then be reused for publishing more applications through the Easy Button template.
-
-Consider the **Azure Service Account Details** be the BIG-IP client application you registered in your Azure AD tenant earlier. This section allows the BIG-IP to programmatically register a SAML application directly in your tenant, along with the other properties you would normally configure manually, in the portal. Easy Button will do this for every BIG-IP APM service being published and enabled for SHA.
+These are general and service account properties. The **Configuration Properties** tab creates a new application config and SSO object that will be managed through the BIG-IP's Guided Configuration UI. This configuration can then be reused for publishing more applications through the Easy Button template.
-Some of these are global settings that can be reused for publishing more applications, further reducing deployment time and effort.
+Consider the **Azure Service Account Details** section to represent the BIG-IP client application you registered in your Azure AD tenant earlier. This section allows the BIG-IP to programmatically register a SAML application directly in your tenant, along with the other properties you would normally configure manually in the portal. Easy Button will do this for every BIG-IP APM service being published and enabled for SHA.
1. Enter a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations.
The Service Provider settings define the SAML SP properties for the APM instance
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
- ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-ldap/service-provider.png)
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-ldap/service-provider.png)
+
+The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that token content can't be intercepted, and that personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
-Next, under security settings, enter information for Azure AD to encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens canΓÇÖt be intercepted, and personal or corporate data be compromised.
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
-3. Check **Enable Encrypted Assertion (Optional)**. Enable to request Azure AD to encrypt SAML assertions
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned, close the browser tab to return to the main tab.
-4. Select **Assertion Decryption Private Key**. The private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)
-5. Select **Assertion Decryption Certificate**. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions. This can be the certificate you provisioned earlier
+6. Check **Enable Encrypted Assertion**
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions.
![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
This section defines all properties that you would normally use to manually conf
The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, but we'll use the generic SHA template by selecting **F5 BIG-IP APM Azure AD Integration > Add**.
-![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-ldap/azure-config-add-app.png)
+ ![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-ldap/azure-config-add-app.png)
#### Azure Configuration
The Easy Button wizard provides a set of pre-defined application templates for O
![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-ldap/azure-configuration-properties.png)
-3. Select **Signing key**. The IdP SAML signing certificate you provisioned earlier
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+4. Enter the certificate's password in **Signing Key Passphrase**
-4. Select the same certificate for **Singing Certificate**
-
-5. Enter the certificateΓÇÖs password in **Passphrase**
-
-6. Select **Signing Options**. It can be enabled optionally to ensure the BIG-IP only accepts tokens and claims that have been signed by your Azure AD tenant
+5. Enable **Signing Option** (optional). This ensures that the BIG-IP only accepts tokens and claims that are signed by Azure AD
![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
For this example, you can include one more attribute:
In the **Additional User Attributes** tab, you can enable session augmentation required by a variety of distributed systems such as Oracle, SAP, and other Java-based implementations requiring attributes stored in other directories. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
-![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-header/additional-user-attributes.png)
+ ![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-header/additional-user-attributes.png)
>[!NOTE] >This feature has no correlation to Azure AD but is another source of attributes.  #### Conditional Access Policy
-You can further protect the published application with policies returned from your Azure AD tenant. These policies are enforced after the first-factor authentication has been completed and uses signals from conditions like device platform, location, user or group membership, or application to determine access.
+Conditional Access (CA) policies are enforced after Azure AD pre-authentication to control access based on device, application, location, and risk signals.
-The **Available Policies** by default, lists all CA policies defined without user based actions.
+The **Available Policies** view, by default, lists all CA policies that do not include user-based actions.
-The **Selected Policies** list, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list.
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
To select a policy to be applied to the application being published:
-1. Select the desired policy in the **Available Policies** list
-
-2. Select the right arrow and move it to the **Selected Policies** list
+1. Select the desired policy in the **Available Policies** list
+2. Select the right arrow and move it to the **Selected Policies** list
-Selected policies should either have an **Include** or **Exclude option** checked. If both options are checked, the selected policy is not enforced. Excluding all policies may ease testing, you can go back and enable them later.
+Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced.
-![Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
+ ![Screenshot for CA policies](./media/f5-big-ip-kerberos-easy-button/conditional-access-policy.png)
> [!NOTE] > The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
Selected policies should either have an **Include** or **Exclude option** checke
A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for clients requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
-1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
2. Enter **Service Port** as *443* for HTTPS
The **Application Pool tab** details the services behind a BIG-IP that are repre
3. Update **Pool Servers**. Select an existing node or specify an IP and port for the server hosting the header-based application
- ![Screenshot for Application pool](./media/f5-big-ip-easy-button-ldap/application-pool.png)
+ ![Screenshot for Application pool](./media/f5-big-ip-oracle/application-pool.png)
Our backend application sits on HTTP port 80, but switch to 443 if yours uses HTTPS.
Enabling SSO allows users to access BIG-IP published services without having to
* **Header Name:** employeeid * **Header Value:** %{session.saml.last.attr.name.employeeid}
-![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-header/sso-http-headers.png)
+ ![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-header/sso-http-headers.png)
>[!NOTE] > APM session variables defined within curly brackets are case sensitive. If you enter EmployeeID when the Azure AD attribute name is defined as employeeid, it will cause an attribute mapping failure.
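To illustrate where those injected headers end up, here is a minimal sketch (not part of the F5 configuration) of a WSGI backend reading the employeeid header the APM injects. Note the asymmetry: the header name arriving over HTTP is matched case-insensitively by the server, while the APM session variable itself must match the Azure AD attribute name exactly:

```python
def read_sso_headers(environ):
    """Return the SSO headers injected by the BIG-IP APM.

    WSGI servers normalize incoming header names to upper case with an
    HTTP_ prefix, so this lookup is effectively case-insensitive --
    unlike the APM session variable, which must match the Azure AD
    attribute name exactly.
    """
    return {"employeeid": environ.get("HTTP_EMPLOYEEID", "")}

def app(environ, start_response):
    # Echo the injected header back, as a headers-based demo app would.
    employeeid = read_sso_headers(environ)["employeeid"]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"employeeid={employeeid}".encode()]
```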
If making a change to the app is a no go, then consider having the BIG-IP listen
## Summary
-Select **Deploy** to commit all settings and verify that the application has appeared in your tenant. This last step provides break down of all applied settings before theyΓÇÖre committed.
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of **Enterprise applications**.
Your application should now be published and accessible via SHA, either directly via its URL or through MicrosoftΓÇÖs application portals. ## Next steps
-From a browser, **connect** to the applicationΓÇÖs external URL or select the **applicationΓÇÖs icon** in the MyApps portal. After authenticating against Azure AD, youΓÇÖll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapplications.microsoft.com/). After authenticating against Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
This shows the output of the injected headers displayed by our headers-based application.
-![Screenshot for App views](./media/f5-big-ip-easy-button-ldap/app-view.png)
+ ![Screenshot for App views](./media/f5-big-ip-easy-button-ldap/app-view.png)
For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP. ## Advanced deployment
-There may be cases where the Guided Configuration templates lack the flexibility to achieve a particular set of requirements. Or even a need to fast track a proof of concept. For those scenarios, the BIG-IP offers the ability to disable the Guided ConfigurationΓÇÖs strict management mode. That way the bulk of your configurations can be deployed through the wizard-based templates, and any tweaks or additional settings applied manually.
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](./f5-big-ip-header-advanced.md).
-For those scenarios, go ahead and deploy using the Guided Configuration. Then navigate to **Access > Guided Configuration** and select the small padlock icon on the far right of the row for your applicationsΓÇÖ configs. At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+Alternatively, the BIG-IP gives you the option to disable the Guided Configuration's **strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configuration is automated through the wizard-based templates.
-For more information, see [Advanced Configuration for header-based SSO](./f5-big-ip-header-advanced.md).
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your application's configs.
+
+ ![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
> [!NOTE]
-> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the manual approach for production services.
+> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI; therefore, we recommend the advanced configuration method for production services.
## Troubleshooting
If you don't see a BIG-IP error page, then the issue is probably more related
2. The **View Variables** link in this location may also help root cause SSO issues, particularly if the BIG-IP APM fails to obtain the right attributes. For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
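The APM's LDAP query itself isn't shown here, but the shape of a user lookup filter is standard. A minimal sketch (a hypothetical helper, not the APM's internal code) of the kind of filter used to fetch attributes such as employeeid, with value escaping per RFC 4515:

```python
def ldap_user_filter(login_attr: str, username: str) -> str:
    """Build an LDAP search filter that matches one user record.

    Special characters in the username are escaped per RFC 4515 so a
    crafted value can't alter the filter's structure.
    """
    escaped = (
        username.replace("\\", r"\5c")  # escape backslash first
        .replace("*", r"\2a")
        .replace("(", r"\28")
        .replace(")", r"\29")
    )
    return f"(&(objectClass=user)({login_attr}={escaped}))"
```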
-## Additional resources
-
-* [The end of passwords, go password-less](https://www.microsoft.com/security/business/identity/passwordless)
-
-* [What is Conditional Access?](../conditional-access/overview.md)
-
-* [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
The SHA solution for this scenario is made up of the following:
SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
-![Scenario architecture](./media/f5-big-ip-kerberos-easy-button/scenario-architecture.png)
+ ![Scenario architecture](./media/f5-big-ip-kerberos-easy-button/scenario-architecture.png)
| Steps| Description| | -- |-|
The advanced approach provides a more flexible way of implementing SHA by manual
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform by being registered with Azure AD. A BIG-IP must also be registered as a client in Azure AD, before the Easy Button wizard is trusted to access Microsoft Graph.
+Before a client or service can access Microsoft Graph, it must be [trusted by the Microsoft identity platform](/develop/quickstart-register-app).
+
+The Easy Button client must also be registered in Azure AD before it can establish a trust between each SAML SP instance of a BIG-IP published application and Azure AD as the SAML IdP.
1. Sign in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrator rights
You can now access the Easy Button functionality that provides quick configurati
The **Easy Button** template will display the sequence of steps required to publish your application.
-![Configuration steps flow](./media/f5-big-ip-kerberos-easy-button/config-steps-flow.png)
+ ![Configuration steps flow](./media/f5-big-ip-kerberos-easy-button/config-steps-flow.png)
### Configuration Properties
-These are general and service account properties. Consider this section to be the client application you registered in your Azure AD tenant earlier. These settings allow a BIG-IP to programmatically register a SAML application directly in your tenant, along with the properties you would normally configure manually. Easy Button will do this for every BIG-IP APM service being enabled for SHA.
+These are general and service account properties. The **Configuration Properties** tab creates a new application config and SSO object that will be managed through the BIG-IP's Guided Configuration UI. This configuration can then be reused for publishing more applications through the Easy Button template.
-Some of these are global settings so can be re-used for publishing more applications, further reducing deployment time and effort.
+Consider the **Azure Service Account Details** section to represent the BIG-IP client application you registered in your Azure AD tenant earlier. This section allows the BIG-IP to programmatically register a SAML application directly in your tenant, along with the other properties you would normally configure manually in the portal. Easy Button will do this for every BIG-IP APM service being published and enabled for SHA.
1. Provide a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations
The Service Provider settings define the SAML SP properties for the APM instance
![Screenshot for Service Provider settings](./media/f5-big-ip-kerberos-easy-button/service-provider.png)
- Next, under security settings, enter information for Azure AD to encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens can't be intercepted, and personal or corporate data be compromised.
+The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that token content can't be intercepted, and that personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned, close the browser tab to return to the main tab.
+
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)
-3. Check **Enable Encrypted Assertion (Optional).** Enable to request Azure AD to encrypt SAML assertions
+6. Check **Enable Encrypted Assertion**
-4. Select **Assertion Decryption Private Key.** The private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
-5. Select **Assertion Decryption Certificate.** This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions. This can be the certificate you provisioned earlier
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions.
![Screenshot for Service Provider security settings](./media/f5-big-ip-kerberos-easy-button/service-provider-security-settings.png)
This section defines all properties that you would normally use to manually conf
The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-Business Suite, Oracle JD Edwards, and SAP ERP, but you can use the generic SHA template by selecting **F5 BIG-IP APM Azure AD Integration > Add**.
-![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-kerberos-easy-button/azure-config-add-app.png)
+ ![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-kerberos-easy-button/azure-config-add-app.png)
#### Azure Configuration
The Easy Button wizard provides a set of pre-defined application templates for O
![Screenshot for Azure configuration add display info](./media/f5-big-ip-kerberos-easy-button/azure-config-display-name.png)
-3. Select **Signing key.** The IdP SAML signing certificate you provisioned earlier
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+4. Enter the certificate's password in **Signing Key Passphrase**
-4. Select the same certificate for **Singing Certificate**
+5. Enable **Signing Option** (optional). This ensures that the BIG-IP only accepts tokens and claims that are signed by Azure AD
-5. Enter the certificateΓÇÖs password in **Passphrase**
-
-6. Select **Signing Options**. It can be enabled optionally to ensure the BIG-IP only accepts tokens and claims that have been signed by your Azure AD tenant
-
- ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-kerberos-easy-button/azure-configuration-sign-certificates.png)
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. **Add** a user or group that you can use later for testing, otherwise all access will be denied
When a user successfully authenticates to Azure AD, it issues a SAML token with
As our AD infrastructure is based on a .com domain suffix used both internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](f5-big-ip-kerberos-advanced.md) for cases where you have multiple domains or users log in using an alternate suffix.
-![Screenshot for user attributes and claims](./media/f5-big-ip-kerberos-easy-button/user-attributes-claims.png)
+ ![Screenshot for user attributes and claims](./media/f5-big-ip-kerberos-easy-button/user-attributes-claims.png)
#### Additional User Attributes The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories, for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
-![Screenshot for additional user attributes](./media/f5-big-ip-kerberos-easy-button/additional-user-attributes.png)
+ ![Screenshot for additional user attributes](./media/f5-big-ip-kerberos-easy-button/additional-user-attributes.png)
>[!NOTE] >This feature has no correlation to Azure AD but is another source of attributes. #### Conditional Access Policy
-You can further protect the published application with policies returned from your Azure AD tenant. These policies are enforced after the first-factor authentication has been completed and uses signals from conditions like device platform, location, user or group membership, or application to determine access.
+Conditional Access (CA) policies are enforced after Azure AD pre-authentication to control access based on device, application, location, and risk signals.
-The **Available Policies** by default, lists all CA policies defined without user based actions.
+The **Available Policies** view, by default, lists all CA policies that do not include user-based actions.
-The **Selected Policies**, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list.
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
To select a policy to be applied to the application being published:
-1. Select the desired policy in the **Available Policies** list
+1. Select the desired policy in the **Available Policies** list
+2. Select the right arrow and move it to the **Selected Policies** list
-2. Select the right arrow and move it to the **Selected Policies** list
+Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced.
-Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced. Excluding all policies may ease testing, you can go back and enable them later.
-
- ![Screenshot for CA policies](./media/f5-big-ip-kerberos-easy-button/conditional-access-policy.png)
+ ![Screenshot for CA policies](./media/f5-big-ip-kerberos-easy-button/conditional-access-policy.png)
>[!NOTE] >The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
Selected policies should either have an **Include** or **Exclude** option checke
A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
-1. Enter **Destination Address.** This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
2. Enter **Service Port** as *443* for HTTPS
A virtual server is a BIG-IP data plane object represented by a virtual IP addre
4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
- ![Screenshot for Virtual server](./media/f5-big-ip-kerberos-easy-button/virtual-server.png)
+ ![Screenshot for Virtual server](./media/f5-big-ip-kerberos-easy-button/virtual-server.png)
### Pool Properties
The **Application Pool tab** details the services behind a BIG-IP, represented a
3. Update **Pool Servers.** Select an existing server node or specify an IP and port for the backend node hosting the header-based application
- ![Screenshot for Application pool](./media/f5-big-ip-kerberos-easy-button/application-pool.png)
+ ![Screenshot for Application pool](./media/f5-big-ip-oracle/application-pool.png)
Our backend application runs on HTTP port 80. You can switch this to 443 if your application runs on HTTPS.
Enable **Kerberos** and **Show Advanced Setting** to enter the following:
![Screenshot for SSO method configuration](./media/f5-big-ip-kerberos-easy-button/sso-method-config.png) ### Session Management The BIG-IP's session management settings are used to define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and error pages. For more details, consult [F5 documentation](https://support.f5.com/csp/article/K18390492).
For more information, see [Kerberos Constrained Delegation across domains](/prev
From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
-![Screenshot for App views](./media/f5-big-ip-kerberos-easy-button/app-view.png)
+ ![Screenshot for App views](./media/f5-big-ip-kerberos-easy-button/app-view.png)
For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
SHA also supports [Azure AD B2B guest access](../external-identities/hybrid-clou
## Advanced deployment
-There may be cases where the Guided Configuration templates lack the flexibility to achieve a particular set of requirements. Or even a need to fast track a proof of concept. For those scenarios, the BIG-IP offers the ability to disable the Guided ConfigurationΓÇÖs strict management mode. That way the bulk of your configurations can be deployed through the wizard-based templates, and any tweaks or additional settings applied manually.
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for Kerberos-based SSO](./f5-big-ip-kerberos-advanced.md).
-For those scenarios, go ahead and deploy using the Guided Configuration. Then navigate to **Access > Guided Configuration** and select the small padlock icon on the far right of the row for your applicationsΓÇÖ configs. At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+Alternatively, the BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
-For more information, see [Advanced Configuration for kerberos-based SSO](./f5-big-ip-kerberos-advanced.md).
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
+
+ ![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
->[!NOTE]
->Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the manual approach for production services.
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+> [!NOTE]
+> Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI. We therefore recommend the advanced configuration method for production services.
## Troubleshooting
Consider the following points while troubleshooting any issue.
* Ensure there are no duplicate SPNs in your environment by executing the following query at the command line: setspn -q HTTP/my_target_SPN
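The uniqueness rule that `setspn -q` checks can be modeled with a short sketch (illustrative Python only; the account names and SPN values are hypothetical, not output of the actual tool):

```python
# Illustrative model of the duplicate-SPN check: a service principal name
# must be registered against exactly one account, otherwise Kerberos
# cannot reliably decide which account's key to use when issuing tickets.
from collections import Counter

def duplicate_spns(registrations):
    """registrations: iterable of (spn, account) pairs."""
    counts = Counter(spn for spn, _account in registrations)
    return sorted(spn for spn, n in counts.items() if n > 1)

regs = [
    ("HTTP/app.contoso.com", "CONTOSO\\svc-apm"),
    ("HTTP/app.contoso.com", "CONTOSO\\svc-old"),   # duplicate registration
    ("HTTP/intranet.contoso.com", "CONTOSO\\svc-web"),
]
dupes = duplicate_spns(regs)   # flags HTTP/app.contoso.com
```

Any SPN the query flags should be removed from all but one account before testing KCD again.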
->[!NOTE]
->You can refer to our [App Proxy guidance to validate an IIS application ](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md)is configured appropriately for KCD. F5ΓÇÖs article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
+You can refer to our [App Proxy guidance](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md) to validate an IIS application is configured appropriately for KCD. F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
-### Authentication and SSO issues
+### Log analysis
BIG-IP logs are a great source of information for isolating all sorts of authentication and SSO issues. When troubleshooting, you should increase the log verbosity level.
If you don't see a BIG-IP error page, then the issue is probably more related
2. Select the link for your active session. The **View Variables** link in this location may also help determine root cause KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers.
-F5 provides a great BIG-IP specific paper to help diagnose KCD related issues, see the deployment guide on [Configuring Kerberos Constrained Delegation](https://www.f5.com/pdf/deployment-guides/kerberos-constrained-delegation-dg.pdf).
-
-## Additional resources
-
-* [BIG-IP Advanced configuration](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/2.html)
-
-* [The end of passwords, go password-less](https://www.microsoft.com/security/business/identity/passwordless)
-
-* [What is Conditional Access?](../conditional-access/overview.md)
-
-* [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
+For more information, see [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html).
active-directory F5 Big Ip Ldap Header Easybutton https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-ldap-header-easybutton.md
The secure hybrid access solution for this scenario is made up of:
SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
-![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-ldap/sp-initiated-flow.png)
+ ![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-ldap/sp-initiated-flow.png)
| Steps| Description |
| -- |-|
Prior BIG-IP experience isn't necessary, but you'll need:
- An account with Azure AD application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
-- A [SSL certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default certificates while testing
+- An [SSL certificate](./f5-bigip-deployment-guide.md#ssl-profile) for publishing services over HTTPS, or use default certificates while testing
- An existing header-based application or [setup a simple IIS header app](/previous-versions/iis/6.0-sdk/ms525396(v=vs.90)) for testing
-- A user directory that supports LDAP, including Windows Active Directory Lightweight Directory Services (AD LDS), OpenLDAP etc.
+- A user directory that supports LDAP, such as Windows Active Directory Lightweight Directory Services (AD LDS), OpenLDAP etc.
## BIG-IP configuration methods
For scenarios where the Guided Configuration lacks the flexibility to achieve a
## Register Easy Button
-Before a client or service can access Microsoft Graph, it must be trusted by the Microsoft identity platform.
+Before a client or service can access Microsoft Graph, it must be [trusted by the Microsoft identity platform](/develop/quickstart-register-app).
-The Easy Button client must also be registered as a client in Azure AD, before it is allowed to establish a trust relationship between each SAML SP instance of a BIG-IP published applications, and the IdP.
+The Easy Button client must also be registered in Azure AD before it is allowed to establish a trust between each SAML SP instance of a BIG-IP published application and Azure AD as the SAML IdP.
1. Sign in to the [Azure AD portal](https://portal.azure.com) using an account with Application Administrative rights
Next, step through the Easy Button configurations to federate and publish the EB
2. Navigate to **System > Certificate Management > Traffic Certificate Management SSL Certificate List > Import**
3. Select **PKCS 12 (IIS)** and import your certificate along with its private key
- Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
+ Once provisioned, the certificate can be used for every application published through Easy Button. You can also choose to upload a separate certificate for individual applications.
- ![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-easy-button-ldap/configure-easy-button.png)
+
+ ![Screenshot for Configure Easy Button- Import SSL certificates and keys](./media/f5-big-ip-easy-button-ldap/configure-easy-button.png)
4. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**

You can now access the Easy Button functionality that provides quick configuration steps to set up the APM as a SAML Service Provider (SP) and Azure AD as an Identity Provider (IdP) for your application.
- ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
5. Review the list of configuration steps and select **Next**
- ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
## Configuration steps

The **Easy Button** template will display the sequence of steps required to publish your application.
-![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png)
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png)
### Configuration Properties
-These are general and service account properties. The **Configuration Properties tab** creates up a new application config and SSO object that will be managed through the BIG-IPΓÇÖs Guided Configuration UI. The configuration can then be reused for publishing more applications through the Easy Button template.
-
-Consider the **Azure Service Account Details** be the BIG-IP client application you registered in your Azure AD tenant earlier. This section allows the BIG-IP to programmatically register a SAML application directly in your tenant, along with the other properties you would normally configure manually, in the portal. Easy Button will do this for every BIG-IP APM service being published and enabled for secure hybrid access.
+These are general and service account properties. The **Configuration Properties** tab creates a new application config and SSO object that will be managed through the BIG-IP's Guided Configuration UI. This configuration can then be reused for publishing more applications through the Easy Button template.
-Some of these are global settings that can be reused for publishing more applications, further reducing deployment time and effort.
+Consider the **Azure Service Account Details** to be the BIG-IP client application you registered in your Azure AD tenant earlier. This section allows the BIG-IP to programmatically register a SAML application directly in your tenant, along with the other properties you would normally configure manually in the portal. Easy Button will do this for every BIG-IP APM service being published and enabled for SHA.
1. Enter a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations.
Some of these are global settings that can be reused for publishing more applica
5. Confirm the BIG-IP can successfully connect to your tenant, and then select **Next**
-![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-ldap/config-properties.png)
+ ![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-ldap/config-properties.png)
### Service Provider
The Service Provider settings define the SAML SP properties for the APM instance
2. Enter **Entity ID**. This is the identifier Azure AD will use to identify the SAML SP requesting a token
- ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-ldap/service-provider.png)
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-ldap/service-provider.png)
+
+ The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content of tokens can't be intercepted, or personal and corporate data compromised.
++
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
++
+6. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned, close the browser tab to return to the main tab.
++
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-oracle/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**.
+
-Next, under security settings, enter information for Azure AD to encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the content tokens can't be intercepted, and personal or corporate data be compromised.
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions.
-3. Check **Enable Encrypted Assertion (Optional)**. Enable to request Azure AD to encrypt SAML assertions
-4. Select **Assertion Decryption Private Key**. The private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
+9. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions.
-5. Select **Assertion Decryption Certificate**. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions. This can be the certificate you provisioned earlier
- ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
### Azure Active Directory
This section defines all properties that you would normally use to manually conf
The Easy Button wizard provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, SAP ERP, but we'll use the generic secure hybrid access template by selecting **F5 BIG-IP APM Azure AD Integration > Add**.
-![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-ldap/azure-config-add-app.png)
+ ![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-ldap/azure-config-add-app.png)
#### Azure Configuration
The Easy Button wizard provides a set of pre-defined application templates for O
2. Do not enter anything in the **Sign On URL (optional)** to enable IdP initiated sign-on
- ![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-ldap/azure-configuration-properties.png)
+ ![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-ldap/azure-configuration-properties.png)
-3. Select **Signing key**. The IdP SAML signing certificate you provisioned earlier
-
-4. Select the same certificate for **Singing Certificate**
-
-5. Enter the certificateΓÇÖs password in **Passphrase**
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+5. Enter the certificate's password in **Signing Key Passphrase**
-6. Select **Signing Options**. It can be enabled optionally to ensure the BIG-IP only accepts tokens and claims that have been signed by your Azure AD tenant
+6. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
- ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
7. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. **Add** a user or group that you can use later for testing, otherwise all access will be denied
- ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
#### User Attributes & Claims
For this example, you can include one more attribute:
2. Enter **Source Attribute** as *user.employeeid*
-![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png)
+ ![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-ldap/user-attributes-claims.png)
#### Additional User Attributes
In the **Additional User Attributes tab**, you can enable session augmentation r
6. Enter the **Base Search DN** to the exact distinguished name of the location containing the account the APM will authenticate with for LDAP service queries
- ![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-ldap/additional-user-attributes.png)
+ ![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-ldap/additional-user-attributes.png)
7. Set the **Base Search DN** to the exact distinguished name of the location containing the user account objects that the APM will query via LDAP
8. Set both membership options to **None** and add the name of the user object attribute that must be returned from the LDAP directory. For our scenario, this is **eventroles**
- ![Screenshot for LDAP query properties](./media/f5-big-ip-easy-button-ldap/user-properties-ldap.png)
+ ![Screenshot for LDAP query properties](./media/f5-big-ip-easy-button-ldap/user-properties-ldap.png)
#### Conditional Access Policy
-You can further protect the published application with policies returned from your Azure AD tenant. These policies are enforced after the first-factor authentication has been completed and uses signals from conditions like device platform, location, user or group membership, or application to determine access.
+CA policies are enforced after Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
-The **Available Policies** by default, lists all CA policies defined without user based actions.
+The **Available Policies** view, by default, will list all CA policies that do not include user-based actions.
-The **Selected Policies** list, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list.
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list as they are enforced at a tenant level.
To select a policy to be applied to the application being published:
-1. Select the desired policy in the **Available Policies** list
+1. Select the desired policy in the **Available Policies** list.
+2. Select the right arrow and move it to the **Selected Policies** list.
-2. Select the right arrow and move it to the **Selected Policies** list
-Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced. Excluding all policies may ease testing, you can go back and enable them later.
+Selected policies should either have an **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced.
- ![Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
+ ![Screenshot for CA policies](./media/f5-big-ip-kerberos-easy-button/conditional-access-policy.png)
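The Include/Exclude behavior reduces to a simple rule, sketched here as hypothetical Python (a model of the wizard's behavior, not BIG-IP code): a selected policy is enforced only when exactly one of the two options is checked.

```python
# Hypothetical model of the Easy Button Include/Exclude checkboxes:
# checking both options cancels the policy out, so enforcement is an
# exclusive-or of the two flags.
def is_enforced(include_checked: bool, exclude_checked: bool) -> bool:
    return include_checked != exclude_checked
```

Under this model, a policy with both boxes checked is silently skipped, which matches the behavior described above.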
>[!NOTE]
>The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
Selected policies should either have an **Include** or **Exclude** option checke
A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
-1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP.
2. Enter **Service Port** as *443* for HTTPS
A virtual server is a BIG-IP data plane object represented by a virtual IP addre
4. Select **Client SSL Profile** to enable the virtual server for HTTPS so that client connections are encrypted over TLS. Select the client SSL profile you created as part of the prerequisites or leave the default if testing
- ![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
+ ![Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
### Pool Properties
The **Application Pool tab** details the services behind a BIG-IP that are repre
3. Update **Pool Servers**. Select an existing node or specify an IP and port for the server hosting the header-based application
- ![Screenshot for Application pool](./media/f5-big-ip-easy-button-ldap/application-pool.png)
+ ![Screenshot for Application pool](./media/f5-big-ip-oracle/application-pool.png)
Our backend application runs on HTTP port 80. Switch this to 443 if your application runs on HTTPS.
Enabling SSO allows users to access BIG-IP published services without having to
* **Header Name:** eventroles
* **Header Value:** %{session.ldap.last.attr.eventroles}
-![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-ldap/sso-headers.png)
+ ![Screenshot for SSO and HTTP headers](./media/f5-big-ip-easy-button-ldap/sso-headers.png)
>[!NOTE]
>APM session variables defined within curly brackets are CASE sensitive. If you enter EventRoles when the Azure AD attribute name is defined as eventroles, it will cause an attribute mapping failure.
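To see why case matters, here is a rough sketch (hypothetical Python, not APM internals) of a case-sensitive `%{...}` substitution: a wrongly cased variable name resolves to nothing, which surfaces as an attribute mapping failure and an empty header at the backend.

```python
# Rough sketch of case-sensitive %{...} session variable substitution.
# A name that differs only in case from the stored key resolves to an
# empty value, so the injected header arrives blank at the application.
import re

def resolve(template, session):
    return re.sub(r"%\{([^}]+)\}",
                  lambda m: session.get(m.group(1), ""),
                  template)

session = {"session.ldap.last.attr.eventroles": "organizer;attendee"}
ok = resolve("%{session.ldap.last.attr.eventroles}", session)
bad = resolve("%{session.ldap.last.attr.EventRoles}", session)  # wrong case
```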
If making a change to the app is a no go, then consider having the BIG-IP listen
## Summary
-Select **Deploy** to commit all settings and verify that the application has appeared in your tenant. This last step provides break down of all applied settings before theyΓÇÖre committed.
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of **Enterprise applications**.
Your application should now be published and accessible via SHA, either directly via its URL or through Microsoft's application portals. For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.

## Next steps
-From a browser, **connect** to the applicationΓÇÖs external URL or select the **applicationΓÇÖs icon** in the MyApps portal. After authenticating against Azure AD, youΓÇÖll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapplications.microsoft.com/). After authenticating against Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
This shows the output of the injected headers displayed by our headers-based application.
-![Screenshot for App views](./media/f5-big-ip-easy-button-ldap/app-view.png)
+ ![Screenshot for App views](./media/f5-big-ip-easy-button-ldap/app-view.png)
For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.

## Advanced deployment
-There may be cases where the Guided Configuration templates lack the flexibility to achieve a particular set of requirements. Or even a need to fast track a proof of concept. For those scenarios, the BIG-IP offers the ability to disable the Guided ConfigurationΓÇÖs strict management mode. That way the bulk of your configurations can be deployed through the wizard-based templates, and any tweaks or additional settings applied manually.
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements.
-For those scenarios, go ahead and deploy using the Guided Configuration. Then navigate to **Access > Guided Configuration** and select the small padlock icon on the far right of the row for your applicationsΓÇÖ configs. At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+The BIG-IP gives you the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
+
+ ![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+> [!NOTE]
+> Re-enabling strict mode and deploying a configuration overwrites any settings performed outside of the Guided Configuration UI. We recommend the advanced configuration method for production services.
->[!NOTE]
->Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the manual approach for production services.
## Troubleshooting
If you don't see a BIG-IP error page, then the issue is probably more related
```
ldapsearch -xLLL -H 'ldap://192.168.0.58' -b "CN=partners,dc=contoso,dc=lds" -s sub -D "CN=f5-apm,CN=partners,DC=contoso,DC=lds" -w 'P@55w0rd!' "(cn=testuser)"
```

For more information, visit this F5 knowledge article [Configuring LDAP remote authentication for Active Directory](https://support.f5.com/csp/article/K11072). There's also a great BIG-IP reference table to help diagnose LDAP-related issues in this F5 knowledge article on [LDAP Query](https://techdocs.f5.com/kb/en-us/products/big-ip_apm/manuals/product/apm-authentication-single-sign-on-11-5-0/5.html).
-
-## Additional resources
-
-* [The end of passwords, go password-less](https://www.microsoft.com/security/business/identity/passwordless)
-
-* [What is Conditional Access?](../conditional-access/overview.md)
-
-* [Microsoft Zero Trust framework to enable remote work](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/)
active-directory F5 Big Ip Oracle Enterprise Business Suite Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/f5-big-ip-oracle-enterprise-business-suite-easy-button.md
The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorizati
* **Header Name:** USER_NAME
* **Header Value:** %{session.sso.token.last.username}
-
* **Header Operation:** replace
* **Header Name:** USER_ORCLGUID
* **Header Value:** %{session.ldap.last.attr.orclguid}
- ![ Screenshot for SSO and HTTP headers](./media/f5-big-ip-oracle/sso-and-http-headers.png)
+ ![Screenshot for SSO and HTTP headers](./media/f5-big-ip-oracle/sso-and-http-headers.png)
>[!NOTE]
>APM session variables defined within curly brackets are CASE sensitive. If you enter OrclGUID when the Azure AD attribute name is defined as orclguid, it will cause an attribute mapping failure.
During deployment, the SAML federation metadata for the published application is
## Summary
-Select **Deploy** to commit all settings and verify that the application has appeared in your tenant. This last step provides breakdown of all applied settings before theyΓÇÖre committed. Your application should now be published and accessible via SHA, either directly via its URL or through MicrosoftΓÇÖs application portals.
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of **Enterprise applications**.
## Next steps
For increased security, organizations using this pattern could also consider blo
## Advanced deployment
-There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see ![Advanced Configuration for headers-based SSO](./f5-big-ip-header-advanced.md). Alternatively, the BIG-IP gives the option to disable **Guided ConfigurationΓÇÖs strict management mode**. This allows you to manually tweak your configurations, even though bulk of your configurations are automated through the wizard-based templates.
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for headers-based SSO](./f5-big-ip-header-advanced.md). Alternatively, the BIG-IP gives the option to disable **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your applications' configs.
active-directory Concept Sign In Diagnostics Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/concept-sign-in-diagnostics-scenarios.md
For more information, see [How to block legacy authentication to Azure AD with C
This diagnostic scenario detects a blocked or interrupted sign-in due to the user being from another organization (a B2B sign-in) where a Conditional Access policy requires that the client's device is joined to the resource tenant.
-For more information, see [Conditional Access for B2B collaboration users](../external-identities/conditional-access.md).
+For more information, see [Conditional Access for B2B collaboration users](../external-identities/authentication-conditional-access.md).
### Blocked by risk policy
active-directory Workbook Cross Tenant Access Activity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/reports-monitoring/workbook-cross-tenant-access-activity.md
+---
+title: Cross-tenant access activity workbook in Azure AD | Microsoft Docs
+description: Learn how to use the cross-tenant access activity workbook.
+documentationcenter: ''
+editor: ''
+ms.date: 02/04/2022
+---
+# Cross-tenant access activity workbook
+
+As an IT administrator, you want insights into how your users are collaborating with other organizations. The cross-tenant access activity workbook helps you understand which external users are accessing resources in your organization, and which organizations' resources your users are accessing. This workbook combines all your organization's inbound and outbound collaboration into a single view.
+
+This article provides you with an overview of this workbook.
++
+## Description
+
+![Image showing this workbook is found under the Usage category](./media/workbook-cross-tenant-access-activity/workbook-category.png)
+
+Tenant administrators who are making changes to policies governing cross-tenant access can use this workbook to visualize and review existing access activity patterns before making policy changes. For example, you can identify the apps your users are accessing in external organizations so that you don't inadvertently block critical business processes. Understanding how external users access resources in your tenant (inbound access) and how users in your tenant access resources in external tenants (outbound access) will help ensure you have the right cross-tenant policies in place.
+
+For more information, see the [Azure AD External Identities documentation](../external-identities/index.yml).
+
+## Sections
+
+This workbook has four sections:
+
+- All inbound and outbound activity by tenant ID
+
+- Sign-in status summary by tenant ID for inbound and outbound collaboration
+
+- Applications accessed for inbound and outbound collaboration by tenant ID
+
+- Individual users for inbound and outbound collaboration by tenant ID
+
+![Screenshot showing list of external tenants with sign-in data](./media/workbook-cross-tenant-access-activity/external-tenants-list.png)
+
+## Filters
+
+This workbook supports multiple filters:
+
+- Time range (up to 90 days)
+
+- External tenant ID
+
+- User principal name
+
+- Application
+
+- Status of the sign-in (success or failure)
+
+![Screenshot showing workbook filters](./media/workbook-cross-tenant-access-activity/workbook-filters.png)
+
+## Best practices
+
+Use this workbook to:
+
+- Get the information you need to manage your cross-tenant access settings effectively, without breaking legitimate collaborations
+
+- Identify all inbound sign-ins from external Azure AD organizations
+
+- Identify all outbound sign-ins by your users to external Azure AD organizations
+
+## Next steps
+
+- [How to use Azure AD workbooks](howto-use-azure-monitor-workbooks.md)
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/troubleshoot-domain-ssl-certificates.md
This problem occurs for one of the following reasons:
- You're not the subscription owner, so you don't have permission to purchase a domain.

  **Solution**: [Assign the Owner role](../role-based-access-control/role-assignments-portal.md) to your account. Or contact the subscription administrator to get permission to purchase a domain.
- You have reached the limit for purchasing domains on your subscription. The current limit is 20.
- **Solution**: To request an increase to the limit, contact [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade).
- Your Azure subscription type does not support the purchase of an App Service domain. **Solution**: Upgrade your Azure subscription to another subscription type, such as a Pay-As-You-Go subscription.
automation Automation Dsc Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/automation-dsc-getting-started.md
To complete the examples in this article, the following are required:
You create a simple [DSC configuration](/powershell/dsc/configurations/configurations) that ensures either the presence or absence of the **Web-Server** Windows Feature (IIS), depending on how you assign nodes.
+Configuration names in Azure Automation are limited to 100 characters.
+
1. Start [VSCode](https://code.visualstudio.com/docs) (or any text editor).
1. Type the following text:
automation Dsc Linux Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/dsc-linux-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Create a configuration
-Review the code below and note the presence of two node [configurations](/powershell/dsc/configurations/configurations): `IsPresent` and `IsNotPresent`. This configuration calls one resource in each node block: the [nxPackage resource](/powershell/dsc/reference/resources/linux/lnxpackageresource). This resource manages the presence of the **apache2** package. Then, in a text editor, copy the following code to a local file and name it `LinuxConfig.ps1`:
+Review the code below and note the presence of two node [configurations](/powershell/dsc/configurations/configurations): `IsPresent` and `IsNotPresent`. This configuration calls one resource in each node block: the [nxPackage resource](/powershell/dsc/reference/resources/linux/lnxpackageresource). This resource manages the presence of the **apache2** package. Configuration names in Azure Automation are limited to 100 characters.
+
+Then, in a text editor, copy the following code to a local file and name it `LinuxConfig.ps1`:
```powershell
Configuration LinuxConfig
The following steps help you delete the resources created for this tutorial that
In this tutorial, you applied an Azure Automation State Configuration with PowerShell to an Azure Linux VM to check whether it complied with a desired state. For a more thorough explanation of configuration composition, see: > [!div class="nextstepaction"]
-> [Compose DSC configurations](./compose-configurationwithcompositeresources.md)
+> [Compose DSC configurations](./compose-configurationwithcompositeresources.md)
automation Tutorial Configure Servers Desired State https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/tutorial-configure-servers-desired-state.md
configuration TestConfig {
```

> [!NOTE]
+> Configuration names in Azure Automation are limited to 100 characters.
+>
> In more advanced scenarios where you require multiple modules to be imported that provide DSC Resources,
> make sure each module has a unique `Import-DscResource` line in your configuration.
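The 100-character limit called out in the note can be checked before you import a configuration. A minimal sketch of such a pre-flight check in Python (a hypothetical helper, not part of the Azure Automation tooling, and checking length only — the service may enforce additional naming rules):

```python
def is_valid_configuration_name(name: str) -> bool:
    """Check a DSC configuration name against the documented
    100-character limit (length only; the service may enforce
    additional naming rules)."""
    return 0 < len(name) <= 100

print(is_valid_configuration_name("TestConfig"))  # True
print(is_valid_configuration_name("A" * 101))     # False
```

Running a check like this locally avoids a failed import after you've already authored the configuration.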
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automation/whats-new.md
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
+## December 2021
+
+### New scripts added for Azure VM management based on Azure Monitor Alert
+
+**Type:** New feature
+
+New scripts are added to the Azure Automation [GitHub repository](https://github.com/azureautomation) to address one of Azure Automation's key scenarios: VM management based on Azure Monitor alerts. For more information, see [Trigger runbook from Azure alert](/azure/automation/automation-create-alert-triggered-runbook).
+
+- Stop-Azure-VM-On-Alert
+- Restart-Azure-VM-On-Alert
+- Delete-Azure-VM-On-Alert
+- ScaleDown-Azure-VM-On-Alert
+- ScaleUp-Azure-VM-On-Alert
## November 2021
availability-zones Business Continuity Management Program https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/business-continuity-management-program.md
For more information on certifications, see the [Microsoft Trust Center](https:/
## Next steps

- [Regions that support availability zones in Azure](az-overview.md)
+- [Azure Resiliency whitepaper](https://azure.microsoft.com/resources/resilience-in-azure-whitepaper/)
- [Quickstart templates](https://aka.ms/azqs)
availability-zones Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/availability-zones/overview.md
Building resilient systems on Azure is a shared responsibility. Microsoft is res
- [Regions and availability zones in Azure](az-overview.md)
- [Azure services that support availability zones](az-region.md)
-- [Resilience in Azure whitepaper](https://azure.microsoft.com/mediahandler/files/resourcefiles/resilience-in-azure-whitepaper/Resilience%20in%20Azure.pdf)
+- [Azure Resiliency whitepaper](https://azure.microsoft.com/resources/resilience-in-azure-whitepaper/)
- [Azure Well-Architected Framework](https://www.aka.ms/WellArchitected/Framework)
- [Azure architecture guidance](/azure/architecture/high-availability/building-solutions-for-high-availability)
azure-app-configuration Enable Dynamic Configuration Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md
description: In this tutorial, you learn how to dynamically update the configuration data for .NET Core apps documentationcenter: ''-+ editor: ''
ms.devlang: csharp
Last updated 07/01/2019-+ #Customer intent: I want to dynamically update my app to use the latest configuration data in App Configuration.
azure-app-configuration Quickstart Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-app-configuration/quickstart-resource-manager.md
Title: Create an Azure App Configuration store by using Azure Resource Manager template (ARM template) description: Learn how to create an Azure App Configuration store by using Azure Resource Manager template (ARM template).--++ Last updated 06/09/2021
azure-arc Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/servers/private-link-security.md
Once your Azure Arc Private Link Scope (preview) is created, you need to connect
1. On the **Configuration** page,
- a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Monitor resources.
+ a. Choose the **virtual network** and **subnet** that you want to connect to your Azure Arc-enabled server.
b. Choose **Yes** for **Integrate with private DNS zone**, and let it automatically create a new Private DNS Zone. The actual DNS zones may be different from what is shown in the screenshot below.
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-cache-for-redis/cache-how-to-scale.md
Title: Scale an Azure Cache for Redis instance description: Learn how to scale your Azure Cache for Redis instances using the Azure portal, and tools such as Azure PowerShell, and Azure CLI -
You can monitor the following metrics to help determine if you need to scale.
- Each cache size has a limit to the number of client connections it can support. If your client connections are close to the limit for the cache size, consider scaling up to a larger tier, or scaling out to enable clustering and increase shard count. Your choice depends on the Redis server load and memory usage.
  - For more information on connection limits by cache size, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
- Network Bandwidth
- - If the Redis server exceeds the available bandwidth, clients requests could time out because the server can't push data to the client fast enough. Check "Cache Read" and "Cache Write" metrics to see how much server-side bandwidth is being used. If you Redis server is exceeding available network bandwidth, you should consider scaling up to a larger cache size with higher network bandwidth.
+ - If the Redis server exceeds the available bandwidth, client requests could time out because the server can't push data to the client fast enough. Check "Cache Read" and "Cache Write" metrics to see how much server-side bandwidth is being used. If your Redis server is exceeding available network bandwidth, you should consider scaling up to a larger cache size with higher network bandwidth.
  - For more information on network available bandwidth by cache size, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).

If you determine your cache is no longer meeting your application's requirements, you can scale to an appropriate cache pricing tier for your application. You can choose a larger or smaller cache to match your needs.
No, your cache name and keys are unchanged during a scaling operation.
- When you scale a **Standard** cache to a different size or to a **Premium** cache, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes. - When you scale out a clustered cache, new shards are provisioned and added to the Redis server cluster. Data is then resharded across all shards. - When you scale in a clustered cache, data is first resharded and then cluster size is reduced to required shards.-- In some cases, such as scaling or migrating your cache to a different cluster, the underlying IP address of the cache can change. The DNS records for the cache changes and is transparent to most applications. However, if you use an IP address to configure the connection to your cache, or to configure NSGs, or firewalls allowing traffic to the cache, your application might have trouble connecting sometime after that the DNS record updates.
+- In some cases, such as scaling or migrating your cache to a different cluster, the underlying IP address of the cache can change. The DNS record for the cache changes, which is transparent to most applications. However, if you use an IP address to configure the connection to your cache, or to configure NSGs or firewalls allowing traffic to the cache, your application might have trouble connecting sometime after the DNS record updates.
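The bullet above is why clients should connect by hostname rather than a cached IP. The sketch below (Python, using `localhost` as a placeholder hostname; not actual Redis client code) shows the re-resolve-on-reconnect pattern that picks up an IP change once the DNS record updates:

```python
import socket

def resolve_cache_host(hostname: str, ip_cache: dict) -> str:
    """Re-resolve the cache's DNS name on each reconnect instead of
    trusting a previously cached IP, so an IP change caused by scaling
    or migration is picked up once the DNS record updates."""
    current_ip = socket.gethostbyname(hostname)
    if ip_cache.get(hostname) != current_ip:
        ip_cache[hostname] = current_ip  # drop the stale entry
    return ip_cache[hostname]

stale = {"localhost": "10.0.0.99"}  # IP cached before a scale operation
print(resolve_cache_host("localhost", stale))  # the freshly resolved IP
```

Most Redis client libraries already resolve the hostname on connect; the pattern matters mainly if you copy an IP into firewall rules or connection strings.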
### Will I lose data from my cache during scaling?

- When you scale a **Basic** cache to a new size, all data is lost and the cache is unavailable during the scaling operation.
- When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved.
-- When you scale a **Standard** cache to a larger size or tier, or a **Premium** cache is scaled to a larger size, all data is typically preserved. When scaling a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
+- When you scale a **Standard** cache to a larger size or tier, or a **Premium** cache is scaled to a larger size, all data is typically preserved. When you scale a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
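To make the allkeys-lru behavior concrete, here is a toy model of that eviction policy (a sketch, not Redis's actual implementation): when capacity shrinks, the least-recently-used keys are evicted first.

```python
from collections import OrderedDict

class LruStore:
    """Toy allkeys-lru model: evicts the least-recently-used key
    whenever the store exceeds its capacity."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def set(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)          # mark as most recently used
        while len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the LRU key

    def scale_down(self, new_capacity: int):
        """Shrink capacity, evicting LRU keys until the data fits."""
        self.capacity = new_capacity
        while len(self.data) > self.capacity:
            self.data.popitem(last=False)

store = LruStore(capacity=3)
for k in ("a", "b", "c"):
    store.set(k, 1)
store.scale_down(2)          # "a" (least recently used) is evicted
print(list(store.data))      # ['b', 'c']
```

The takeaway is that a scale-down doesn't fail when data doesn't fit; it silently drops the coldest keys.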
### Is my custom databases setting affected during scaling?

If you configured a custom value for the `databases` setting during cache creation, keep in mind that some pricing tiers have different [databases limits](cache-configure.md#databases). Here are some considerations when scaling in this scenario:

-- When scaling to a pricing tier with a lower `databases` limit than the current tier:
+- When you scale to a pricing tier with a lower `databases` limit than the current tier:
- If you're using the default number of `databases`, which is 16 for all pricing tiers, no data is lost. - If you're using a custom number of `databases` that falls within the limits for the tier to which you're scaling, this `databases` setting is kept and no data is lost. - If you're using a custom number of `databases` that exceeds the limits of the new tier, the `databases` setting is lowered to the limits of the new tier and all data in the removed databases is lost.-- When scaling to a pricing tier with the same or higher `databases` limit than the current tier, your `databases` setting is kept and no data is lost.
+- When you scale to a pricing tier with the same or higher `databases` limit than the current tier, your `databases` setting is kept and no data is lost.
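The `databases` rules above reduce to a single comparison between your current setting and the new tier's limit. A small sketch (a hypothetical helper, not an Azure SDK call) that predicts the outcome:

```python
def scale_databases_outcome(current_databases: int, new_tier_limit: int):
    """Predict the effect on the `databases` setting when scaling,
    per the rules documented above. Returns (resulting setting,
    whether data in removed databases is lost)."""
    if current_databases <= new_tier_limit:
        return current_databases, False   # setting kept, no data lost
    return new_tier_limit, True           # setting lowered; removed DBs lost

print(scale_databases_outcome(16, 64))   # (16, False): default always fits
print(scale_databases_outcome(64, 16))   # (16, True): lowered, data lost
```

Because the default of 16 is the minimum limit across tiers, only custom values above the target tier's limit are at risk.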
While Standard and Premium caches have a 99.9% SLA for availability, there's no SLA for data loss.
While Standard and Premium caches have a 99.9% SLA for availability, there's no
### Are there scaling limitations with geo-replication?
-With geo-replication configured, you might notice that you cannot scale a cache or change the shards in a cluster. A geo-replication link between two caches prevents you from scaling operation or changing the number of shards in a cluster. You must unlink the cache to issue these commands. For more information, see [Configure Geo-replication](cache-how-to-geo-replication.md).
+With geo-replication configured, you might notice that you can't scale a cache or change the shards in a cluster. A geo-replication link between two caches prevents scaling operations or changing the number of shards in a cluster. You must unlink the cache to issue these commands. For more information, see [Configure Geo-replication](cache-how-to-geo-replication.md).
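The constraint described above can be modeled as a simple guard (a toy model, not an Azure API): scaling is rejected while a geo-replication link exists, and succeeds after unlinking.

```python
class CacheModel:
    """Toy model of the documented constraint: a geo-replication link
    blocks scaling and shard-count changes until the caches are unlinked."""
    def __init__(self):
        self.geo_linked = False
        self.size = "P1"

    def unlink(self):
        self.geo_linked = False

    def scale(self, new_size: str):
        if self.geo_linked:
            raise RuntimeError("Unlink the caches before scaling.")
        self.size = new_size

cache = CacheModel()
cache.geo_linked = True
try:
    cache.scale("P2")          # rejected while the link exists
except RuntimeError as err:
    print(err)                 # Unlink the caches before scaling.
cache.unlink()
cache.scale("P2")              # succeeds after unlinking
print(cache.size)              # P2
```

The real sequence is the same: unlink, scale both caches, then re-establish the link.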
### Operations that aren't supported
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
This section outlines variations and considerations when using Identity services
The following features have known limitations in Azure Government:

- Limitations with B2B Collaboration in supported Azure US Government tenants:
- - For more information about B2B collaboration limitations in Azure Government and to find out if B2B collaboration is available in your Azure Government tenant, see [Limitations of Azure AD B2B collaboration](../active-directory/external-identities/current-limitations.md#azure-us-government-clouds).
+ - For more information about B2B collaboration limitations in Azure Government and to find out if B2B collaboration is available in your Azure Government tenant, see [Azure AD B2B in government and national clouds](../active-directory/external-identities/b2b-government-national-clouds.md).
- B2B collaboration via Power BI is not supported. When you invite a guest user from within Power BI, the B2B flow is not used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error.
- Microsoft 365 Groups are not supported for B2B users and can't be enabled.
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/geocoding-coverage.md
The ability to geocode in a country/region is dependent upon the road data cover
| Canada | ✓ | ✓ | ✓ | ✓ | ✓ |
| Cayman Islands | | | ✓ | ✓ | ✓ |
| Chile | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Clipperton Island | | | | ✓ | ✓ |
+| Clipperton Island | | | | ✓ | ✓ |
| Colombia | ✓ | ✓ | ✓ | ✓ | ✓ |
| Costa Rica | | | ✓ | ✓ | ✓ |
| Cuba | | | ✓ | ✓ | ✓ |
| Curaçao | | | | ✓ | ✓ |
| Dominica | | | ✓ | ✓ | ✓ |
-| Dominican Republic | | | ✓ | ✓ | ✓ |
+| Dominican Republic | | | ✓ | ✓ | ✓ |
| Ecuador | ✓ | ✓ | ✓ | ✓ | ✓ |
| El Salvador | | | ✓ | ✓ | ✓ |
| Falkland Islands | | | | ✓ | ✓ |
The ability to geocode in a country/region is dependent upon the road data cover
| South Georgia & the South Sandwich Islands | | | | ✓ | ✓ |
| Suriname | | | ✓ | ✓ | ✓ |
| Trinidad & Tobago | | | ✓ | ✓ | ✓ |
-| Turks & Caicos Islands | | | | ✓ | ✓ |
+| Turks & Caicos Islands | | | | ✓ | ✓ |
+| U.S. Outlying Islands | | | | ✓ | ✓ |
| U.S. Virgin Islands | | ✓ | ✓ | ✓ | ✓ |
-| United States Minor Outlying Islands | | | | ✓ | ✓ |
-| United States of America | ✓ | ✓ | ✓ | ✓ | ✓ |
-| United States Minor Outlying Islands | | | | ✓ | ✓ |
+| United States | ✓ | ✓ | ✓ | ✓ | ✓ |
| Uruguay | ✓ | ✓ | ✓ | ✓ | ✓ |
| Venezuela | | | ✓ | ✓ | ✓ |
The ability to geocode in a country/region is dependent upon the road data cover
| Cook Islands | | | | ✓ | ✓ |
| Fiji | | | ✓ | ✓ | ✓ |
| French Polynesia | | | ✓ | ✓ | ✓ |
-| French Southern Territories | | | | ✓ | ✓ |
| Heard Island & McDonald Islands | | | | ✓ | ✓ |
| Hong Kong SAR | ✓ | ✓ | ✓ | ✓ | ✓ |
| India | ✓ | | ✓ | ✓ | ✓ |
The ability to geocode in a country/region is dependent upon the road data cover
| Samoa | | | | ✓ | ✓ |
| Singapore | ✓ | ✓ | ✓ | ✓ | ✓ |
| Solomon Islands | | | | ✓ | ✓ |
-| South Korea | | | | ✓ | ✓ |
+| South Korea | | | | ✓ | ✓ |
| Sri Lanka | | | | ✓ | ✓ |
| Taiwan | ✓ | ✓ | ✓ | ✓ | ✓ |
| Thailand | ✓ | | ✓ | ✓ | ✓ |
The ability to geocode in a country/region is dependent upon the road data cover
| Hungary | ✓ | ✓ | ✓ | ✓ | ✓ |
| Iceland | ✓ | ✓ | ✓ | ✓ | ✓ |
| Ireland | ✓ | ✓ | ✓ | ✓ | ✓ |
-| Isle Of Man | | ✓ | ✓ | ✓ | ✓ |
+| Isle of Man | | ✓ | ✓ | ✓ | ✓ |
| Italy | ✓ | ✓ | ✓ | ✓ | ✓ |
| Jan Mayen | | | | ✓ | ✓ |
| Jersey | | ✓ | ✓ | ✓ | ✓ |
The ability to geocode in a country/region is dependent upon the road data cover
| Central African Republic | | | ✓ | ✓ | ✓ |
| Chad | | | | ✓ | ✓ |
| Congo | | | | ✓ | ✓ |
+| Congo (DRC) | | | ✓ | ✓ | ✓ |
| Côte d'Ivoire | | | ✓ | ✓ | ✓ |
-| Democratic Republic of the Congo | | | ✓ | ✓ | ✓ |
| Djibouti | | | ✓ | ✓ | ✓ |
| Egypt | ✓ | ✓ | ✓ | ✓ | ✓ |
| Equatorial Guinea | | | | ✓ | ✓ |
The ability to geocode in a country/region is dependent upon the road data cover
| Libya | | | | ✓ | ✓ |
| Madagascar | | | ✓ | ✓ | ✓ |
| Malawi | | | ✓ | ✓ | ✓ |
-| Maldives | | | | ✓ | ✓ |
| Mali | | | | ✓ | ✓ |
| Mauritania | | | | ✓ | ✓ |
| Mauritius | | | ✓ | ✓ | ✓ |
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-maps/weather-coverage.md
The following table refers to the *Other* column and provides a list containing
| Bouvet Island | ✓ | | | ✓ |
| Burkina Faso | ✓ | | | ✓ |
| Burundi | ✓ | | | ✓ |
-| Cabo Verde | ✓ | | | ✓ |
| Cameroon | ✓ | | | ✓ |
+| Cape Verde | ✓ | | | ✓ |
| Central African Republic | ✓ | | | ✓ |
| Chad | ✓ | | | ✓ |
| Comoros | ✓ | | | ✓ |
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/convert-classic-resource.md
To write queries against the [new workspace-based table structure/schema](apm-ta
When you query directly from the Log Analytics UI within your workspace, you will only see the data that is ingested after migration. To see both your classic Application Insights data and the new data ingested after migration in a unified query experience, use the Logs (Analytics) query view from within your migrated Application Insights resource.
+> [!NOTE]
+> If you rename your Application Insights resource after migrating to the workspace-based model, the Application Insights Logs tab will no longer show the telemetry collected before renaming. You'll be able to see all data, old and new, on the Logs tab of the associated Log Analytics resource.
+
## Programmatic resource migration

### Azure CLI
azure-monitor Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/insights/azure-sql.md
Title: Azure SQL Analytics (preview) solution in Azure Monitor description: Azure SQL Analytics solution helps you manage your Azure SQL databases --++ Last updated 11/22/2021-+
azure-monitor Vminsights Enable Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/vminsights-enable-hybrid.md
You can download the Dependency agent from these locations:
| File | OS | Version | SHA-256 | |:--|:--|:--|:--|
-| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.12.18430 | 9CE3B53D3A67A2C3239E1162364BF94B772764B4ADD78C48559E56F46B98C484 |
-| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.12.18430 | 04BD3D2F449220B19DD1DA47A6995087123140B13E45747C743BAD79A312ACE6 |
+| [InstallDependencyAgent-Windows.exe](https://aka.ms/dependencyagentwindows) | Windows | 9.10.13.19190 | 0882504FE5828C4C4BA0A869BD9F6D5B0020A52156DDBD21D55AAADA762923C4 |
+| [InstallDependencyAgent-Linux64.bin](https://aka.ms/dependencyagentlinux) | Linux | 9.10.13.19190 | 7D90A2A7C6F1D7FB2BCC274ADC4C5D6C118E832FF8A620971734AED4F446B030 |
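After downloading an installer, you can verify it against the published SHA-256 value in the table above. A minimal sketch of the comparison, demonstrated on a throwaway file rather than the actual agent binary:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the uppercase hex SHA-256 of a file in chunks, matching
    the format of the checksums published in the table above."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest().upper()

# Demonstrated on a throwaway file containing the standard test vector
# "abc"; for a real download, compare against the table's SHA-256 column.
with open("demo.bin", "wb") as f:
    f.write(b"abc")
print(sha256_of("demo.bin"))
# BA7816BF8F01CFEA414140DE5DAE2223B00361A396177A9CB410FF61F20015AD
```

Reading in chunks keeps memory flat even for large installer binaries.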
## Install the Dependency agent on Windows
azure-netapp-files Azacsnap Installation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/azacsnap-installation.md
na Previously updated : 09/08/2021 Last updated : 02/05/2022
Create RBAC Service Principal
1. Cut and paste the output content into a file called `azureauth.json`, stored on the same system as the `azacsnap` command, and secure the file with appropriate system permissions.
+
+ > [!WARNING]
+ > Make sure the format of the JSON file is exactly as described above, especially with the URLs enclosed in double quotes (").
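A quick way to catch the formatting mistakes the warning describes is to confirm the file parses as JSON and that every value, including the URLs, is a double-quoted string. A small sketch (placeholder field names and values, not a definitive schema for the service-principal file):

```python
import json

def check_auth_file(path: str) -> bool:
    """Verify the file parses as JSON and every value is a string,
    which catches unquoted URLs and similar formatting mistakes."""
    try:
        with open(path) as f:
            data = json.load(f)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and all(
        isinstance(v, str) for v in data.values()
    )

# Example with placeholder values; the real file comes from the
# service-principal creation step above.
with open("azureauth.json", "w") as f:
    f.write('{"clientId": "0000", '
            '"activeDirectoryEndpointUrl": "https://login.microsoftonline.com"}')
print(check_auth_file("azureauth.json"))  # True
```

An unquoted URL would make `json.load` raise, so the check returns `False` for exactly the malformed files the warning is about.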
# [Azure Large Instance (Bare Metal)](#tab/azure-large-instance)
azure-resource-manager Move Support Resources https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/move-support-resources.md
Jump to a resource provider namespace:
> | servers / elasticpools | Yes | Yes | Yes <br/><br/> [Learn more](../../azure-sql/database/move-resources-across-regions.md) about moving elastic pools across regions.<br/><br/> [Learn more](../../resource-mover/tutorial-move-region-sql.md) about using Azure Resource Mover to move Azure SQL elastic pools. |
> | servers / jobaccounts | Yes | Yes | No |
> | servers / jobagents | Yes | Yes | No |
-> | virtualclusters | Yes | Yes | Yes |
+> | virtualclusters | No | No | No |
## Microsoft.SqlVirtualMachine
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/overview.md
Title: Azure Resource Manager overview
description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure.
Previously updated : 08/27/2021 Last updated : 02/03/2022

# What is Azure Resource Manager?

Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. You use management features, like access control, locks, and tags, to secure and organize your resources after deployment.
-To learn about Azure Resource Manager templates (ARM templates), see the [template deployment overview](../templates/overview.md).
+To learn about Azure Resource Manager templates (ARM templates), see the [ARM template overview](../templates/overview.md). To learn about Bicep, see [Bicep overview](../bicep/overview.md).
## Consistent management layer
If you're new to Azure Resource Manager, there are some terms you might not be f
* **resource** - A manageable item that is available through Azure. Virtual machines, storage accounts, web apps, databases, and virtual networks are examples of resources. Resource groups, subscriptions, management groups, and tags are also examples of resources.
* **resource group** - A container that holds related resources for an Azure solution. The resource group includes those resources that you want to manage as a group. You decide which resources belong in a resource group based on what makes the most sense for your organization. See [Resource groups](#resource-groups).
* **resource provider** - A service that supplies Azure resources. For example, a common resource provider is `Microsoft.Compute`, which supplies the virtual machine resource. `Microsoft.Storage` is another common resource provider. See [Resource providers and types](resource-providers-and-types.md).
-* **Resource Manager template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, management group, or tenant. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../templates/overview.md).
-* **declarative syntax** - Syntax that lets you state "Here is what I intend to create" without having to write the sequence of programming commands to create it. The Resource Manager template is an example of declarative syntax. In the file, you define the properties for the infrastructure to deploy to Azure. See [Template deployment overview](../templates/overview.md).
+* **declarative syntax** - Syntax that lets you state "Here's what I intend to create" without having to write the sequence of programming commands to create it. ARM templates and Bicep files are examples of declarative syntax. In those files, you define the properties for the infrastructure to deploy to Azure.
+* **ARM template** - A JavaScript Object Notation (JSON) file that defines one or more resources to deploy to a resource group, subscription, management group, or tenant. The template can be used to deploy the resources consistently and repeatedly. See [Template deployment overview](../templates/overview.md).
+* **Bicep file** - A file for declaratively deploying Azure resources. Bicep is a language that's been designed to provide the best authoring experience for infrastructure as code solutions in Azure. See [Bicep overview](../bicep/overview.md).
## The benefits of using Resource Manager
There are some important factors to consider when defining your resource group:
* When you create a resource group, you need to provide a location for that resource group.
- You may be wondering, "Why does a resource group need a location? And, if the resources can have different locations than the resource group, why does the resource group location matter at all?"
+ You may be wondering, "Why does a resource group need a location? And, if the resources can have different locations than the resource group, why does the resource group location matter at all?"
- The resource group stores metadata about the resources. When you specify a location for the resource group, you're specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region.
+ The resource group stores metadata about the resources. When you specify a location for the resource group, you're specifying where that metadata is stored. For compliance reasons, you may need to ensure that your data is stored in a particular region.
- Except in global resources like Azure Content Delivery Network, Azure DNS, Azure Traffic Manager, and Azure Front Door, if a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them.
+ If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you can't update them. This condition doesn't apply to global resources like Azure Content Delivery Network, Azure DNS, Azure Traffic Manager, and Azure Front Door.
For more information about building reliable applications, see [Designing reliable Azure applications](/azure/architecture/checklist/resiliency-per-service).
The Azure Resource Manager service is designed for resiliency and continuous ava
* Distributed across regions. Some services are regional.
-* Distributed across Availability Zones (as well as regions) in locations that have multiple Availability Zones.
+* Distributed across Availability Zones (and regions) in locations that have multiple Availability Zones.
* Not dependent on a single logical data center.
azure-resource-manager Tag Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/management/tag-support.md
Title: Tag support for resources
description: Shows which Azure resource types support tags. Provides details for all Azure services.
Previously updated : 11/30/2021 Last updated : 02/04/2022

# Tag support for Azure resources
Jump to a resource provider namespace:
> - [Microsoft.Cdn](#microsoftcdn) > - [Microsoft.CertificateRegistration](#microsoftcertificateregistration) > - [Microsoft.ChangeAnalysis](#microsoftchangeanalysis)
+> - [Microsoft.Chaos](#microsoftchaos)
> - [Microsoft.ClassicCompute](#microsoftclassiccompute) > - [Microsoft.ClassicInfrastructureMigrate](#microsoftclassicinfrastructuremigrate) > - [Microsoft.ClassicNetwork](#microsoftclassicnetwork)
Jump to a resource provider namespace:
> - [Microsoft.CognitiveServices](#microsoftcognitiveservices) > - [Microsoft.Commerce](#microsoftcommerce) > - [Microsoft.Compute](#microsoftcompute)
+> - [Microsoft.Communication](#microsoftcommunication)
> - [Microsoft.ConfidentialLedger](#microsoftconfidentialledger) > - [Microsoft.ConnectedCache](#microsoftconnectedcache) > - [Microsoft.ConnectedVehicle](#microsoftconnectedvehicle)
Jump to a resource provider namespace:
> - [Microsoft.Insights](#microsoftinsights) > - [Microsoft.Intune](#microsoftintune) > - [Microsoft.IoTCentral](#microsoftiotcentral)
+> - [Microsoft.IoTFirmwareDefense](#microsoftiotfirmwaredefense)
> - [Microsoft.IoTSecurity](#microsoftiotsecurity) > - [Microsoft.IoTSpaces](#microsoftiotspaces) > - [Microsoft.KeyVault](#microsoftkeyvault)
Jump to a resource provider namespace:
> - [Microsoft.ResourceHealth](#microsoftresourcehealth) > - [Microsoft.Resources](#microsoftresources) > - [Microsoft.SaaS](#microsoftsaas)
+> - [Microsoft.Scheduler](#microsoftscheduler)
> - [Microsoft.Scom](#microsoftscom) > - [Microsoft.ScVmm](#microsoftscvmm) > - [Microsoft.Search](#microsoftsearch)
Jump to a resource provider namespace:
> | alertsSummary | No | No | > | alertsSummaryList | No | No | > | migrateFromSmartDetection | No | No |
+> | prometheusRuleGroups | Yes | Yes |
> | resourceHealthAlertRules | Yes | Yes | > | smartDetectorAlertRules | Yes | Yes | > | smartGroups | No | No |
Jump to a resource provider namespace:
> | configurationStores | Yes | No | > | configurationStores / eventGridFilters | No | No | > | configurationStores / keyValues | No | No |
+> | deletedConfigurationStores | No | No |
## Microsoft.AppPlatform
Jump to a resource provider namespace:
> | - | -- | -- | > | accounts | No | No | > | accounts / devices | No | No |
+> | accounts / devices / sensors | No | No |
+> | accounts / solutioninstances | No | No |
+> | accounts / solutions | No | No |
+> | accounts / targets | No | No |
## Microsoft.AzureSphere
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | clusters | No | No |
-> | clusters / arcsettings | No | No |
-> | clusters / arcsettings / extensions | No | No |
-> | galleryImages | No | No |
-> | networkInterfaces | No | No |
-> | virtualHardDisks | No | No |
+> | clusters / arcSettings | No | No |
+> | clusters / arcSettings / extensions | No | No |
+> | galleryimages | No | No |
+> | networkinterfaces | No | No |
+> | virtualharddisks | No | No |
> | virtualmachines | No | No | > | virtualmachines / extensions | No | No | > | virtualmachines / hybrididentitymetadata | No | No |
-> | virtualNetworks | No | No |
+> | virtualnetworks | No | No |
## Microsoft.BackupSolutions
Jump to a resource provider namespace:
> | savingsPlanOrderAliases | No | No | > | savingsPlanOrders | No | No | > | savingsPlanOrders / savingsPlans | No | No |
+> | savingsPlans | No | No |
> | validate | No | No | ## Microsoft.Blockchain
Jump to a resource provider namespace:
> | profile | No | No | > | resourceChanges | No | No |
+## Microsoft.Chaos
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | artifactSetDefinitions | No | No |
+> | artifactSetSnapshots | No | No |
+> | chaosExperiments | Yes | Yes |
+> | chaosProviderConfigurations | No | No |
+> | chaosTargets | No | No |
+> | experiments | Yes | Yes |
+> | targets | No | No |
+ ## Microsoft.ClassicCompute > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | accounts | Yes | Yes |
+> | accounts / networkSecurityPerimeterAssociationProxies | No | No |
> | accounts / privateEndpointConnectionProxies | No | No | > | accounts / privateEndpointConnections | No | No | > | accounts / privateLinkResources | No | No |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | availabilitySets | Yes | Yes |
+> | capacityReservationGroups | Yes | Yes |
+> | capacityReservationGroups / capacityReservations | Yes | Yes |
> | cloudServices | Yes | Yes | > | cloudServices / networkInterfaces | No | No | > | cloudServices / publicIPAddresses | No | No |
Jump to a resource provider namespace:
> | proximityPlacementGroups | Yes | Yes | > | restorePointCollections | Yes | Yes | > | restorePointCollections / restorePoints | No | No |
+> | restorePointCollections / restorePoints / diskRestorePoints | No | No |
> | sharedVMExtensions | Yes | Yes | > | sharedVMExtensions / versions | No | No | > | sharedVMImages | Yes | Yes |
Jump to a resource provider namespace:
> | virtualMachineScaleSets / networkInterfaces | No | No | > | virtualMachineScaleSets / publicIPAddresses | Yes | No | > | virtualMachineScaleSets / virtualMachines | No | No |
+> | virtualMachineScaleSets / virtualMachines / extensions | No | No |
> | virtualMachineScaleSets / virtualMachines / networkInterfaces | No | No | > [!NOTE] > You can't add a tag to a virtual machine that has been marked as generalized. You mark a virtual machine as generalized with [Set-AzVm -Generalized](/powershell/module/Az.Compute/Set-AzVM) or [az vm generalize](/cli/azure/vm#az-vm-generalize).
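As a refresher on how the "Supports tags" column applies in practice, a taggable type from the table accepts a top-level `tags` object in a template. A minimal sketch (the resource name and tag values below are placeholders) for a resource listed as supporting tags:

```json
{
  "type": "Microsoft.Compute/availabilitySets",
  "apiVersion": "2021-11-01",
  "name": "example-avset",
  "location": "[resourceGroup().location]",
  "tags": {
    "costCenter": "12345",
    "environment": "dev"
  },
  "properties": {}
}
```

Types marked "No" in the table reject a `tags` element at deployment time; "Tag in cost report" indicates whether the tag then flows through to cost analysis.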
+## Microsoft.Communication
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | CommunicationServices | No | No |
+> | CommunicationServices / eventGridFilters | No | No |
+> | EmailServices | No | No |
+> | EmailServices / Domains | No | No |
+> | registeredSubscriptions | No | No |
+ ## Microsoft.ConfidentialLedger > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | domains / topics | No | No | > | eventSubscriptions | No | No | > | extensionTopics | No | No |
+> | partnerDestinations | Yes | Yes |
> | partnerNamespaces | Yes | Yes | > | partnerNamespaces / channels | No | No | > | partnerNamespaces / eventChannels | No | No |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | instances | No | No |
+> | instances / chambers | No | No |
+> | instances / chambers / accessProfiles | No | No |
+> | instances / chambers / workloads | No | No |
+> | instances / consortiums | No | No |
## Microsoft.HybridCompute
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | provisionedClusters | No | No |
+> | provisionedClusters / agentPools | No | No |
+> | provisionedClusters / hybridIdentityMetadata | No | No |
## Microsoft.HybridData
Jump to a resource provider namespace:
> | appTemplates | No | No | > | IoTApps | Yes | Yes |
+## Microsoft.IoTFirmwareDefense
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | firmwareGroups | No | No |
+> | firmwareGroups / firmwares | No | No |
+ ## Microsoft.IoTSecurity > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | - | -- | -- | > | extensions | No | No | > | fluxConfigurations | No | No |
+> | namespaces | No | No |
> | sourceControlConfigurations | No | No | ## Microsoft.Kusto
Jump to a resource provider namespace:
> | privateStores / collections / offers | No | No | > | privateStores / collections / transferOffers | No | No | > | privateStores / collectionsToSubscriptionsMapping | No | No |
+> | privateStores / fetchAllSubscriptionsInTenant | No | No |
> | privateStores / offers | No | No | > | privateStores / offers / acknowledgeNotification | No | No | > | privateStores / queryApprovedPlans | No | No |
Jump to a resource provider namespace:
> | bastionHosts | Yes | No | > | bgpServiceCommunities | No | No | > | connections | Yes | Yes |
+> | customIpPrefixes | Yes | Yes |
> | ddosCustomPolicies | Yes | Yes | > | ddosProtectionPlans | Yes | Yes | > | dnsOperationStatuses | No | No |
Jump to a resource provider namespace:
> | dnszones / SOA | No | No | > | dnszones / SRV | No | No | > | dnszones / TXT | No | No |
+> | dscpConfigurations | Yes | Yes |
> | expressRouteCircuits | Yes | Yes | > | expressRouteCrossConnections | Yes | Yes | > | expressRouteGateways | Yes | Yes |
Jump to a resource provider namespace:
> | frontdoorWebApplicationFirewallPolicies | Yes, but limited (see [note below](#network-limitations)) | Yes | > | getDnsResourceReference | No | No | > | internalNotify | No | No |
+> | ipAllocations | Yes | Yes |
> | ipGroups | Yes, see [note below](#network-limitations) | Yes | > | loadBalancers | Yes | Yes | > | localNetworkGateways | Yes | Yes | > | natGateways | Yes | Yes | > | networkIntentPolicies | Yes | Yes | > | networkInterfaces | Yes | Yes |
+> | networkManagers | Yes | Yes |
> | networkProfiles | Yes | Yes | > | networkSecurityGroups | Yes | Yes |
+> | networkVirtualAppliances | Yes | Yes |
> | networkWatchers | Yes | Yes | > | networkWatchers / connectionMonitors | Yes | No | > | networkWatchers / flowLogs | Yes | No |
Jump to a resource provider namespace:
> | publicIPPrefixes | Yes | Yes | > | routeFilters | Yes | Yes | > | routeTables | Yes | Yes |
+> | securityPartnerProviders | Yes | Yes |
> | serviceEndpointPolicies | Yes | Yes | > | trafficManagerGeographicHierarchies | No | No | > | trafficmanagerprofiles | Yes, see [note below](#network-limitations) | Yes |
Jump to a resource provider namespace:
> | virtualNetworkTaps | Yes | Yes | > | virtualWans | Yes | No | > | vpnGateways | Yes | Yes |
+> | vpnServerConfigurations | Yes | Yes |
> | vpnSites | Yes | Yes | > | webApplicationFirewallPolicies | Yes | Yes |
Jump to a resource provider namespace:
> | Resource type | Supports tags | Tag in cost report | > | - | -- | -- | > | accounts | Yes | Yes |
+> | accounts / kafkaConfigurations | No | No |
> | deletedAccounts | No | No | > | getDefaultAccount | No | No | > | removeDefaultAccount | No | No |
Jump to a resource provider namespace:
> | resources | Yes | Yes | > | saasresources | No | No |
+## Microsoft.Scheduler
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Supports tags | Tag in cost report |
+> | - | -- | -- |
+> | jobcollections | Yes | Yes |
+ ## Microsoft.Scom > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | applicationWhitelistings | No | No | > | assessmentMetadata | No | No | > | assessments | No | No |
+> | assessments / governanceAssignments | No | No |
> | assignments | Yes | Yes |
+> | attackPaths | No | No |
> | autoDismissAlertsRules | No | No | > | automations | Yes | Yes | > | AutoProvisioningSettings | No | No |
Jump to a resource provider namespace:
> | customAssessmentAutomations | Yes | Yes | > | customEntityStoreAssignments | Yes | Yes | > | dataCollectionAgents | No | No |
+> | dataScanners | Yes | Yes |
> | deviceSecurityGroups | No | No | > | discoveredSecuritySolutions | No | No | > | externalSecuritySolutions | No | No |
+> | governanceRules | No | No |
> | InformationProtectionPolicies | No | No | > | ingestionSettings | No | No | > | insights | No | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | dryruns | No | No |
> | linkers | No | No | ## Microsoft.Services
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | instancePools | Yes | Yes |
> | longtermRetentionManagedInstance / longtermRetentionDatabase / longtermRetentionBackup | No | No | > | longtermRetentionServer / longtermRetentionDatabase / longtermRetentionBackup | No | No | > | managedInstances | Yes | Yes |
+> | managedInstances / administrators | No | No |
> | managedInstances / databases | No | No |
+> | managedInstances / databases / backupLongTermRetentionPolicies | No | No |
> | managedInstances / databases / backupShortTermRetentionPolicies | No | No | > | managedInstances / databases / schemas / tables / columns / sensitivityLabels | No | No | > | managedInstances / databases / vulnerabilityAssessments | No | No |
Jump to a resource provider namespace:
> | managedInstances / encryptionProtector | No | No | > | managedInstances / keys | No | No | > | managedInstances / restorableDroppedDatabases / backupShortTermRetentionPolicies | No | No |
+> | managedInstances / sqlAgent | No | No |
> | managedInstances / vulnerabilityAssessments | No | No | > | servers | Yes | Yes | > | servers / administrators | No | No |
+> | servers / advisors | No | No |
+> | servers / auditingSettings | No | No |
> | servers / communicationLinks | No | No | > | servers / databases | Yes (see [note below](#sqlnote)) | Yes |
+> | servers / databases / advisors | No | No |
+> | servers / databases / auditingSettings | No | No |
+> | servers / databases / backupLongTermRetentionPolicies | No | No |
+> | servers / databases / backupShortTermRetentionPolicies | No | No |
+> | servers / databases / dataMaskingPolicies | No | No |
+> | servers / databases / extensions | No | No |
+> | servers / databases / securityAlertPolicies | No | No |
+> | servers / databases / syncGroups | No | No |
+> | servers / databases / syncGroups / syncMembers | No | No |
+> | servers / databases / transparentDataEncryption | No | No |
+> | servers / databases / workloadGroups | No | No |
+> | servers / elasticpools | Yes | Yes |
> | servers / encryptionProtector | No | No |
+> | servers / failoverGroups | No | No |
> | servers / firewallRules | No | No |
+> | servers / jobAgents | Yes | Yes |
+> | servers / jobAgents / jobs | No | No |
+> | servers / jobAgents / jobs / steps | No | No |
+> | servers / jobAgents / jobs / executions | No | No |
> | servers / keys | No | No | > | servers / restorableDroppedDatabases | No | No | > | servers / serviceobjectives | No | No | > | servers / tdeCertificates | No | No |
+> | servers / virtualNetworkRules | No | No |
> | virtualClusters | No | No | <a id="sqlnote"></a>
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Supports tags | Tag in cost report | > | - | -- | -- |
+> | dataMovers | Yes | Yes |
+> | dataMovers / agents | No | No |
+> | dataMovers / endpoints | No | No |
+> | dataMovers / projects | No | No |
+> | dataMovers / projects / jobDefinitions | No | No |
+> | dataMovers / projects / jobDefinitions / jobRuns | No | No |
> | deletedAccounts | No | No | > | storageAccounts | Yes | Yes | > | storageAccounts / blobServices | No | No |
azure-resource-manager Deployment Complete Mode Deletion https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/templates/deployment-complete-mode-deletion.md
Title: Complete mode deletion description: Shows how resource types handle complete mode deletion in Azure Resource Manager templates. Previously updated : 12/02/2021 Last updated : 02/04/2022 # Deletion of Azure resources for complete mode deployments
Jump to a resource provider namespace:
> - [Microsoft.Cdn](#microsoftcdn) > - [Microsoft.CertificateRegistration](#microsoftcertificateregistration) > - [Microsoft.ChangeAnalysis](#microsoftchangeanalysis)
+> - [Microsoft.Chaos](#microsoftchaos)
> - [Microsoft.ClassicCompute](#microsoftclassiccompute) > - [Microsoft.ClassicInfrastructureMigrate](#microsoftclassicinfrastructuremigrate) > - [Microsoft.ClassicNetwork](#microsoftclassicnetwork)
Jump to a resource provider namespace:
> - [Microsoft.CognitiveServices](#microsoftcognitiveservices) > - [Microsoft.Compute](#microsoftcompute) > - [Microsoft.Commerce](#microsoftcommerce)
+> - [Microsoft.Communication](#microsoftcommunication)
> - [Microsoft.ConfidentialLedger](#microsoftconfidentialledger) > - [Microsoft.ConnectedCache](#microsoftconnectedcache) > - [Microsoft.ConnectedVehicle](#microsoftconnectedvehicle)
Jump to a resource provider namespace:
> - [Microsoft.Insights](#microsoftinsights) > - [Microsoft.Intune](#microsoftintune) > - [Microsoft.IoTCentral](#microsoftiotcentral)
+> - [Microsoft.IoTFirmwareDefense](#microsoftiotfirmwaredefense)
> - [Microsoft.IoTSecurity](#microsoftiotsecurity) > - [Microsoft.IoTSpaces](#microsoftiotspaces) > - [Microsoft.KeyVault](#microsoftkeyvault)
Jump to a resource provider namespace:
> - [Microsoft.ResourceHealth](#microsoftresourcehealth) > - [Microsoft.Resources](#microsoftresources) > - [Microsoft.SaaS](#microsoftsaas)
+> - [Microsoft.Scheduler](#microsoftscheduler)
> - [Microsoft.Scom](#microsoftscom) > - [Microsoft.ScVmm](#microsoftscvmm) > - [Microsoft.Search](#microsoftsearch)
Jump to a resource provider namespace:
> | alertsSummary | No | > | alertsSummaryList | No | > | migrateFromSmartDetection | No |
+> | prometheusRuleGroups | Yes |
> | resourceHealthAlertRules | Yes | > | smartDetectorAlertRules | Yes | > | smartGroups | No |
Jump to a resource provider namespace:
> | configurationStores | Yes | > | configurationStores / eventGridFilters | No | > | configurationStores / keyValues | No |
+> | deletedConfigurationStores | No |
## Microsoft.AppPlatform
Jump to a resource provider namespace:
> | - | -- | > | accounts | No | > | accounts / devices | No |
+> | accounts / devices / sensors | No |
+> | accounts / solutioninstances | No |
+> | accounts / solutions | No |
+> | accounts / targets | No |
## Microsoft.AzureSphere
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | clusters | No |
-> | clusters / arcsettings | No |
-> | clusters / arcsettings / extensions | No |
-> | galleryImages | No |
-> | networkInterfaces | No |
-> | virtualHardDisks | No |
+> | clusters / arcSettings | No |
+> | clusters / arcSettings / extensions | No |
+> | galleryimages | No |
+> | networkinterfaces | No |
+> | virtualharddisks | No |
> | virtualmachines | No | > | virtualmachines / extensions | No | > | virtualmachines / hybrididentitymetadata | No |
-> | virtualNetworks | No |
+> | virtualnetworks | No |
## Microsoft.BackupSolutions
Jump to a resource provider namespace:
> | savingsPlanOrderAliases | No | > | savingsPlanOrders | No | > | savingsPlanOrders / savingsPlans | No |
+> | savingsPlans | No |
> | validate | No | ## Microsoft.Blockchain
Jump to a resource provider namespace:
> | profile | No | > | resourceChanges | No |
+## Microsoft.Chaos
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | artifactSetDefinitions | No |
+> | artifactSetSnapshots | No |
+> | chaosExperiments | Yes |
+> | chaosProviderConfigurations | No |
+> | chaosTargets | No |
+> | experiments | Yes |
+> | targets | No |
+ ## Microsoft.ClassicCompute > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | accounts | Yes |
+> | accounts / networkSecurityPerimeterAssociationProxies | No |
> | accounts / privateEndpointConnectionProxies | No | > | accounts / privateEndpointConnections | No | > | accounts / privateLinkResources | No |
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | availabilitySets | Yes |
+> | capacityReservationGroups | Yes |
+> | capacityReservationGroups / capacityReservations | Yes |
> | cloudServices | Yes | > | cloudServices / networkInterfaces | No | > | cloudServices / publicIPAddresses | No |
Jump to a resource provider namespace:
> | proximityPlacementGroups | Yes | > | restorePointCollections | Yes | > | restorePointCollections / restorePoints | No |
+> | restorePointCollections / restorePoints / diskRestorePoints | No |
> | sharedVMExtensions | Yes | > | sharedVMExtensions / versions | No | > | sharedVMImages | Yes |
Jump to a resource provider namespace:
> | virtualMachineScaleSets / networkInterfaces | No | > | virtualMachineScaleSets / publicIPAddresses | No | > | virtualMachineScaleSets / virtualMachines | No |
+> | virtualMachineScaleSets / virtualMachines / extensions | No |
> | virtualMachineScaleSets / virtualMachines / networkInterfaces | No | ## Microsoft.Commerce
Jump to a resource provider namespace:
> | RateCard | No | > | UsageAggregates | No |
+## Microsoft.Communication
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | CommunicationServices | No |
+> | CommunicationServices / eventGridFilters | No |
+> | EmailServices | No |
+> | EmailServices / Domains | No |
+> | registeredSubscriptions | No |
+ ## Microsoft.ConfidentialLedger > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | domains / topics | No | > | eventSubscriptions | No | > | extensionTopics | No |
+> | partnerDestinations | Yes |
> | partnerNamespaces | Yes | > | partnerNamespaces / channels | No | > | partnerNamespaces / eventChannels | No |
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | instances | No |
+> | instances / chambers | No |
+> | instances / chambers / accessProfiles | No |
+> | instances / chambers / workloads | No |
+> | instances / consortiums | No |
## Microsoft.HybridCompute
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | provisionedClusters | No |
+> | provisionedClusters / agentPools | No |
+> | provisionedClusters / hybridIdentityMetadata | No |
## Microsoft.HybridData
Jump to a resource provider namespace:
> | appTemplates | No | > | IoTApps | Yes |
+## Microsoft.IoTFirmwareDefense
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | firmwareGroups | No |
+> | firmwareGroups / firmwares | No |
+ ## Microsoft.IoTSecurity > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | - | -- | > | extensions | No | > | fluxConfigurations | No |
+> | namespaces | No |
> | sourceControlConfigurations | No | ## Microsoft.Kusto
Jump to a resource provider namespace:
> | privateStores / collections / offers | No | > | privateStores / collections / transferOffers | No | > | privateStores / collectionsToSubscriptionsMapping | No |
+> | privateStores / fetchAllSubscriptionsInTenant | No |
> | privateStores / offers | No | > | privateStores / offers / acknowledgeNotification | No | > | privateStores / queryApprovedPlans | No |
Jump to a resource provider namespace:
> | bastionHosts | Yes | > | bgpServiceCommunities | No | > | connections | Yes |
+> | customIpPrefixes | Yes |
> | ddosCustomPolicies | Yes | > | ddosProtectionPlans | Yes | > | dnsOperationStatuses | No |
Jump to a resource provider namespace:
> | dnszones / SOA | No | > | dnszones / SRV | No | > | dnszones / TXT | No |
+> | dscpConfigurations | Yes |
> | expressRouteCircuits | Yes | > | expressRouteCrossConnections | Yes | > | expressRouteGateways | Yes |
Jump to a resource provider namespace:
> | frontdoorWebApplicationFirewallPolicies | Yes | > | getDnsResourceReference | No | > | internalNotify | No |
+> | ipAllocations | Yes |
> | ipGroups | Yes | > | loadBalancers | Yes | > | localNetworkGateways | Yes | > | natGateways | Yes | > | networkIntentPolicies | Yes | > | networkInterfaces | Yes |
+> | networkManagers | Yes |
> | networkProfiles | Yes | > | networkSecurityGroups | Yes |
+> | networkVirtualAppliances | Yes |
> | networkWatchers | Yes | > | networkWatchers / connectionMonitors | Yes | > | networkWatchers / flowLogs | Yes |
Jump to a resource provider namespace:
> | publicIPPrefixes | Yes | > | routeFilters | Yes | > | routeTables | Yes |
+> | securityPartnerProviders | Yes |
> | serviceEndpointPolicies | Yes | > | trafficManagerGeographicHierarchies | No | > | trafficmanagerprofiles | Yes |
Jump to a resource provider namespace:
> | virtualNetworkTaps | Yes | > | virtualWans | Yes | > | vpnGateways | Yes |
+> | vpnServerConfigurations | Yes |
> | vpnSites | Yes | > | webApplicationFirewallPolicies | Yes |
Jump to a resource provider namespace:
> | Resource type | Complete mode deletion | > | - | -- | > | accounts | Yes |
+> | accounts / kafkaConfigurations | No |
> | deletedAccounts | No | > | getDefaultAccount | No | > | removeDefaultAccount | No |
Jump to a resource provider namespace:
> | resources | Yes | > | saasresources | No |
+## Microsoft.Scheduler
+
+> [!div class="mx-tableFixed"]
+> | Resource type | Complete mode deletion |
+> | - | -- |
+> | jobcollections | Yes |
+ ## Microsoft.Scom > [!div class="mx-tableFixed"]
Jump to a resource provider namespace:
> | applicationWhitelistings | No | > | assessmentMetadata | No | > | assessments | No |
+> | assessments / governanceAssignments | No |
> | assignments | Yes |
+> | attackPaths | No |
> | autoDismissAlertsRules | No | > | automations | Yes | > | AutoProvisioningSettings | No |
Jump to a resource provider namespace:
> | customAssessmentAutomations | Yes | > | customEntityStoreAssignments | Yes | > | dataCollectionAgents | No |
+> | dataScanners | Yes |
> | deviceSecurityGroups | No | > | discoveredSecuritySolutions | No | > | externalSecuritySolutions | No |
+> | governanceRules | No |
> | InformationProtectionPolicies | No | > | ingestionSettings | No | > | insights | No |
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | dryruns | No |
> | linkers | No | ## Microsoft.Services
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | instancePools | Yes |
> | managedInstances | Yes |
+> | managedInstances / administrators | No |
> | managedInstances / databases | Yes |
+> | managedInstances / databases / backupLongTermRetentionPolicies | No |
> | managedInstances / databases / backupShortTermRetentionPolicies | No | > | managedInstances / databases / schemas / tables / columns / sensitivityLabels | No | > | managedInstances / databases / vulnerabilityAssessments | No |
Jump to a resource provider namespace:
> | managedInstances / encryptionProtector | No | > | managedInstances / keys | No | > | managedInstances / restorableDroppedDatabases / backupShortTermRetentionPolicies | No |
+> | managedInstances / sqlAgent | No |
> | managedInstances / vulnerabilityAssessments | No | > | servers | Yes | > | servers / administrators | No |
+> | servers / advisors | No |
+> | servers / auditingSettings | No |
> | servers / communicationLinks | No | > | servers / databases | Yes |
+> | servers / databases / advisors | No |
+> | servers / databases / auditingSettings | No |
+> | servers / databases / backupLongTermRetentionPolicies | No |
+> | servers / databases / backupShortTermRetentionPolicies | No |
+> | servers / databases / dataMaskingPolicies | No |
+> | servers / databases / extensions | No |
+> | servers / databases / securityAlertPolicies | No |
+> | servers / databases / syncGroups | No |
+> | servers / databases / syncGroups / syncMembers | No |
+> | servers / databases / transparentDataEncryption | No |
+> | servers / databases / workloadGroups | No |
+> | servers / elasticpools | Yes |
> | servers / encryptionProtector | No |
+> | servers / failoverGroups | No |
> | servers / firewallRules | No |
+> | servers / jobAgents | Yes |
+> | servers / jobAgents / jobs | No |
+> | servers / jobAgents / jobs / steps | No |
+> | servers / jobAgents / jobs / executions | No |
> | servers / keys | No | > | servers / restorableDroppedDatabases | No | > | servers / serviceobjectives | No | > | servers / tdeCertificates | No |
+> | servers / virtualNetworkRules | No |
> | virtualClusters | No | ## Microsoft.SqlVirtualMachine
Jump to a resource provider namespace:
> [!div class="mx-tableFixed"] > | Resource type | Complete mode deletion | > | - | -- |
+> | dataMovers | Yes |
+> | dataMovers / agents | No |
+> | dataMovers / endpoints | No |
+> | dataMovers / projects | No |
+> | dataMovers / projects / jobDefinitions | No |
+> | dataMovers / projects / jobDefinitions / jobRuns | No |
> | deletedAccounts | No | > | storageAccounts | Yes | > | storageAccounts / blobServices | No |
azure-sql Cost Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/cost-management.md
+ Last updated 06/30/2021
Billing depends on the SKU of your product, the generation hardware of your SKU,
- And for storage: geo-redundant storage (GRS), locally redundant storage (LRS), and zone-redundant storage (ZRS) - It's also possible to have a deprecated SKU from deprecated resource offerings
-To learn more, see [service tiers](service-tiers-general-purpose-business-critical.md).
+For more information, see [vCore-based purchasing model](service-tiers-vcore.md), [DTU-based purchasing model](service-tiers-dtu.md), or [compare purchasing models](purchasing-models.md).
+ The following table shows the most common billing meters and their possible SKUs for **single databases**:
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 09/21/2021 Last updated : 12/15/2021 # What's new in Azure SQL Database? [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-This article summarizes the documentation changes associated with new features and improvements in the recent releases of [Azure SQL Database](https://azure.microsoft.com/products/azure-sql/database/). To learn more about Azure SQL Database, see the [overview](sql-database-paas-overview.md).
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](doc-changes-updates-release-notes-whats-new.md)
+> * [Azure SQL Managed Instance](../managed-instance/doc-changes-updates-release-notes-whats-new.md)
-For Azure SQL Managed Instance, see [What's new](../managed-instance/doc-changes-updates-release-notes-whats-new.md).
+This article summarizes the documentation changes associated with new features and improvements in the recent releases of [Azure SQL Database](https://azure.microsoft.com/products/azure-sql/database/). To learn more about Azure SQL Database, see the [overview](sql-database-paas-overview.md).
## Preview
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-availability-sla.md
Previously updated : 1/20/2022 Last updated : 1/27/2022 # High availability for Azure SQL Database and SQL Managed Instance
Zone-redundant configuration for the general purpose service tier is offered for
Zone-redundant configuration for the general purpose tier has two layers: -- A stateful data layer with the database files (.mdf/.ldf) that are stored in ZRS(zone-redundant storage). Using [ZRS](../../storage/common/storage-redundancy.md) the data and log files are synchronously copied across three physically-isolated Azure availability zones.
+- A stateful data layer with the database files (.mdf/.ldf) that are stored in ZRS (zone-redundant storage). Using [ZRS](../../storage/common/storage-redundancy.md), the data and log files are synchronously copied across three physically isolated Azure availability zones.
- A stateless compute layer that runs the sqlservr.exe process and contains only transient and cached data, such as TempDB, model databases on the attached SSD, and plan cache, buffer pool, and columnstore pool in memory. This stateless node is operated by Azure Service Fabric that initializes sqlservr.exe, controls health of the node, and performs failover to another node if necessary. For zone-redundant serverless and provisioned general purpose databases, nodes with spare capacity are readily available in other Availability Zones for failover. The zone-redundant version of the high availability architecture for the general purpose service tier is illustrated by the following diagram:
The zone-redundant version of the high availability architecture for the general
![Zone redundant configuration for general purpose](./media/high-availability-sla/zone-redundant-for-general-purpose.png)

> [!IMPORTANT]
-> Zone-redundant configuration is only available when the Gen5 compute hardware is selected. This feature is not available in SQL Managed Instance. Zone-redundant configuration for serverless and provisioned general purpose tier is only available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia East, Japan East, UK South, and France Central.
+> Zone-redundant configuration is not available in SQL Managed Instance. In SQL Database this feature is only available when the Gen5 compute hardware is selected. Additionally, for serverless and provisioned general purpose tier, the zone-redundant configuration is only available in the following regions: East US, East US 2, West US 2, North Europe, West Europe, Southeast Asia, Australia East, Japan East, UK South, and France Central.
> [!NOTE]
> General Purpose databases with a size of 80 vcores may experience performance degradation with zone-redundant configuration. Additionally, operations such as backup, restore, database copy, setting up Geo-DR relationships, and downgrading a zone-redundant database from Business Critical to General Purpose may experience slower performance for any single database larger than 1 TB. See the [latency documentation on scaling a database](single-database-scale.md) for more information.
By default, the cluster of nodes for the premium availability model is created i
Because the zone-redundant databases have replicas in different datacenters with some distance between them, the increased network latency may increase the commit time and thus impact the performance of some OLTP workloads. You can always return to the single-zone configuration by disabling the zone-redundancy setting. This process is an online operation similar to a regular service tier upgrade. At the end of the process, the database or pool is migrated from a zone-redundant ring to a single-zone ring, or vice versa.

> [!IMPORTANT]
-> When using the Business Critical tier, zone-redundant configuration is only available when the Gen5 compute hardware is selected. For up to date information about the regions that support zone-redundant databases, see [Services support by region](../../availability-zones/az-region.md).
-
-> [!NOTE]
-> This feature is not available in SQL Managed Instance.
+> This feature is not available in SQL Managed Instance. In SQL Database, when using the Business Critical tier, zone-redundant configuration is only available when the Gen5 compute hardware is selected. For up-to-date information about the regions that support zone-redundant databases, see [Services support by region](../../availability-zones/az-region.md).
The zone-redundant version of the high availability architecture is illustrated by the following diagram:
azure-sql High Cpu Diagnose Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/high-cpu-diagnose-troubleshoot.md
You can quickly identify the vCore count for a database in the Azure portal if y
For databases in the [serverless](serverless-tier-overview.md) compute tier, the vCore count is always equivalent to the max vCore setting for the database. The vCore count appears in the **pricing tier** listed for the database on its **Overview** page. For example, a database's pricing tier might be 'General Purpose: Serverless, Gen5, 16 vCores'.
-If you're using a database under the [DTU-based purchase model](service-tiers-dtu.md), you will need to use Transact-SQL to query the database's vCore count.
+If you're using a database under the [DTU-based purchasing model](service-tiers-dtu.md), you will need to use Transact-SQL to query the database's vCore count.
### Identify vCore count with Transact-SQL
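The article's own query isn't captured in this digest. As a hedged sketch, one documented way to count the logical CPUs visible to a database is to count the visible online schedulers in `sys.dm_os_schedulers`; treat this as an illustration rather than the article's exact example.

```sql
-- Sketch: count schedulers that are visible and online.
-- On Azure SQL Database, this approximates the vCore count
-- available to the current database.
SELECT COUNT(*) AS vCore_count
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';
```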
Consider experimenting with small changes in the MAXDOP configuration at the dat
You may find that your workload's queries and indexes are properly tuned, or that performance tuning requires changes that you cannot make in the short term due to internal processes or other reasons. Adding more CPU resources may be beneficial for these databases. You can [scale database resources with minimal downtime](scale-resources.md).
-You can add more CPU resources to your Azure SQL Database by configuring the vCore count or the [hardware generation](service-tiers-sql-database-vcore.md#hardware-generations) for databases using the [vCore purchase model](service-tiers-sql-database-vcore.md).
+You can add more CPU resources to your Azure SQL Database by configuring the vCore count or the [hardware generation](service-tiers-sql-database-vcore.md#hardware-generations) for databases using the [vCore purchasing model](service-tiers-sql-database-vcore.md).
-Under the [DTU-based purchase model](service-tiers-dtu.md), you can raise your service tier and increase the number of database transaction units (DTUs). A DTU represents a blended measure of CPU, memory, reads, and writes. One benefit of the vCore purchase model is that it allows more granular control over the hardware in use and the number of vCores. You can [migrate Azure SQL Database from the DTU-based model to the vCore-based model](migrate-dtu-to-vcore.md) to transition between purchase models.
+Under the [DTU-based purchasing model](service-tiers-dtu.md), you can raise your service tier and increase the number of database transaction units (DTUs). A DTU represents a blended measure of CPU, memory, reads, and writes. One benefit of the vCore purchasing model is that it allows more granular control over the hardware in use and the number of vCores. You can [migrate Azure SQL Database from the DTU-based model to the vCore-based model](migrate-dtu-to-vcore.md) to transition between purchasing models.
## Next steps
azure-sql Migrate Dtu To Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/migrate-dtu-to-vcore.md
To choose the service objective, or compute size, for the migrated database in t
> [!TIP]
> This rule is approximate because it does not consider the hardware generation used for the DTU database or elastic pool.
-In the DTU model, the system may select any available [hardware generation](purchasing-models.md#hardware-generations-in-the-dtu-based-purchasing-model) for your database or elastic pool. Further, in the DTU model you have only indirect control over the number of vCores (logical CPUs) by choosing higher or lower DTU or eDTU values.
+In the DTU model, the system may select any available [hardware generation](service-tiers-dtu.md#hardware-generations) for your database or elastic pool. Further, in the DTU model you have only indirect control over the number of vCores (logical CPUs) by choosing higher or lower DTU or eDTU values.
In the vCore model, customers must make an explicit choice of both the hardware generation and the number of vCores (logical CPUs). While the DTU model does not offer these choices, the hardware generation and the number of logical CPUs used for every database and elastic pool are exposed via dynamic management views. This makes it possible to determine the matching vCore service objective more precisely.
azure-sql Purchasing Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/purchasing-models.md
Title: Purchasing models-
-description: Learn about the purchasing models that are available for Azure SQL Database and Azure SQL Managed Instance.
+
+description: "Learn about the purchasing models that are available for Azure SQL Database: the vCore purchasing model and the DTU purchasing model."
-+ ms.devlang:
Previously updated : 05/28/2020
Last updated : 02/02/2022
-# Choose between the vCore and DTU purchasing models - Azure SQL Database and SQL Managed Instance
+# Compare vCore and DTU-based purchasing models of Azure SQL Database
-Azure SQL Database and Azure SQL Managed Instance let you easily purchase a fully managed platform as a service (PaaS) database engine that fits your performance and cost needs. Depending on the deployment model you've chosen for Azure SQL Database, you can select the purchasing model that works for you:
+Azure SQL Database lets you easily purchase a fully managed platform as a service (PaaS) database engine that fits your performance and cost needs. Depending on the deployment model you've chosen for Azure SQL Database, you can select the purchasing model that works for you:
-- [Virtual core (vCore)-based purchasing model](service-tiers-vcore.md) (recommended). This purchasing model provides a choice between a provisioned compute tier and a serverless compute tier. With the provisioned compute tier, you choose the exact amount of compute resources that are always provisioned for your workload. With the serverless compute tier, you specify the autoscaling of the compute resources over a configurable compute range. With this compute tier, you can also automatically pause and resume the database based on workload activity. The vCore unit price per unit of time is lower in the provisioned compute tier than it is in the serverless compute tier.
+- [Virtual core (vCore)-based purchasing model](service-tiers-sql-database-vcore.md) (recommended). This purchasing model provides a choice between a provisioned compute tier and a serverless compute tier. With the provisioned compute tier, you choose the exact amount of compute resources that are always provisioned for your workload. With the serverless compute tier, you specify the autoscaling of the compute resources over a configurable compute range. The serverless compute tier automatically pauses databases during inactive periods when only storage is billed and automatically resumes databases when activity returns. The vCore unit price per unit of time is lower in the provisioned compute tier than it is in the serverless compute tier. The [Hyperscale service tier](service-tier-hyperscale.md) is available for single databases that are using the [vCore-based purchasing model](service-tiers-vcore.md).
- [Database transaction unit (DTU)-based purchasing model](service-tiers-dtu.md). This purchasing model provides bundled compute and storage packages balanced for common workloads.
+## Purchasing models
There are two purchasing models:

- [vCore-based purchasing model](service-tiers-vcore.md) is available for both [Azure SQL Database](sql-database-paas-overview.md) and [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md). The [Hyperscale service tier](service-tier-hyperscale.md) is available for single databases that are using the [vCore-based purchasing model](service-tiers-vcore.md).
- [DTU-based purchasing model](service-tiers-dtu.md) is available for [Azure SQL Database](single-database-manage.md).
-The following table and chart compare and contrast the vCore-based and the DTU-based purchasing models:
+The following table and chart compare and contrast the vCore-based and the DTU-based purchasing models:
|**Purchasing model**|**Description**|**Best for**|
||||
-|DTU-based|This model is based on a bundled measure of compute, storage, and I/O resources. Compute sizes are expressed in DTUs for single databases and in elastic database transaction units (eDTUs) for elastic pools. For more information about DTUs and eDTUs, see [What are DTUs and eDTUs?](purchasing-models.md#dtu-based-purchasing-model).|Customers who want simple, preconfigured resource options|
-|vCore-based|This model allows you to independently choose and scale compute and storage resources. The vCore-based purchasing model allows you to use [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) for SQL Server to save costs. Newer capabilities (e.g. Hyperscale, serverless) are only available in the vCore model.|Customers who value flexibility, control, and transparency|
+|DTU-based|This model is based on a bundled measure of compute, storage, and I/O resources. Compute sizes are expressed in DTUs for single databases and in elastic database transaction units (eDTUs) for elastic pools. For more information about DTUs and eDTUs, see [What are DTUs and eDTUs?](purchasing-models.md#dtu-purchasing-model).|Customers who want simple, preconfigured resource options|
+|vCore-based|This model allows you to independently choose compute and storage resources. The vCore-based purchasing model also allows you to use [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/) for SQL Server to save costs.|Customers who value flexibility, control, and transparency|
||||
-![Pricing model comparison](./media/purchasing-models/pricing-model.png)
-Want to optimize and save on your cloud spending?
-
-## Compute costs
-
-### Provisioned compute costs
-
-In the provisioned compute tier, the compute cost reflects the total compute capacity that is provisioned for the application.
-
-In the Business Critical service tier, we automatically allocate at least three replicas. To reflect this additional allocation of compute resources, the price in the vCore-based purchasing model is approximately 2.7 times higher in the Business Critical service tier than it is in the General Purpose service tier. Likewise, the higher storage price per GB in the Business Critical service tier reflects the higher IO limits and lower latency of the SSD storage.
-
-The cost of backup storage is the same for the Business Critical service tier and the General Purpose service tier because both tiers use standard storage for backups.
-
-### Serverless compute costs
-
-For a description of how compute capacity is defined and costs are calculated for the serverless compute tier, see [SQL Database serverless tier](serverless-tier-overview.md).
-
-## Storage costs
-
-Different types of storage are billed differently. For data storage, you're charged for the provisioned storage based upon the maximum database or pool size you select. The cost doesn't change unless you reduce or increase that maximum. Backup storage is associated with automated backups of your instance and is allocated dynamically. Increasing your backup-retention period increases the backup storage that's consumed by your instance.
-
-By default, seven days of automated backups of your databases are copied to a read-access geo-redundant storage (RA-GRS) standard Blob storage account. This storage is used by weekly full backups, daily differential backups, and transaction log backups, which are copied every five minutes. The size of the transaction logs depends on the rate of change of the database. A minimum storage amount equal to 100 percent of the database size is provided at no extra charge. Additional consumption of backup storage is charged in GB per month.
-
-For more information about storage prices, see the [pricing](https://azure.microsoft.com/pricing/details/sql-database/single/) page.
-
-## vCore-based purchasing model
+## vCore purchasing model
A virtual core (vCore) represents a logical CPU and offers you the option to choose between generations of hardware and the physical characteristics of the hardware (for example, the number of cores, the memory, and the storage size). The vCore-based purchasing model gives you flexibility, control, transparency of individual resource consumption, and a straightforward way to translate on-premises workload requirements to the cloud. This model allows you to choose compute, memory, and storage resources based on your workload needs.
-In the vCore-based purchasing model, you can choose between the [General Purpose](high-availability-sla.md#basic-standard-and-general-purpose-service-tier-locally-redundant-availability) and [Business Critical](high-availability-sla.md#premium-and-business-critical-service-tier-locally-redundant-availability) service tiers for SQL Database and SQL Managed Instance. For single databases, you can also choose the [Hyperscale service tier](service-tier-hyperscale.md).
+In the vCore-based purchasing model for SQL Database, you can choose between the General Purpose and Business Critical service tiers. Review [service tiers](service-tiers-sql-database-vcore.md#service-tiers) to learn more. For single databases, you can also choose the [Hyperscale service tier](service-tier-hyperscale.md).
-The vCore-based purchasing model lets you independently choose compute and storage resources, match on-premises performance, and optimize price. In the vCore-based purchasing model, you pay for:
+In the vCore-based purchasing model, you pay for:
- Compute resources (the service tier + the number of vCores and the amount of memory + the generation of hardware).
- The type and amount of data and log storage.
-- Backup storage (RA-GRS).
-
-> [!IMPORTANT]
-> Compute resources, I/O, and data and log storage are charged per database or elastic pool. Backup storage is charged per each database. For more information about SQL Managed Instance charges, see [SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md).
-> **Region limitations:** For the current list of supported regions, see [products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=sql-database&regions=all). To create a managed instance in a region that currently isn't supported, [send a support request via the Azure portal](quota-increase-request.md).
+- Backup storage.
-If your database consumes more than 300 DTUs, converting to the vCore-based purchasing model might reduce your costs. You can convert by using your API of choice or by using the Azure portal, with no downtime. However, conversion isn't required and isn't done automatically. If the DTU-based purchasing model meets your performance and business requirements, you should continue using it.
-To convert from the DTU-based purchasing model to the vCore-based purchasing model, see [Migrate from DTU to vCore](migrate-dtu-to-vcore.md).
-
-## DTU-based purchasing model
-
-A database transaction unit (DTU) represents a blended measure of CPU, memory, reads, and writes. The DTU-based purchasing model offers a set of preconfigured bundles of compute resources and included storage to drive different levels of application performance. If you prefer the simplicity of a preconfigured bundle and fixed payments each month, the DTU-based model might be more suitable for your needs.
-
-In the DTU-based purchasing model, you can choose between the basic, standard, and premium service tiers for Azure SQL Database. The DTU-based purchasing model isn't available for Azure SQL Managed Instance.
-
-### Database transaction units (DTUs)
-
-For a single database at a specific compute size within a [service tier](single-database-scale.md), Azure guarantees a certain level of resources for that database (independent of any other database in the Azure cloud). This guarantee provides a predictable level of performance. The amount of resources allocated for a database is calculated as a number of DTUs and is a bundled measure of compute, storage, and I/O resources.
-
-The ratio among these resources is originally determined by an [online transaction processing (OLTP) benchmark workload](service-tiers-dtu.md) designed to be typical of real-world OLTP workloads. When your workload exceeds the amount of any of these resources, your throughput is throttled, resulting in slower performance and time-outs.
-
-The resources used by your workload don't impact the resources available to other databases in the Azure cloud. Likewise, the resources used by other workloads don't impact the resources available to your database.
-
-![Bounding box](./media/purchasing-models/bounding-box.png)
+## DTU purchasing model
-DTUs are most useful for understanding the relative resources that are allocated for databases at different compute sizes and service tiers. For example:
+The DTU-based purchasing model uses a database transaction unit (DTU) to calculate and bundle compute costs. A database transaction unit (DTU) represents a blended measure of CPU, memory, reads, and writes. The DTU-based purchasing model offers a set of preconfigured bundles of compute resources and included storage to drive different levels of application performance. If you prefer the simplicity of a preconfigured bundle and fixed payments each month, the DTU-based model might be more suitable for your needs.
-- Doubling the DTUs by increasing the compute size of a database equates to doubling the set of resources available to that database.
-- A premium service tier P11 database with 1750 DTUs provides 350 times more DTU compute power than a basic service tier database with 5 DTUs.
+In the DTU-based purchasing model, you can choose between the basic, standard, and premium service tiers for Azure SQL Database. Review [DTU service tiers](service-tiers-dtu.md#compare-service-tiers) to learn more.
-To gain deeper insight into the resource (DTU) consumption of your workload, use [query-performance insights](query-performance-insight-use.md) to:
-- Identify the top queries by CPU/duration/execution count that can potentially be tuned for improved performance. For example, an I/O-intensive query might benefit from [in-memory optimization techniques](../in-memory-oltp-overview.md) to make better use of the available memory at a certain service tier and compute size.
- Drill down into the details of a query to view its text and its history of resource usage.
- Access performance-tuning recommendations that show actions taken by [SQL Database Advisor](database-advisor-implement-performance-recommendations.md).
-
-### Elastic database transaction units (eDTUs)
-
-For databases that are always available, rather than provide a dedicated set of resources (DTUs) that might not always be needed, you can place these databases into an [elastic pool](elastic-pool-overview.md). The databases in an elastic pool are on a single server and share a pool of resources.
-
-The shared resources in an elastic pool are measured by elastic database transaction units (eDTUs). Elastic pools provide a simple, cost-effective solution to manage performance goals for multiple databases that have widely varying and unpredictable usage patterns. An elastic pool guarantees that all the resources can't be consumed by one database in the pool, while ensuring that each database in the pool always has a minimum amount of necessary resources available.
+To convert from the DTU-based purchasing model to the vCore-based purchasing model, see [Migrate from DTU to vCore](migrate-dtu-to-vcore.md).
-A pool is given a set number of eDTUs for a set price. In the elastic pool, individual databases can autoscale within the configured boundaries. A database under a heavier load will consume more eDTUs to meet demand. Databases under lighter loads will consume fewer eDTUs. Databases with no load will consume no eDTUs. Because resources are provisioned for the entire pool, rather than per database, elastic pools simplify your management tasks and provide a predictable budget for the pool.
-You can add additional eDTUs to an existing pool with no database downtime and with no impact on the databases in the pool. Similarly, if you no longer need extra eDTUs, remove them from an existing pool at any time. You can also add databases to or subtract databases from a pool at any time. To reserve eDTUs for other databases, limit the number of eDTUs a database can use under a heavy load. If a database consistently underuses resources, move it out of the pool and configure it as a single database with a predictable amount of required resources.
+## Compute costs
-### Determine the number of DTUs needed by a workload
+Compute costs are calculated differently based on each purchasing model.
-If you want to migrate an existing on-premises or SQL Server virtual machine workload to SQL Database, use the [DTU calculator](https://dtucalculator.azurewebsites.net/) to approximate the number of DTUs needed. For an existing SQL Database workload, use [query-performance insights](query-performance-insight-use.md) to understand your database-resource consumption (DTUs) and gain deeper insights for optimizing your workload. The [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) dynamic management view (DMV) lets you view resource consumption for the last hour. The [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) catalog view displays resource consumption for the last 14 days, but at a lower fidelity of five-minute averages.
+### DTU compute costs
-### Determine DTU utilization
+In the DTU purchasing model, DTUs are offered in preconfigured bundles of compute resources and included storage to drive different levels of application performance. You are billed by the number of DTUs you allocate to your database for your application.
-To determine the average percentage of DTU/eDTU utilization relative to the DTU/eDTU limit of a database or an elastic pool, use the following formula:
+### vCore compute costs
-`avg_dtu_percent = MAX(avg_cpu_percent, avg_data_io_percent, avg_log_write_percent)`
+In the vCore-based purchasing model, choose between the provisioned compute tier and the [serverless compute tier](serverless-tier-overview.md). In the provisioned compute tier, the compute cost reflects the total compute capacity that is provisioned for the application. In the serverless compute tier, compute resources are autoscaled based on workload capacity, and you are billed per second for the amount of compute used.
-The input values for this formula can be obtained from [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database), [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database), and [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) DMVs. In other words, to determine the percentage of DTU/eDTU utilization toward the DTU/eDTU limit of a database or an elastic pool, pick the largest percentage value from the following: `avg_cpu_percent`, `avg_data_io_percent`, and `avg_log_write_percent` at a given point in time.
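The `avg_dtu_percent` formula above maps directly to a query against the documented `sys.dm_db_resource_stats` DMV. This is a sketch, not the article's own example; the column names are as documented, but verify the result against your database's service tier.

```sql
-- For each ~15-second interval over roughly the last hour, take the
-- largest of the CPU, data IO, and log write percentages as the
-- DTU utilization for that interval.
SELECT end_time,
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_data_io_percent),
                     (avg_log_write_percent)) AS resources(v)) AS avg_dtu_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;
```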
+For single databases, compute resources, I/O, and data and log storage are charged per database. For elastic pools, these resources are charged per pool. However, backup storage is always charged per database.
-> [!NOTE]
-> The DTU limit of a database is determined by CPU, reads, writes, and memory available to the database. However, because the SQL Database engine typically uses all available memory for its data cache to improve performance, the `avg_memory_usage_percent` value will usually be close to 100 percent, regardless of current database load. Therefore, even though memory does indirectly influence the DTU limit, it is not used in the DTU utilization formula.
+Since three additional replicas are automatically allocated in the Business Critical service tier, the price is approximately 2.7 times higher than it is in the General Purpose service tier. Likewise, the higher storage price per GB in the Business Critical service tier reflects the higher IO limits and lower latency of the local SSD storage.
-### Workloads that benefit from an elastic pool of resources
+## Storage costs
-Pools are well suited for databases with a low resource-utilization average and relatively infrequent utilization spikes. For more information, see [When should you consider a SQL Database elastic pool?](elastic-pool-overview.md).
+Storage costs are calculated differently based on each purchasing model.
-### Hardware generations in the DTU-based purchasing model
+### DTU storage costs
-In the DTU-based purchasing model, customers cannot choose the hardware generation used for their databases. While a given database usually stays on a specific hardware generation for a long time (commonly for multiple months), there are certain events that can cause a database to be moved to another hardware generation.
+Storage is included in the price of the DTU. It's possible to add extra storage in the standard and premium tiers. See the [Azure SQL Database pricing options](https://azure.microsoft.com/pricing/details/sql-database/single/) for details on provisioning extra storage. [Long-term backup retention](long-term-retention-overview.md) is not included, and is billed separately.
-For example, a database can be moved to a different hardware generation if it's scaled up or down to a different service objective, or if the current infrastructure in a datacenter is approaching its capacity limits, or if the currently used hardware is being decommissioned due to its end of life.
+### vCore storage costs
-If a database is moved to different hardware, workload performance can change. The DTU model guarantees that the throughput and response time of the [DTU benchmark](./service-tiers-dtu.md#dtu-benchmark) workload will remain substantially identical as the database moves to a different hardware generation, as long as its service objective (the number of DTUs) stays the same.
+Different types of storage are billed differently. For data storage, you're charged for the provisioned storage based upon the maximum database or pool size you select. The cost doesn't change unless you reduce or increase that maximum. Backup storage is associated with automated backups of your databases and is allocated dynamically. Increasing your backup retention period may increase the backup storage that's consumed by your databases.
-However, across the wide spectrum of customer workloads running in Azure SQL Database, the impact of using different hardware for the same service objective can be more pronounced. Different workloads will benefit from different hardware configuration and features. Therefore, for workloads other than the DTU benchmark, it's possible to see performance differences if the database moves from one hardware generation to another.
+By default, seven days of automated backups of your databases are copied to a storage account. This storage is used by full backups, differential backups, and transaction log backups. The size of differential and transaction log backups depends on the rate of change of the database. A minimum storage amount equal to 100 percent of the maximum data size for the database is provided at no extra charge. Additional consumption of backup storage is charged in GB per month.
-For example, an application that is sensitive to network latency can see better performance on Gen5 hardware vs. Gen4 due to the use of Accelerated Networking in Gen5, but an application using intensive read IO can see better performance on Gen4 hardware versus Gen5 due to a higher memory per core ratio on Gen4.
+The cost of backup storage is the same for the Business Critical service tier and the General Purpose service tier because both tiers use standard storage for backups.
-Customers with workloads that are sensitive to hardware changes or customers who wish to control the choice of hardware generation for their database can use the [vCore](service-tiers-vcore.md) model to choose their preferred hardware generation during database creation and scaling. In the vCore model, resource limits of each service objective on each hardware generation are documented, for both [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md). For more information about hardware generations in the vCore model, see [Hardware generations for SQL Database](./service-tiers-sql-database-vcore.md#hardware-generations) or [Hardware generations for SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations).
+For more information about storage prices, see the [pricing](https://azure.microsoft.com/pricing/details/sql-database/single/) page.
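The billing rule described above (an included allowance equal to 100 percent of the maximum data size, with only the excess billed per GB/month) can be sketched as follows. `billable_backup_storage_gb` is a hypothetical helper for illustration, not a service API:

```python
def billable_backup_storage_gb(max_data_size_gb: float, backup_used_gb: float) -> float:
    """Return the backup storage (GB) that incurs a charge, assuming the
    documented rule: an allowance equal to 100 percent of the database's
    maximum data size is included at no extra charge."""
    included_gb = max_data_size_gb  # 100% of max data size is free
    return max(0.0, backup_used_gb - included_gb)
```

For example, a database with a 250 GB maximum data size consuming 310 GB of backup storage would be billed for 60 GB.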
## Frequently asked questions (FAQs)
azure-sql Resource Limits Dtu Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-dtu-elastic-pools.md
For the same number of DTUs, resources provided to an elastic pool may exceed th
If all DTUs of an elastic pool are used, then each database in the pool receives an equal amount of resources to process queries. The SQL Database service provides resource sharing fairness between databases by ensuring equal slices of compute time. Elastic pool resource sharing fairness is in addition to any amount of resource otherwise guaranteed to each database when the DTU min per database is set to a non-zero value. > [!NOTE]
-> For `tempdb` limits, see [tempdb limits](/sql/relational-databases/databases/tempdb-database#tempdb-database-in-sql-database).
->
> For additional information on storage limits in the Premium service tier, see [Storage space governance](resource-limits-logical-server.md#storage-space-governance). ### Database properties for pooled databases
While the per database properties are expressed in DTUs, they also govern consum
Min and max per database DTU values apply to resource consumption by user workloads, but not to resource consumption by internal processes. For example, for a database with a per database max DTU set to half of the pool eDTU, user workload cannot consume more than one half of the buffer pool memory. However, this database can still take advantage of pages in the buffer pool that were loaded by internal processes. For more information, see [Resource consumption by user workloads and internal processes](resource-limits-logical-server.md#resource-consumption-by-user-workloads-and-internal-processes).
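The proportionality described above can be sketched as a one-line rule. `user_memory_cap_gb` is a hypothetical illustration (not a service API): the user-workload memory cap scales with the per-database max DTU as a fraction of the pool eDTU.

```python
def user_memory_cap_gb(pool_buffer_pool_gb: float, pool_edtu: int, per_db_max_dtu: int) -> float:
    # User workload in one database may consume at most the fraction of the
    # pool's buffer pool matching its per-database max DTU share of pool eDTU.
    return pool_buffer_pool_gb * (per_db_max_dtu / pool_edtu)
```

With per-database max DTU set to half of the pool eDTU, the cap is half of the buffer pool, as in the example above.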
+## Tempdb sizes
+
+The following table lists tempdb sizes for elastic pools in Azure SQL Database:
+
+|Service-level objective|Maximum `tempdb` data file size (GB)|Number of `tempdb` data files|Maximum `tempdb` data size (GB)|
+||:|:|:|
+|Basic Elastic Pools (all DTU configurations)|13.9|12|166.7|
+|Standard Elastic Pools (50 eDTU)|13.9|12|166.7|
+|Standard Elastic Pools (100 eDTU)|32|1|32|
+|Standard Elastic Pools (200 eDTU)|32|2|64|
+|Standard Elastic Pools (300 eDTU)|32|3|96|
+|Standard Elastic Pools (400 eDTU)|32|3|96|
+|Standard Elastic Pools (800 eDTU)|32|6|192|
+|Standard Elastic Pools (1200 eDTU)|32|10|320|
+|Standard Elastic Pools (1600-3000 eDTU)|32|12|384|
+|Premium Elastic Pools (all DTU configurations)|13.9|12|166.7|
+||||
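A minimal lookup sketch over a few rows of the table above (values copied from the table; the dictionary keys and helper are illustrative, not an official API):

```python
# (max tempdb data file size GB, number of files, max total tempdb data GB)
ELASTIC_POOL_TEMPDB_LIMITS = {
    "Basic (all eDTU)": (13.9, 12, 166.7),
    "Standard 100 eDTU": (32, 1, 32),
    "Standard 800 eDTU": (32, 6, 192),
    "Standard 1200 eDTU": (32, 10, 320),
    "Premium (all eDTU)": (13.9, 12, 166.7),
}

def max_tempdb_gb(objective: str) -> float:
    """Look up the maximum total tempdb data size for a pool objective."""
    return ELASTIC_POOL_TEMPDB_LIMITS[objective][2]
```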
+ ## Next steps * For vCore resource limits for a single database, see [resource limits for single databases using the vCore purchasing model](resource-limits-vcore-single-databases.md)
azure-sql Resource Limits Dtu Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-dtu-single-databases.md
Previously updated : 01/18/2022 Last updated : 01/31/2022 # Resource limits for single databases using the DTU purchasing model - Azure SQL Database
The following tables show the resources available for a single database at each
> More than 1 TB of storage in the Premium tier is currently available in all regions except: China East, China North, Germany Central, and Germany Northeast. In these regions, the storage max in the Premium tier is limited to 1 TB. For more information, see [P11-P15 current limitations](single-database-scale.md#p11-and-p15-constraints-when-max-size-greater-than-1-tb). > [!NOTE]
-> For `tempdb` limits, see [tempdb limits](/sql/relational-databases/databases/tempdb-database#tempdb-database-in-sql-database).
->
> For additional information on storage limits in the Premium service tier, see [Storage space governance](resource-limits-logical-server.md#storage-space-governance).
+## Tempdb sizes
+
+The following table lists tempdb sizes for single databases in Azure SQL Database:
+
+|Service-level objective|Maximum `tempdb` data file size (GB)|Number of `tempdb` data files|Maximum `tempdb` data size (GB)|
+||:|:|:|
+|Basic|13.9|1|13.9|
+|S0|13.9|1|13.9|
+|S1|13.9|1|13.9|
+|S2|13.9|1|13.9|
+|S3|32|1|32|
+|S4|32|2|64|
+|S6|32|3|96|
+|S7|32|6|192|
+|S9|32|12|384|
+|S12|32|12|384|
+|P1|13.9|12|166.7|
+|P2|13.9|12|166.7|
+|P4|13.9|12|166.7|
+|P6|13.9|12|166.7|
+|P11|13.9|12|166.7|
+|P15|13.9|12|166.7|
+||||
+ ## Next steps - For vCore resource limits for a single database, see [resource limits for single databases using the vCore purchasing model](resource-limits-vcore-single-databases.md)
azure-sql Resource Limits Logical Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-logical-server.md
Previously updated : 01/18/2022 Last updated : 01/31/2022 # Resource management in Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-This article provides an overview of resource management in Azure SQL Database. It provides information on what happens when resource limits are reached, and describes resource governance mechanisms that are used to enforce these limits.
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](resource-limits-logical-server.md)
+> * [Azure SQL Managed Instance](../managed-instance/resource-limits.md)
+
+This article provides an overview of resource management in Azure SQL Database. It provides information on what happens when resource limits are reached, and describes resource governance mechanisms that are used to enforce these limits.
For specific resource limits per pricing tier (also known as service objective) for single databases, refer to either [DTU-based single database resource limits](resource-limits-dtu-single-databases.md) or [vCore-based single database resource limits](resource-limits-vcore-single-databases.md). For elastic pool resource limits, refer to either [DTU-based elastic pool resource limits](resource-limits-dtu-elastic-pools.md) or [vCore-based elastic pool resource limits](resource-limits-vcore-elastic-pools.md). > [!TIP]
-> For Azure SQL Managed Instance limits, see [resource limits for managed instances](../managed-instance/resource-limits.md).
->
> For Azure Synapse Analytics dedicated SQL pool limits, see [capacity limits](../../synapse-analytics/sql-data-warehouse/sql-data-warehouse-service-capacity-limits.md) and [memory and concurrency limits](../../synapse-analytics/sql-data-warehouse/memory-concurrency-limits.md). ## Logical server limits
+<!--
+vCore resource limits are listed in the following articles, please be sure to update all of them:
+/database/resource-limits-vcore-single-databases.md
+/database/resource-limits-vcore-elastic-pools.md
+/database/resource-limits-logical-server.md
+/database/service-tier-general-purpose.md
+/database/service-tier-business-critical.md
+/database/service-tier-hyperscale.md
+/managed-instance/resource-limits.md
+-->
 | Resource | Limit |
 | :--- | :--- |
 | Databases per [logical server](logical-servers.md) | 5000 |
Because all data is copied to local storage volumes on different machines, movin
> [!NOTE] > Database movement due to insufficient local storage only occurs in the Premium or Business Critical service tiers. It does not occur in the Hyperscale, General Purpose, Standard, and Basic service tiers, because in those tiers data files are not stored in local storage.
+## Tempdb sizes
+
+Size limits for tempdb in Azure SQL Database depend on the purchasing and deployment model.
+
+To learn more, review tempdb size limits for:
+- vCore purchasing model: [single databases](resource-limits-vcore-single-databases.md), [pooled databases](resource-limits-vcore-elastic-pools.md)
+- DTU purchasing model: [single databases](resource-limits-dtu-single-databases.md#tempdb-sizes), [pooled databases](resource-limits-dtu-elastic-pools.md#tempdb-sizes).
+ ## Next steps - For information about general Azure limits, see [Azure subscription and service limits, quotas, and constraints](../../azure-resource-manager/management/azure-subscription-service-limits.md).-- For information about DTUs and eDTUs, see [DTUs and eDTUs](purchasing-models.md#dtu-based-purchasing-model).-- For information about tempdb size limits, see [TempDB in Azure SQL Database](/sql/relational-databases/databases/tempdb-database#tempdb-database-in-sql-database).
+- For information about DTUs and eDTUs, see [DTUs and eDTUs](purchasing-models.md#dtu-purchasing-model).
+- For information about tempdb size limits, see [single vCore databases](resource-limits-vcore-single-databases.md), [pooled vCore databases](resource-limits-vcore-elastic-pools.md), [single DTU databases](resource-limits-dtu-single-databases.md#tempdb-sizes), and [pooled DTU databases](resource-limits-dtu-elastic-pools.md#tempdb-sizes).
azure-sql Resource Limits Vcore Elastic Pools https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-elastic-pools.md
If all vCores of an elastic pool are busy, then each database in the pool receiv
## General purpose - provisioned compute - Gen4
+<!--
+vCore resource limits are listed in the following articles, please be sure to update all of them:
+/database/resource-limits-vcore-single-databases.md
+/database/resource-limits-vcore-elastic-pools.md
+/database/resource-limits-logical-server.md
+/database/service-tier-general-purpose.md
+/database/service-tier-business-critical.md
+/database/service-tier-hyperscale.md
+/managed-instance/resource-limits.md
+-->
+ > [!IMPORTANT] > New Gen4 databases are no longer supported in the Australia East or Brazil South regions.
azure-sql Resource Limits Vcore Single Databases https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/resource-limits-vcore-single-databases.md
This article provides the detailed resource limits for single databases in Azure
* For DTU purchasing model limits for single databases on a server, see [Overview of resource limits on a server](resource-limits-logical-server.md). * For DTU purchasing model resource limits for Azure SQL Database, see [DTU resource limits single databases](resource-limits-dtu-single-databases.md) and [DTU resource limits elastic pools](resource-limits-dtu-elastic-pools.md).
-* For vCore resource limits, see [vCore resource limits - Azure SQL Database](resource-limits-vcore-single-databases.md) and [vCore resource limits - elastic pools](resource-limits-vcore-elastic-pools.md).
+* For elastic pool vCore resource limits, [vCore resource limits - elastic pools](resource-limits-vcore-elastic-pools.md).
* For more information regarding the different purchasing models, see [Purchasing models and service tiers](purchasing-models.md). > [!IMPORTANT]
You can set the service tier, compute size (service objective), and storage amou
## General purpose - serverless compute - Gen5
+<!--
+vCore resource limits are listed in the following articles, please be sure to update all of them:
+/database/resource-limits-vcore-single-databases.md
+/database/resource-limits-vcore-elastic-pools.md
+/database/resource-limits-logical-server.md
+/database/service-tier-general-purpose.md
+/database/service-tier-business-critical.md
+/database/service-tier-hyperscale.md
+/managed-instance/resource-limits.md
+-->
+ The [serverless compute tier](serverless-tier-overview.md) is currently available on Gen5 hardware only. ### Gen5 compute generation (part 1)
The [serverless compute tier](serverless-tier-overview.md) is currently availabl
|Read Scale-out|N/A|N/A|N/A|N/A|N/A| |Included backup storage|1X DB size|1X DB size|1X DB size|1X DB size|1X DB size|
-<sup>1</sup> Service objectives with smaller max vcore configurations may have insufficient memory for creating and using columnstore indexes. If encountering performance problems with columnstore, increase the max vcore configuration to increase the max memory available.
+<sup>1</sup> Service objectives with smaller max vCore configurations may have insufficient memory for creating and using columnstore indexes. If encountering performance problems with columnstore, increase the max vCore configuration to increase the max memory available.
<sup>2</sup> For documented max data size values. Reducing max data size reduces max log size proportionally.
azure-sql Saas Dbpertenant Performance Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/saas-dbpertenant-performance-monitoring.md
Last updated 01/25/2019
In this tutorial, several key performance management scenarios used in SaaS applications are explored. Using a load generator to simulate activity across all tenant databases, the built-in monitoring and alerting features of SQL Database and elastic pools are demonstrated.
-The Wingtip Tickets SaaS Database Per Tenant app uses a single-tenant data model, where each venue (tenant) has their own database. Like many SaaS applications, the anticipated tenant workload pattern is unpredictable and sporadic. In other words, ticket sales may occur at any time. To take advantage of this typical database usage pattern, tenant databases are deployed into elastic pools. Elastic pools optimize the cost of a solution by sharing resources across many databases. With this type of pattern, it's important to monitor database and pool resource usage to ensure that loads are reasonably balanced across pools. You also need to ensure that individual databases have adequate resources, and that pools are not hitting their [eDTU](purchasing-models.md#dtu-based-purchasing-model) limits. This tutorial explores ways to monitor and manage databases and pools, and how to take corrective action in response to variations in workload.
+The Wingtip Tickets SaaS Database Per Tenant app uses a single-tenant data model, where each venue (tenant) has their own database. Like many SaaS applications, the anticipated tenant workload pattern is unpredictable and sporadic. In other words, ticket sales may occur at any time. To take advantage of this typical database usage pattern, tenant databases are deployed into elastic pools. Elastic pools optimize the cost of a solution by sharing resources across many databases. With this type of pattern, it's important to monitor database and pool resource usage to ensure that loads are reasonably balanced across pools. You also need to ensure that individual databases have adequate resources, and that pools are not hitting their [eDTU](purchasing-models.md#dtu-purchasing-model) limits. This tutorial explores ways to monitor and manage databases and pools, and how to take corrective action in response to variations in workload.
In this tutorial you learn how to:
azure-sql Saas Multitenantdb Performance Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/saas-multitenantdb-performance-monitoring.md
Last updated 01/25/2019
In this tutorial, several key performance management scenarios used in SaaS applications are explored. Using a load generator to simulate activity across sharded multi-tenant databases, the built-in monitoring and alerting features of Azure SQL Database are demonstrated.
-The Wingtip Tickets SaaS Multi-tenant Database app uses a sharded multi-tenant data model, where venue (tenant) data is distributed by tenant ID across potentially multiple databases. Like many SaaS applications, the anticipated tenant workload pattern is unpredictable and sporadic. In other words, ticket sales may occur at any time. To take advantage of this typical database usage pattern, databases can be scaled up and down to optimize the cost of a solution. With this type of pattern, it's important to monitor database resource usage to ensure that loads are reasonably balanced across potentially multiple databases. You also need to ensure that individual databases have adequate resources and are not hitting their [DTU](purchasing-models.md#dtu-based-purchasing-model) limits. This tutorial explores ways to monitor and manage databases, and how to take corrective action in response to variations in workload.
+The Wingtip Tickets SaaS Multi-tenant Database app uses a sharded multi-tenant data model, where venue (tenant) data is distributed by tenant ID across potentially multiple databases. Like many SaaS applications, the anticipated tenant workload pattern is unpredictable and sporadic. In other words, ticket sales may occur at any time. To take advantage of this typical database usage pattern, databases can be scaled up and down to optimize the cost of a solution. With this type of pattern, it's important to monitor database resource usage to ensure that loads are reasonably balanced across potentially multiple databases. You also need to ensure that individual databases have adequate resources and are not hitting their [DTU](purchasing-models.md#dtu-purchasing-model) limits. This tutorial explores ways to monitor and manage databases, and how to take corrective action in response to variations in workload.
In this tutorial you learn how to:
azure-sql Service Tier Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-business-critical.md
Title: Business Critical service tier
-description: Learn about the business critical service tier for Azure SQL Database and Azure SQL Managed Instance.
+description: Learn about the Business Critical service tier for Azure SQL Database and Azure SQL Managed Instance.
Previously updated : 12/04/2018 Last updated : 02/02/2022 # Business Critical tier - Azure SQL Database and Azure SQL Managed Instance [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-> [!NOTE]
-> Business Critical tier is called Premium in the DTU purchasing model. For a comparison of the vCore-based purchasing model with the DTU-based purchasing model, see [Azure SQL Database purchasing models and resources](purchasing-models.md).
+Azure SQL Database and Azure SQL Managed Instance are both based on the SQL Server database engine architecture, adjusted for the cloud environment to ensure availability per the default SLA even in cases of infrastructure failures.
-Azure SQL Database and Azure SQL Managed Instance are both based on SQL Server database engine architecture that is adjusted for the cloud environment in order to ensure 99.99% availability even in the cases of infrastructure failures. There are three architectural models that are used:
-- General Purpose/Standard -- Business Critical/Premium-- Hyperscale
+This article describes and compares the Business Critical service tier used by Azure SQL Database and Azure SQL Managed Instance. The Business Critical service tier is best suited for applications that require a high transaction rate, low IO latency, and high IO throughput. This service tier offers the highest resilience to failures and fast failovers using multiple synchronously updated replicas.
-Premium/Business Critical service tier model is based on a cluster of database engine processes. This architectural model relies on a fact that there is always a quorum of available database engine nodes and has minimal performance impact on your workload even during maintenance activities. The hyperscale service tier is currently only available for Azure SQL Database (not SQL Managed Instance), and is a highly scalable storage and compute performance tier that leverages the Azure architecture to scale out the storage and compute resources for a database in Azure SQL Database substantially beyond the limits available for the General Purpose and Business Critical service tiers.
+## Overview
+
+The Business Critical service tier model is based on a cluster of database engine processes. This architectural model relies on the fact that there is always a quorum of available database engine nodes and has minimal performance impact on your workload even during maintenance activities.
Azure upgrades and patches the underlying operating system, drivers, and SQL Server database engine transparently with minimal downtime for end users.
-Premium availability is enabled in Premium and Business Critical service tiers and it is designed for intensive workloads that cannot tolerate any performance impact due to the ongoing maintenance operations.
+Premium availability is enabled in the Business Critical service tier and is designed for intensive workloads that cannot tolerate reduced availability due to the ongoing maintenance operations.
Compute and storage is integrated on the single node in the premium model. High availability in this architectural model is achieved by replication of compute (SQL Server database engine process) and storage (locally attached SSD) deployed to a four node cluster, using technology similar to SQL Server [Always On availability groups](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server).
Compute and storage is integrated on the single node in the premium model. High
Both the SQL Server database engine process and underlying .mdf/.ldf files are placed on the same node with locally attached SSD storage providing low latency to your workload. High availability is implemented using technology similar to SQL Server [Always On availability groups](/sql/database-engine/availability-groups/windows/overview-of-always-on-availability-groups-sql-server). Every database is a cluster of database nodes with one primary database that is accessible for customer workloads, and three secondary processes containing copies of data. The primary node constantly pushes changes to the secondary nodes in order to ensure that the data is available on secondary replicas if the primary node fails for any reason. Failover is handled by the SQL Server database engine - one secondary replica becomes the primary node and a new secondary replica is created to ensure there are enough nodes in the cluster. The workload is automatically redirected to the new primary node.
-In addition, Business Critical cluster has built-in [Read Scale-Out](read-scale-out.md) capability that provides free-of charge built-in read-only node that can be used to run read-only queries (for example reports) that should not affect performance of your primary workload.
+In addition, the Business Critical cluster has a built-in [Read Scale-Out](read-scale-out.md) capability that provides a free-of-charge built-in read-only replica that can be used to run read-only queries (for example, reports) that should not affect the performance of your primary workload.
## When to choose this service tier
-Business Critical service tier is designed for applications that require low-latency responses from the underlying SSD storage (1-2 ms in average), fast recovery if the underlying infrastructure fails, or need to off-load reports, analytics, and read-only queries to the free of charge readable secondary replica of the primary database.
+The Business Critical service tier is designed for applications that require low-latency responses from the underlying SSD storage (1-2 ms on average), fast recovery if the underlying infrastructure fails, or need to off-load reports, analytics, and read-only queries to the free-of-charge readable secondary replica of the primary database.
The key reasons why you should choose Business Critical service tier instead of General Purpose tier are: - **Low I/O latency requirements** - workloads that need a fast response from the storage layer (1-2 milliseconds on average) should use Business Critical tier. -- **Frequent communication between application and database**. Applications that cannot leverage application-layer caching or [request batching](../performance-improve-use-batching.md) and need to send many SQL queries that must be quickly processed are good candidates for the Business Critical tier.-- **Large number of updates** - insert, update, and delete operations modify the data pages in memory (dirty pages) that must be saved to data files with the `CHECKPOINT` operation. A potential database engine process crash or a failover of the database with a large number of dirty pages might increase recovery time in the General Purpose tier. Use the Business Critical tier if you have a workload that causes many in-memory changes. -- **Long running transactions that modify data**. Transactions that are opened for a longer time prevent log file truncation, which might increase log size and the number of [Virtual log files (VLF)](/sql/relational-databases/sql-server-transaction-log-architecture-and-management-guide#physical_arch). A high number of VLFs can slow down recovery of the database after failover. - **Workload with reporting and analytic queries** that can be redirected to the free-of-charge secondary read-only replica. - **Higher resiliency and faster recovery from failures**. In case of a system failure, the database on the primary instance is disabled and one of the secondary replicas immediately becomes the new read-write primary database that is ready to process queries. The database engine doesn't need to analyze and redo transactions from the log file and load all data in the memory buffer.-- **Advanced data corruption protection**.
Business Critical tier leverages database replicas behind-the-scenes for business continuity purposes, and so the service also then leverages automatic page repair, which is the same technology used for SQL Server database [mirroring and availability groups](/sql/sql-server/failover-clusters/automatic-page-repair-availability-groups-database-mirroring). In the event that a replica cannot read a page due to a data integrity issue, a fresh copy of the page will be retrieved from another replica, replacing the unreadable page without data loss or customer downtime. This functionality is applicable in General Purpose tier if the database has geo-secondary replica.-- **Higher availability** - Business Critical tier in Multi-AZ configuration guarantees 99.995% availability, compared to 99.99% of General Purpose tier.-- **Fast geo-recovery** - Business Critical tier configured with geo-replication has a guaranteed Recovery point objective (RPO) of 5 sec and Recovery time objective (RTO) of 30 sec for 100% of deployed hours.
+- **Advanced data corruption protection**. The Business Critical tier leverages database replicas behind the scenes for business continuity purposes, and so the service also leverages automatic page repair, which is the same technology used for SQL Server database [mirroring and availability groups](/sql/sql-server/failover-clusters/automatic-page-repair-availability-groups-database-mirroring). In the event that a replica cannot read a page due to a data integrity issue, a fresh copy of the page will be retrieved from another replica, replacing the unreadable page without data loss or customer downtime. This functionality is applicable in the General Purpose tier if the database has a geo-secondary replica.
+- **Higher availability** - The Business Critical tier in Multi-AZ configuration provides resiliency to zonal failures and a higher availability SLA.
+- **Fast geo-recovery** - The Business Critical tier configured with geo-replication has a guaranteed Recovery Point Objective (RPO) of 5 seconds and Recovery Time Objective (RTO) of 30 seconds for 100% of deployed hours.
+
+## Compare Business Critical resource limits
+
+<!--
+vCore resource limits are listed in the following articles, please be sure to update all of them:
+/database/resource-limits-vcore-single-databases.md
+/database/resource-limits-vcore-elastic-pools.md
+/database/resource-limits-logical-server.md
+/database/service-tier-general-purpose.md
+/database/service-tier-business-critical.md
+/database/service-tier-hyperscale.md
+/managed-instance/resource-limits.md
+-->
+
+Review the table in this section for a brief overview comparison of the resource limits for Azure SQL Database and Azure SQL Managed Instance in the Business Critical service tier.
+
+For comprehensive details about resource limits, review:
+- Azure SQL Database: [vCore single database](resource-limits-vcore-single-databases.md), [vCore pooled databases](resource-limits-vcore-elastic-pools.md), [Hyperscale](service-tier-hyperscale.md), [DTU single database](resource-limits-dtu-single-databases.md) and [DTU pooled databases](resource-limits-dtu-elastic-pools.md)
+- Azure SQL Managed Instance: [vCore instance limits](../managed-instance/resource-limits.md)
+
+To compare features between SQL Database and SQL Managed Instance, see the [database engine features](features-comparison.md).
+
+The following table shows resource limits for both Azure SQL Database and Azure SQL Managed Instance in the Business Critical service tier.
+
+| **Category** | **Azure SQL Database** | **Azure SQL Managed Instance** |
+|:--|:--|:--|
+| **Compute size**|1 to 128 vCores | 4, 8, 16, 24, 32, 40, 64, 80 vCores|
+| **Storage type** |Local SSD storage|Local SSD storage |
+| **Storage size** | 1 GB - 4 TB |32 GB - 16 TB |
+| **Tempdb size** | [32 GB per vCore](resource-limits-vcore-single-databases.md) |Up to 4 TB - [limited by storage size](../managed-instance/resource-limits.md#service-tier-characteristics) |
+| **Log write throughput** | Single databases: [12 MB/s per vCore (max 96 MB/s)](resource-limits-vcore-single-databases.md) <br> Elastic pools: [15 MB/s per vCore (max 120 MB/s)](resource-limits-vcore-elastic-pools.md) | [4 MB/s per vCore (max 48 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) |
+| **Availability** | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/)|
+| **Backups** | RA-GRS, 1-35 days (7 days by default) | RA-GRS, 1-35 days (7 days by default)|
+| **Read-only replicas** |1 built-in, included in price <br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) |1 built-in, included in price <br> 0 - 1 using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) |
+| **Pricing/Billing** |[vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. |[vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/>IOPS is not charged. |
+| **Discount models** |[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions |
+| | |
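The log write throughput entries in the table above follow a simple per-vCore rate with a cap. `bc_log_rate_mbps` is a hypothetical illustration of those documented rates, not a service API:

```python
def bc_log_rate_mbps(vcores: int, deployment: str = "single") -> float:
    """Business Critical log write throughput per the table above:
    single database: 12 MB/s per vCore, capped at 96 MB/s;
    elastic pool: 15 MB/s per vCore, capped at 120 MB/s;
    managed instance: 4 MB/s per vCore, capped at 48 MB/s."""
    per_vcore, cap = {
        "single": (12, 96),
        "pool": (15, 120),
        "managed_instance": (4, 48),
    }[deployment]
    return float(min(per_vcore * vcores, cap))
```

For example, a 4-vCore single database gets 48 MB/s, while a 16-vCore single database hits the 96 MB/s cap.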
+ ## Next steps -- Find resource characteristics (number of cores, I/O, memory) of Business Critical tier in [SQL Managed Instance](../managed-instance/resource-limits.md#service-tier-characteristics), Single database in [vCore model](resource-limits-vcore-single-databases.md#business-criticalprovisioned-computegen4) or [DTU model](resource-limits-dtu-single-databases.md#premium-service-tier), or Elastic pool in [vCore model](resource-limits-vcore-elastic-pools.md#business-criticalprovisioned-computegen4) and [DTU model](resource-limits-dtu-elastic-pools.md#premium-elastic-pool-limits).-- Learn about [General Purpose](service-tier-general-purpose.md) and [Hyperscale](service-tier-hyperscale.md) tiers.
+- Find resource characteristics (number of cores, I/O, memory) of Business Critical tier in [SQL Managed Instance](../managed-instance/resource-limits.md#service-tier-characteristics), Single database in [vCore model](resource-limits-vcore-single-databases.md) or [DTU model](resource-limits-dtu-single-databases.md#premium-service-tier), or Elastic pool in [vCore model](resource-limits-vcore-elastic-pools.md) and [DTU model](resource-limits-dtu-elastic-pools.md#premium-elastic-pool-limits).
+- Learn about [General Purpose](service-tier-general-purpose.md) and [Hyperscale](service-tier-hyperscale.md) service tiers.
- Learn about [Service Fabric](../../service-fabric/service-fabric-overview.md).
-- For more options for high availability and disaster recovery, see [Business Continuity](business-continuity-high-availability-disaster-recover-hadr-overview.md).
+- For more options for high availability and disaster recovery, see [Business Continuity](business-continuity-high-availability-disaster-recover-hadr-overview.md).
azure-sql Service Tier General Purpose https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-general-purpose.md
Previously updated : 02/07/2019
Last updated : 02/02/2022

# General Purpose service tier - Azure SQL Database and Azure SQL Managed Instance
[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-> [!NOTE]
-> The General Purpose service tier in the vCore-based purchasing model is called the standard service tier in the DTU-based purchasing model. For a comparison of the vCore-based purchasing model with the DTU-based purchasing model, see [purchasing models and resources](purchasing-models.md).
+Azure SQL Database and Azure SQL Managed Instance are based on the SQL Server database engine architecture adapted for the cloud environment to ensure default availability even in cases of infrastructure failures.
-Azure SQL Database and Azure SQL Managed Instance are based on the SQL Server database engine architecture adapted for the cloud environment in order to ensure 99.99% availability even in the cases of infrastructure failures.
+This article describes and compares the General Purpose service tier used by Azure SQL Database and Azure SQL Managed Instance. The General Purpose service tier is best used for budget-oriented, balanced compute and storage options.
-There are two service tiers used by Azure SQL Database and SQL Managed Instance:
-- General Purpose
-- Business Critical
-
-Azure SQL Database also has a third service tier, which is currently unavailable for Azure SQL Managed Instance:
-
-- Hyperscale
+## Overview
The architectural model for the General Purpose service tier is based on a separation of compute and storage. This architectural model relies on high availability and reliability of Azure Blob storage that transparently replicates database files and guarantees no data loss if underlying infrastructure failure happens.
Whenever the database engine or operating system is upgraded, some part of under
## When to choose this service tier
-The General Purpose service tier is a default service tier in Azure SQL Database and Azure SQL Managed Instance that is designed for most of generic workloads. If you need a fully managed database engine with 99.99% SLA with storage latency between 5 and 10 ms that match SQL Server on an Azure virtual machine in most of the cases, the General Purpose tier is the option for you.
+The General Purpose service tier is the default service tier in Azure SQL Database and Azure SQL Managed Instance, designed for most generic workloads. If you need a fully managed database engine with the default SLA and storage latency between 5 and 10 ms, the General Purpose tier is the option for you.
+
+## Compare General Purpose resource limits
+
+<!--
+vCore resource limits are listed in the following articles, please be sure to update all of them:
+/database/resource-limits-vcore-single-databases.md
+/database/resource-limits-vcore-elastic-pools.md
+/database/resource-limits-logical-server.md
+/database/service-tier-general-purpose.md
+/database/service-tier-business-critical.md
+/database/service-tier-hyperscale.md
+/managed-instance/resource-limits.md
+-->
+
+Review the table in this section for a brief comparison of the resource limits for Azure SQL Database and Azure SQL Managed Instance in the General Purpose service tier.
+
+For comprehensive details about resource limits, review:
+- Azure SQL Database: [vCore single database](resource-limits-vcore-single-databases.md), [vCore pooled database](resource-limits-vcore-elastic-pools.md), [Hyperscale](service-tier-hyperscale.md), [DTU single database](resource-limits-dtu-single-databases.md), and [DTU pooled databases](resource-limits-dtu-elastic-pools.md)
+- Azure SQL Managed Instance: [vCore instance limits](../managed-instance/resource-limits.md)
+
+To compare features between SQL Database and SQL Managed Instance, see the [database engine features comparison](features-comparison.md).
+
+The following table shows resource limits for both Azure SQL Database and Azure SQL Managed Instance in the General Purpose service tier:
+
+| **Category** | **Azure SQL Database** | **Azure SQL Managed Instance** |
+|:--|:--|:--|
+| **Compute size**| 1 - 80 vCores | 4, 8, 16, 24, 32, 40, 64, 80 vCores|
+| **Storage type** | Remote storage | Remote storage|
+| **Storage size** | 1 GB - 4 TB | 2 GB - 16 TB|
+| **Tempdb size** | [32 GB per vCore](resource-limits-vcore-single-databases.md) | [24 GB per vCore](../managed-instance/resource-limits.md#service-tier-characteristics) |
+| **Log write throughput** | Single databases: [4.5 MB/s per vCore (max 50 MB/s)](resource-limits-vcore-single-databases.md) <br> Elastic pools: [6 MB/s per vCore (max 62.5 MB/s)](resource-limits-vcore-elastic-pools.md) | [3 MB/s per vCore (max 22 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics)|
+| **Availability** | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) | [Default SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/)|
+| **Backups** | 1-35 days (7 days by default) | 1-35 days (7 days by default)|
+| **Read-only replicas** | 0 built-in </br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) | 0 built-in </br> 0 - 1 using [auto-failover groups](auto-failover-group-overview.md#best-practices-for-sql-managed-instance) |
+| **Pricing/Billing** | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged.| [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) are charged. <br/>IOPS is not charged. |
+| **Discount models** |[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions | [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
+| | |
+
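The log-rate rows in the table above follow a linear per-vCore rate with a fixed cap. A minimal sketch of that relationship, using only the values quoted in the table (the function name and deployment keys are illustrative, not an official API):

```python
def gp_log_rate_mbps(vcores: int, deployment: str = "single") -> float:
    """Approximate General Purpose log write throughput in MB/s.

    Values from the comparison table: single database 4.5 MB/s per vCore
    (max 50 MB/s), elastic pool 6 MB/s per vCore (max 62.5 MB/s),
    managed instance 3 MB/s per vCore (max 22 MB/s).
    """
    per_vcore, cap = {
        "single": (4.5, 50.0),
        "elastic_pool": (6.0, 62.5),
        "managed_instance": (3.0, 22.0),
    }[deployment]
    # Linear growth with vCores until the per-deployment ceiling is reached.
    return min(per_vcore * vcores, cap)
```

For example, an 8-vCore single database is limited to 36 MB/s, while a 16-vCore single database already hits the 50 MB/s ceiling.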
## Next steps
-- Find resource characteristics (number of cores, I/O, memory) of the General Purpose/standard tier in [SQL Managed Instance](../managed-instance/resource-limits.md#service-tier-characteristics), single database in [vCore model](resource-limits-vcore-single-databases.md#general-purposeprovisioned-computegen4) or [DTU model](resource-limits-dtu-single-databases.md#single-database-storage-sizes-and-compute-sizes), or elastic pool in [vCore model](resource-limits-vcore-elastic-pools.md#general-purposeprovisioned-computegen4) and [DTU model](resource-limits-dtu-elastic-pools.md#standard-elastic-pool-limits).
-- Learn about [Business Critical](service-tier-business-critical.md) and [Hyperscale](service-tier-hyperscale.md) tiers.
+- Find resource characteristics (number of cores, I/O, memory) of the General Purpose/standard tier in [SQL Managed Instance](../managed-instance/resource-limits.md#service-tier-characteristics), single database in [vCore model](resource-limits-vcore-single-databases.md) or [DTU model](resource-limits-dtu-single-databases.md#single-database-storage-sizes-and-compute-sizes), or elastic pool in [vCore model](resource-limits-vcore-elastic-pools.md) and [DTU model](resource-limits-dtu-elastic-pools.md#standard-elastic-pool-limits).
+- Learn about [Business Critical](service-tier-business-critical.md) and [Hyperscale](service-tier-hyperscale.md) service tiers.
- Learn about [Service Fabric](../../service-fabric/service-fabric-overview.md).
- For more options for high availability and disaster recovery, see [Business Continuity](business-continuity-high-availability-disaster-recover-hadr-overview.md).
azure-sql Service Tier Hyperscale https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tier-hyperscale.md
Hyperscale service tier is only available in [vCore model](service-tiers-vcore.m
For more information about Hyperscale pricing, see [Azure SQL Database Pricing](https://azure.microsoft.com/pricing/details/sql-database/single/)
+## Compare resource limits
+
+<!--
+vCore resource limits are listed in the following articles, please be sure to update all of them:
+/database/resource-limits-vcore-single-databases.md
+/database/resource-limits-vcore-elastic-pools.md
+/database/resource-limits-logical-server.md
+/database/service-tier-general-purpose.md
+/database/service-tier-business-critical.md
+/database/service-tier-hyperscale.md
+/managed-instance/resource-limits.md
+-->
+
+The vCore-based service tiers are differentiated based on database availability and storage type, performance, and maximum storage size, as described in the following table:
+
+|| **General Purpose** | **Hyperscale** | **Business Critical** |
+|:--:|:--:|:--:|:--:|
+| **Best for** |Offers budget-oriented balanced compute and storage options.|Most business workloads. Autoscaling storage size up to 100 TB, fast vertical and horizontal compute scaling, fast database restore.|OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas.|
+| **Resource type** |SQL Database / SQL Managed Instance | Single database | SQL Database / SQL Managed Instance |
+| **Compute size** | 1 to 80 vCores | 1 to 80 vCores<sup>1</sup>| 1 to 80 vCores |
+| **Storage type** | Premium remote storage (per instance) | De-coupled storage with local SSD cache (per instance) | Super-fast local SSD storage (per instance) |
+| **Storage size**<sup>1</sup> | 5 GB – 4 TB | Up to 100 TB | 5 GB – 4 TB |
+| **IOPS** | 500 IOPS per vCore with 7,000 maximum IOPS | Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS will depend on the workload. | 5000 IOPS with 200,000 maximum IOPS|
+|**Availability**| 1 replica, no Read Scale-out, zone-redundant HA (preview), no local cache | Multiple replicas, up to 4 Read Scale-out, partial local cache | 3 replicas, 1 Read Scale-out, zone-redundant HA, full local storage |
+|**Backups** | A choice of geo-redundant, zone-redundant<sup>2</sup>, or locally-redundant<sup>2</sup> backup storage, 1-35 day retention (default 7 days) | A choice of geo-redundant, zone-redundant<sup>3</sup>, or locally-redundant<sup>3</sup> backup storage, 7 day retention. | A choice of geo-redundant, zone-redundant<sup>2</sup>, or locally-redundant<sup>2</sup> backup storage, 1-35 day retention (default 7 days) |
+
+<sup>1</sup> Elastic pools are not supported in the Hyperscale service tier
+<sup>2</sup> In preview
+<sup>3</sup> In preview, for new Hyperscale databases only
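The General Purpose IOPS row above has the same linear-with-cap shape as the other per-vCore limits: 500 IOPS per vCore up to a 7,000 IOPS ceiling. A quick illustrative helper (the function name is made up for this sketch):

```python
def gp_max_iops(vcores: int) -> int:
    """General Purpose data IOPS per the table: 500 per vCore, capped at 7,000."""
    return min(500 * vcores, 7000)
```

So an 8-vCore General Purpose database gets up to 4,000 IOPS, and anything above 14 vCores is bounded by the 7,000 IOPS ceiling rather than the per-vCore rate.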
+
 ## Distributed functions architecture

Unlike traditional database engines that have centralized all of the data management functions in one location/process (even so called distributed databases in production today have multiple copies of a monolithic data engine), a Hyperscale database separates the query processing engine, where the semantics of various data engines diverge, from the components that provide long-term storage and durability for the data. In this way, the storage capacity can be smoothly scaled out as far as needed (initial target is 100 TB). High-availability and named replicas share the same storage components so no data copy is required to spin up a new replica. The following diagram illustrates the different types of nodes in a Hyperscale database:
-![architecture](./media/service-tier-hyperscale/hyperscale-architecture.png)
+![architecture](./media/service-tier-Hyperscale/Hyperscale-architecture.png)
A Hyperscale database contains the following different types of components:
Page servers are systems representing a scaled-out storage engine. Each page se
### Log service
-The log service accepts transaction log records from the primary compute replica, persists them in a durable cache, and forwards the log records to the rest of compute replicas (so they can update their caches) as well as the relevant page server(s), so that the data can be updated there. In this way, all data changes from the primary compute replica are propagated through the log service to all the secondary compute replicas and page servers. Finally, transaction log records are pushed out to long-term storage in Azure Storage, which is a virtually infinite storage repository. This mechanism removes the need for frequent log truncation. The log service also has local memory and SSD caches to speed up access to log records. The log on hyperscale is practically infinite with the restriction that a single transaction cannot generate more than 1TB of log.
+The log service accepts transaction log records from the primary compute replica, persists them in a durable cache, and forwards the log records to the rest of the compute replicas (so they can update their caches) as well as the relevant page server(s), so that the data can be updated there. In this way, all data changes from the primary compute replica are propagated through the log service to all the secondary compute replicas and page servers. Finally, transaction log records are pushed out to long-term storage in Azure Storage, which is a virtually infinite storage repository. This mechanism removes the need for frequent log truncation. The log service also has local memory and SSD caches to speed up access to log records. The log in Hyperscale is practically infinite, with the restriction that a single transaction cannot generate more than 1 TB of log. Additionally, if you use [Change Data Capture](/sql/relational-databases/track-changes/about-change-data-capture-sql-server), at most 1 TB of log can be generated since the start of the oldest active transaction. Avoid unnecessarily large transactions to stay below this limit.
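The two log limits described in the paragraph above (at most 1 TB of log per single transaction, and at most 1 TB generated since the start of the oldest active transaction when Change Data Capture is in use) can be expressed as a small check. This is purely illustrative; the actual enforcement happens inside the service, and the function name is hypothetical:

```python
TB = 1024 ** 4  # bytes in one terabyte

def hyperscale_log_within_limits(txn_log_bytes: int,
                                 log_since_oldest_active_txn_bytes: int,
                                 cdc_enabled: bool) -> bool:
    """Mirror the documented Hyperscale log restrictions."""
    # A single transaction may not generate more than 1 TB of log.
    if txn_log_bytes > TB:
        return False
    # With CDC, at most 1 TB of log since the start of the
    # oldest active transaction.
    if cdc_enabled and log_since_oldest_active_txn_bytes > TB:
        return False
    return True
```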
### Azure storage
azure-sql Service Tiers Dtu https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-dtu.md
Title: Service tiers - DTU-based purchase model
-description: Learn about service tiers in the DTU-based purchase model for Azure SQL Database to provide compute and storage sizes.
+ Title: DTU-based purchasing model
+description: Learn about the DTU-based purchasing model for Azure SQL Database and compare compute and storage sizes based on service tiers.
Previously updated : 8/12/2021
Last updated : 02/02/2022
-# Service tiers in the DTU-based purchase model
+# DTU-based purchasing model overview
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-Service tiers in the DTU-based purchase model are differentiated by a range of compute sizes with a fixed amount of included storage, fixed retention period for backups, and fixed price. All service tiers in the DTU-based purchase model provide flexibility of changing compute sizes with minimal [downtime](https://azure.microsoft.com/support/legal/sla/azure-sql-database); however, there is a switch over period where connectivity is lost to the database for a short amount of time, which can be mitigated using retry logic. Single databases and elastic pools are billed hourly based on service tier and compute size.
+In this article, learn about the DTU-based purchasing model for Azure SQL Database.
-> [!IMPORTANT]
-> [Azure SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) does not support a DTU-based purchasing model.
+To learn more, review [vCore-based purchasing model](service-tiers-vcore.md) and [compare purchasing models](purchasing-models.md).
+
+## Database transaction units (DTUs)
+
+A database transaction unit (DTU) represents a blended measure of CPU, memory, reads, and writes. Service tiers in the DTU-based purchasing model are differentiated by a range of compute sizes with a fixed amount of included storage, fixed retention period for backups, and fixed price. All service tiers in the DTU-based purchasing model provide flexibility of changing compute sizes with minimal [downtime](https://azure.microsoft.com/support/legal/sla/azure-sql-database); however, there is a switch over period where connectivity is lost to the database for a short amount of time, which can be mitigated using retry logic. Single databases and elastic pools are billed hourly based on service tier and compute size.
+
+For a single database at a specific compute size within a [service tier](single-database-scale.md), Azure SQL Database guarantees a certain level of resources for that database (independent of any other database). This guarantee provides a predictable level of performance. The amount of resources allocated for a database is calculated as a number of DTUs and is a bundled measure of compute, storage, and I/O resources.
+
+The ratio among these resources is originally determined by an [online transaction processing (OLTP) benchmark workload](service-tiers-dtu.md) designed to be typical of real-world OLTP workloads. When your workload exceeds the amount of any of these resources, your throughput is throttled, resulting in slower performance and time-outs.
+
+For single databases, the resources used by your workload don't impact the resources available to other databases in the Azure cloud. Likewise, the resources used by other workloads don't impact the resources available to your database.
+
+![Bounding box](./media/purchasing-models/bounding-box.png)
+
+DTUs are most useful for understanding the relative resources that are allocated for databases at different compute sizes and service tiers. For example:
+
+- Doubling the DTUs by increasing the compute size of a database equates to doubling the set of resources available to that database.
+- A premium service tier P11 database with 1750 DTUs provides 350 times more DTU compute power than a basic service tier database with 5 DTUs.
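The relative sizing in the bullets above is plain arithmetic over DTU counts, as a quick check shows:

```python
basic_dtus = 5    # Basic service tier database
p11_dtus = 1750   # Premium service tier P11 database

# The ratio of DTUs is the ratio of relative compute power
# between two service objectives.
ratio = p11_dtus / basic_dtus  # 350, matching the text
```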
+
+To gain deeper insight into the resource (DTU) consumption of your workload, use [query-performance insights](query-performance-insight-use.md) to:
+
+- Identify the top queries by CPU/duration/execution count that can potentially be tuned for improved performance. For example, an I/O-intensive query might benefit from [in-memory optimization techniques](../in-memory-oltp-overview.md) to make better use of the available memory at a certain service tier and compute size.
+- Drill down into the details of a query to view its text and its history of resource usage.
+- Access performance-tuning recommendations that show actions taken by [SQL Database Advisor](database-advisor-implement-performance-recommendations.md).
+
+### Elastic database transaction units (eDTUs)
+
+Rather than provide a dedicated set of resources (DTUs) that might not always be needed, you can place these databases into an [elastic pool](elastic-pool-overview.md). The databases in an elastic pool use a single instance of the database engine and share the same pool of resources.
+
+The shared resources in an elastic pool are measured by elastic database transaction units (eDTUs). Elastic pools provide a simple, cost-effective solution to manage performance goals for multiple databases that have widely varying and unpredictable usage patterns. An elastic pool guarantees that all the resources can't be consumed by one database in the pool, while ensuring that each database in the pool always has a minimum amount of necessary resources available.
+
+A pool is given a set number of eDTUs for a set price. In the elastic pool, individual databases can autoscale within the configured boundaries. A database under a heavier load will consume more eDTUs to meet demand. Databases under lighter loads will consume fewer eDTUs. Databases with no load will consume no eDTUs. Because resources are provisioned for the entire pool, rather than per database, elastic pools simplify your management tasks and provide a predictable budget for the pool.
+
+You can add additional eDTUs to an existing pool with minimal database downtime. Similarly, if you no longer need extra eDTUs, remove them from an existing pool at any time. You can also add databases to or remove databases from a pool at any time. To reserve eDTUs for other databases, limit the number of eDTUs databases can use under a heavy load. If a database has consistently high resource utilization that impacts other databases in the pool, move it out of the pool and configure it as a single database with a predictable amount of required resources.
+
+#### Workloads that benefit from an elastic pool of resources
+Pools are well suited for databases with a low resource-utilization average and relatively infrequent utilization spikes. For more information, see [When should you consider a SQL Database elastic pool?](elastic-pool-overview.md).
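A rough screen for that "low average, infrequent spikes" profile, expressed as a hypothetical helper (this is an illustrative heuristic, not an official sizing rule): a pool is attractive when the databases' combined average load fits the pool's eDTUs even though the sum of their individual peaks would not.

```python
def pool_candidate(peak_dtus: list[float],
                   avg_dtus: list[float],
                   pool_edtus: float) -> bool:
    """Illustrative elastic-pool fit check: average demand fits the pool,
    but provisioning every database for its individual peak would cost more
    than the pool provides."""
    return sum(avg_dtus) <= pool_edtus < sum(peak_dtus)
```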
+
+## Determine the number of DTUs needed by a workload
+
+If you want to migrate an existing on-premises or SQL Server virtual machine workload to SQL Database, see [SKU recommendations](/sql/dm) to understand your database-resource consumption (DTUs) and gain deeper insights for optimizing your workload. The [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database) dynamic management view (DMV) lets you view resource consumption for the last hour. The [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database) catalog view displays resource consumption for the last 14 days, but at a lower fidelity of five-minute averages.
+
+## Determine DTU utilization
+
+To determine the average percentage of DTU/eDTU utilization relative to the DTU/eDTU limit of a database or an elastic pool, use the following formula:
+
+`avg_dtu_percent = MAX(avg_cpu_percent, avg_data_io_percent, avg_log_write_percent)`
+
+The input values for this formula can be obtained from [sys.dm_db_resource_stats](/sql/relational-databases/system-dynamic-management-views/sys-dm-db-resource-stats-azure-sql-database), [sys.resource_stats](/sql/relational-databases/system-catalog-views/sys-resource-stats-azure-sql-database), and [sys.elastic_pool_resource_stats](/sql/relational-databases/system-catalog-views/sys-elastic-pool-resource-stats-azure-sql-database) DMVs. In other words, to determine the percentage of DTU/eDTU utilization toward the DTU/eDTU limit of a database or an elastic pool, pick the largest percentage value from the following: `avg_cpu_percent`, `avg_data_io_percent`, and `avg_log_write_percent` at a given point in time.
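The formula above maps directly to code. A minimal sketch (the function name is illustrative; in practice you would feed it the column values returned by the DMVs named above):

```python
def avg_dtu_percent(avg_cpu_percent: float,
                    avg_data_io_percent: float,
                    avg_log_write_percent: float) -> float:
    """DTU/eDTU utilization is the largest of the three resource
    dimensions; memory is intentionally excluded from the formula."""
    return max(avg_cpu_percent, avg_data_io_percent, avg_log_write_percent)
```

For a sample where CPU is at 60%, data I/O at 85.5%, and log write at 40%, the DTU utilization is 85.5% — the database is I/O-bound at that point in time.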
> [!NOTE]
-> For information about vCore-based service tiers, see [vCore-based service tiers](service-tiers-vcore.md). For information about differentiating DTU-based service tiers and vCore-based service tiers, see [purchasing models](purchasing-models.md).
+> The DTU limit of a database is determined by CPU, reads, writes, and memory available to the database. However, because the SQL Database engine typically uses all available memory for its data cache to improve performance, the `avg_memory_usage_percent` value will usually be close to 100 percent, regardless of current database load. Therefore, even though memory does indirectly influence the DTU limit, it is not used in the DTU utilization formula.
+
+## Hardware generations
+
+In the DTU-based purchasing model, customers cannot choose the hardware generation used for their databases. While a given database usually stays on a specific hardware generation for a long time (commonly for multiple months), there are certain events that can cause a database to be moved to another hardware generation.
+
+For example, a database can be moved to a different hardware generation if it's scaled up or down to a different service objective, or if the current infrastructure in a datacenter is approaching its capacity limits, or if the currently used hardware is being decommissioned due to its end of life.
-## Compare the DTU-based service tiers
+If a database is moved to different hardware, workload performance can change. The DTU model guarantees that the throughput and response time of the [DTU benchmark](./service-tiers-dtu.md#dtu-benchmark) workload will remain substantially identical as the database moves to a different hardware generation, as long as its service objective (the number of DTUs) stays the same.
+
+However, across the wide spectrum of customer workloads running in Azure SQL Database, the impact of using different hardware for the same service objective can be more pronounced. Different workloads will benefit from different hardware configuration and features. Therefore, for workloads other than the DTU benchmark, it's possible to see performance differences if the database moves from one hardware generation to another.
+
+For example, an application that is sensitive to network latency can see better performance on Gen5 hardware vs. Gen4 due to the use of Accelerated Networking in Gen5, but an application using intensive read IO can see better performance on Gen4 hardware versus Gen5 due to a higher memory per core ratio on Gen4.
+
+Customers with workloads that are sensitive to hardware changes or customers who wish to control the choice of hardware generation for their database can use the [vCore](service-tiers-vcore.md) model to choose their preferred hardware generation during database creation and scaling. In the vCore model, resource limits of each service objective on each hardware generation are documented, for both [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md). For more information about hardware generations in the vCore model, see [Hardware generations for SQL Database](./service-tiers-sql-database-vcore.md#hardware-generations) or [Hardware generations for SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations).
+
+## Compare service tiers
Choosing a service tier depends primarily on business continuity, storage, and performance requirements.
Choosing a service tier depends primarily on business continuity, storage, and p
> [!NOTE]
> You can get a free database in Azure SQL Database at the Basic service tier in conjunction with an Azure free account to explore Azure. For information, see [Create a managed cloud database with your Azure free account](https://azure.microsoft.com/free/services/sql-database/).
-## Single database DTU and storage limits
-Compute sizes are expressed in terms of Database Transaction Units (DTUs) for single databases and elastic Database Transaction Units (eDTUs) for elastic pools. For more on DTUs and eDTUs, see [DTU-based purchasing model](purchasing-models.md#dtu-based-purchasing-model).
+## Resource limits
+
+Resource limits differ for single and pooled databases.
+
+### Single database storage limits
+
+Compute sizes are expressed in terms of Database Transaction Units (DTUs) for single databases and elastic Database Transaction Units (eDTUs) for elastic pools. To learn more, review [Resource limits for single databases](resource-limits-dtu-single-databases.md).
||Basic|Standard|Premium|
| :-- | --: | --: | --: |
Compute sizes are expressed in terms of Database Transaction Units (DTUs) for si
> [!IMPORTANT]
> Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md).
-## Elastic pool eDTU, storage, and pooled database limits
+### Elastic pool limits
+
+To learn more, review [Resource limits for pooled databases](resource-limits-dtu-elastic-pools.md).
|| **Basic** | **Standard** | **Premium** |
| :-- | --: | --: | --: |
azure-sql Service Tiers General Purpose Business Critical https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-general-purpose-business-critical.md
- Title: General purpose and business critical service tiers-
-description: The article discusses the general purpose and business critical service tiers in the vCore-based purchasing model used by Azure SQL Database and Azure SQL Managed Instance.
-------- Previously updated : 11/15/2021-
-# Azure SQL Database and Azure SQL Managed Instance service tiers
-
- Two [vCore](service-tiers-vcore.md) service tiers are available in both Azure SQL Database and Azure SQL Managed Instance:
-
-- [General purpose](service-tier-general-purpose.md) is a budget-friendly tier designed for most workloads with common performance and availability requirements.
-- [Business critical](service-tier-business-critical.md) tier is designed for performance-sensitive workloads with strict availability requirements.
-
-Azure SQL Database also provides the Hyperscale service tier:
-
-- [Hyperscale](service-tier-hyperscale.md) is designed for most business workloads, providing highly scalable storage, read scale-out, fast scaling, and fast database restore capabilities.
-
-For a comparison of the vCore-based purchasing model with the DTU-based purchasing model, see [purchasing models and resources](purchasing-models.md).
-
-## Service tier comparison
-
-The following table describes the key differences between service tiers.
-
-|-| Resource type | General Purpose | Hyperscale | Business Critical |
-|::|::|::|::|::|
-| **Best for** | | Offers budget oriented balanced compute and storage options. | Most business workloads. Auto-scaling storage size up to 100 TB, fluid vertical and horizontal compute scaling, fast database restore. | OLTP applications with high transaction rate and low IO latency. Offers highest resilience to failures and fast failovers using multiple synchronously updated replicas.|
-| **Available in resource type:** ||SQL Database / SQL Managed Instance | Single Azure SQL Database | SQL Database / SQL Managed Instance |
-| **Compute size**| SQL Database | 1 to 80 vCores | 1 to 80 vCores | 1 to 128 vCores |
-| | SQL Managed Instance | 4, 8, 16, 24, 32, 40, 64, 80 vCores | N/A | 4, 8, 16, 24, 32, 40, 64, 80 vCores |
-| | SQL Managed Instance pools | 2, 4, 8, 16, 24, 32, 40, 64, 80 vCores | N/A | N/A |
-| **Storage type** | All | Remote storage | Tiered remote and local SSD storage | Local SSD storage |
-| **Database size** | SQL Database | 1 GB – 4 TB | 40 GB - 100 TB | 1 GB – 4 TB |
-| | SQL Managed Instance | 32 GB – 16 TB| N/A | 32 GB – 16 TB |
-| **Storage size** | SQL Database | 1 GB – 4 TB | 40 GB - 100 TB | 1 GB – 4 TB |
-| | SQL Managed Instance | 32 GB – 16 TB | N/A | 32 GB – 16 TB |
-| **TempDB size** | SQL Database | [32 GB per vCore](resource-limits-vcore-single-databases.md) | [32 GB per vCore](resource-limits-vcore-single-databases.md) | [32 GB per vCore](resource-limits-vcore-single-databases.md) |
-| | SQL Managed Instance | [24 GB per vCore](../managed-instance/resource-limits.md#service-tier-characteristics) | N/A | Up to 4 TB - [limited by storage size](../managed-instance/resource-limits.md#service-tier-characteristics) |
-| **Log write throughput** | SQL Database | Single databases: [4.5 MB/s per vCore (max 50 MB/s)](resource-limits-vcore-single-databases.md) <br> Elastic pools: [6 MB/s per vCore (max 62.5 MB/s)](resource-limits-vcore-elastic-pools.md)| 100 MB/s | Single databases: [12 MB/s per vCore (max 96 MB/s)](resource-limits-vcore-single-databases.md) <br> Elastic pools: [15 MB/s per vCore (max 120 MB/s)](resource-limits-vcore-elastic-pools.md)|
-| | SQL Managed Instance | [3 MB/s per vCore (max 22 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) | N/A | [4 MB/s per vCore (max 48 MB/s)](../managed-instance/resource-limits.md#service-tier-characteristics) |
-|**Availability**|SQL Database ([SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-database/))| 99.99% | [99.95% with one secondary replica, 99.99% with more replicas](service-tier-hyperscale-frequently-asked-questions-faq.yml#what-slas-are-provided-for-a-hyperscale-database-) | 99.99% <br/> [99.995% with zone redundant single database](https://azure.microsoft.com/blog/understanding-and-leveraging-azure-sql-database-sla/) |
-| |SQL Managed Instance ([SLA](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/))| 99.99% | [99.95% with one secondary replica, 99.99% with more replicas](service-tier-hyperscale-frequently-asked-questions-faq.yml#what-slas-are-provided-for-a-hyperscale-database-) | 99.99% <br/> [99.995% with zone redundant single database](https://azure.microsoft.com/blog/understanding-and-leveraging-azure-sql-database-sla/) |
-|**Backups**|All|RA-GRS, 1-35 days (7 days by default) | RA-GRS, 7 days, fast point-in-time recovery (PITR) | RA-GRS, 1-35 days (7 days by default) |
-|**In-memory OLTP** | | N/A | Partial support. Memory-optimized table types, table variables, and natively compiled modules are supported. | Available |
-|**Read-only replicas**| | 0 built-in <br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) | 0 - 4 built-in | 1 built-in, included in price <br> 0 - 4 using [geo-replication](active-geo-replication-overview.md) |
-|**Pricing/billing** | SQL Database | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. | [vCore for each replica and used storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS not yet charged. | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. |
-|| SQL Managed Instance | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) is charged. <br/>IOPS is not charged| N/A | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) is charged. <br/>IOPS is not charged.|
-|**Discount models**| | [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions| [Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions| [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
-
-> [!NOTE]
-> For more information on the Service Level Agreement (SLA), see [SLA for Azure SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database/) or [SLA for Azure SQL Managed Instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/).
-
-### Resource limits
-
-For more information on resource limits, see:
--
-## Data and log storage
-
-The following factors affect the amount of storage used for data and log files, and apply to General Purpose and Business Critical tiers. For details on data and log storage in Hyperscale, see [Hyperscale service tier](service-tier-hyperscale.md).
-
-- Each compute size supports a maximum data size, with a default of 32 GB.
-- When you configure maximum data size, an additional 30 percent of storage is automatically added for log files.
-- You can select any maximum data size between 1 GB and the supported storage size maximum, in 1 GB increments.
-- In the General Purpose service tier, `tempdb` uses local SSD storage, and this storage cost is included in the vCore price.
-- In the Business Critical service tier, `tempdb` shares local SSD storage with data and log files, and `tempdb` storage cost is included in the vCore price.
-- The maximum storage size for a SQL Managed Instance must be specified in multiples of 32 GB.
-
-> [!IMPORTANT]
-> In the General Purpose and Business Critical tiers, you are charged for the maximum storage size configured for a database, elastic pool, or managed instance. In the Hyperscale tier, you are charged for the allocated data storage.
-
-To monitor the current allocated and used data storage size in SQL Database, use *allocated_data_storage* and *storage* Azure Monitor [metrics](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserversdatabases) respectively. To monitor total consumed instance storage size for SQL Managed Instance, use the *storage_space_used_mb* [metric](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlmanagedinstances). To monitor the current allocated and used storage size of individual data and log files in a database using T-SQL, use the [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql) view and the [FILEPROPERTY(... , 'SpaceUsed')](/sql/t-sql/functions/fileproperty-transact-sql) function.
-
-> [!TIP]
-> Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md).
-
-## Backups and storage
-
-Storage for database backups is allocated to support the [point-in-time restore (PITR)](recovery-using-backups.md) and [long-term retention (LTR)](long-term-retention-overview.md) capabilities of SQL Database and SQL Managed Instance. This storage is separate from data and log file storage, and is billed separately.
-
-- **PITR**: In General Purpose and Business Critical tiers, individual database backups are copied to [read-access geo-redundant (RA-GRS) storage](../../storage/common/geo-redundant-design.md) automatically. The storage size increases dynamically as new backups are created. The storage is used by full, differential, and transaction log backups. The storage consumption depends on the rate of change of the database and the retention period configured for backups. You can configure a separate retention period for each database between 1 and 35 days for SQL Database, and 0 to 35 days for SQL Managed Instance. A backup storage amount equal to the configured maximum data size is provided at no extra charge.
-- **LTR**: You also have the option to configure long-term retention of full backups for up to 10 years. If you set up an LTR policy, these backups are stored in RA-GRS storage automatically, but you can control how often the backups are copied. To meet different compliance requirements, you can select different retention periods for weekly, monthly, and/or yearly backups. The configuration you choose determines how much storage will be used for LTR backups. For more information, see [Long-term backup retention](long-term-retention-overview.md).
-
-## Next steps
-
-For details about the specific compute and storage sizes available in vCore service tiers, see:
-
-- [vCore-based resource limits for Azure SQL Database](resource-limits-vcore-single-databases.md).
-- [vCore-based resource limits for pooled databases in Azure SQL Database](resource-limits-vcore-elastic-pools.md).
-- [vCore-based resource limits for Azure SQL Managed Instance](../managed-instance/resource-limits.md).
azure-sql Service Tiers Sql Database Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-sql-database-vcore.md
Title: vCore purchase model
+ Title: vCore purchasing model
description: The vCore purchasing model lets you independently scale compute and storage resources, match on-premises performance, and optimize price for Azure SQL Database
Previously updated : 09/10/2021 Last updated : 02/02/2022
-# vCore purchase model overview - Azure SQL Database
+# vCore purchasing model - Azure SQL Database
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-This article reviews the vCore purchase model for [Azure SQL Database](sql-database-paas-overview.md). For more information on choosing between the vCore and DTU purchase models, see [Choose between the vCore and DTU purchasing models](purchasing-models.md).
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](service-tiers-sql-database-vcore.md)
+> * [Azure SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md)
-The virtual core (vCore) purchase model used by Azure SQL Database provides several benefits over the DTU purchase model:
+This article reviews the [vCore purchasing model](service-tiers-vcore.md) for [Azure SQL Database](sql-database-paas-overview.md). For help choosing between the vCore and DTU purchasing models, see the [differences between the vCore and DTU purchasing models](purchasing-models.md).
+
+## Overview
++
+The vCore purchasing model used by Azure SQL Database provides several benefits over the DTU purchasing model:
- Higher compute, memory, I/O, and storage limits.
- Control over the hardware generation to better match compute and memory requirements of the workload.
- Pricing discounts for [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md).
- Greater transparency in the hardware details that power the compute, which facilitates planning for migrations from on-premises deployments.
-- [Reserved instance pricing](reserved-capacity-overview.md) is only available for vCore purchase model.
+- [Reserved instance pricing](reserved-capacity-overview.md) is only available for vCore purchasing model.
+- Higher scaling granularity with multiple compute sizes available.
+ ## Service tiers
-Service tier options in the vCore purchase model include General Purpose, Business Critical, and Hyperscale. The service tier generally defines the storage architecture, space and I/O limits, and business continuity options related to availability and disaster recovery.
+Service tier options in the vCore purchasing model include General Purpose, Business Critical, and Hyperscale. The service tier generally defines hardware, storage type and IOPS, high availability and disaster recovery options, and other features such as memory-optimized object types.
+
+For more detail, review resource limits for [logical servers](resource-limits-logical-server.md), [single databases](resource-limits-vcore-single-databases.md), and [pooled databases](resource-limits-vcore-elastic-pools.md).
|**Use case**|**General Purpose**|**Business Critical**|**Hyperscale**|
|||||
-|Best for|Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. |Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance per database replica.|Most business workloads with highly scalable storage and read-scale requirements. Offers higher resilience to failures by allowing configuration of more than one isolated database replica. |
-|Storage|Uses remote storage.<br/>**SQL Database provisioned compute**:<br/>5 GB – 4 TB<br/>**Serverless compute**:<br/>5 GB - 3 TB|Uses local SSD storage.<br/>**SQL Database provisioned compute**:<br/>5 GB – 4 TB|Flexible autogrow of storage as needed. Supports up to 100 TB of storage. Uses local SSD storage for local buffer-pool cache and local data storage. Uses Azure remote storage as final long-term data store. |
-|IOPS and throughput (approximate)|**SQL Database**: See resource limits for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).|See resource limits for [single databases](resource-limits-vcore-single-databases.md) and [elastic pools](resource-limits-vcore-elastic-pools.md).|Hyperscale is a multi-tiered architecture with caching at multiple levels. Effective IOPS and throughput will depend on the workload.|
-|Availability|1 replica, no read-scale replicas, <br/>zone-redundant high availability (HA) (preview)|3 replicas, 1 [read-scale replica](read-scale-out.md),<br/>zone-redundant high availability (HA)|1 read-write replica, plus 0-4 [read-scale replicas](read-scale-out.md)|
-|Backups|A choice of geo-redundant, zone-redundant\*, or locally-redundant\* backup storage, 1-35 day retention (default 7 days)|A choice of geo-redundant, zone-redundant\*, or locally-redundant\* backup storage, 1-35 day retention (default 7 days)|A choice of geo-redundant, zone-redundant\*\*, or locally-redundant\*\* backup storage, 7 day retention.<p>Snapshot-based backups in Azure remote storage. Restores use snapshots for fast recovery. Backups are instantaneous and don't impact compute I/O performance. Restores are fast and aren't a size-of-data operation (taking minutes rather than hours).|
-|In-memory|Not supported|Supported|Partial support. Memory-optimized table types, table variables, and natively compiled modules are supported.|
-|||
+|**Best for**|Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. |Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance per database replica.|Most business workloads with highly scalable storage and read-scale requirements. Offers higher resilience to failures by allowing configuration of more than one isolated database replica. |
+|**Availability**|1 replica, no read-scale replicas, <br/>zone-redundant high availability (HA) (preview)|3 replicas, 1 [read-scale replica](read-scale-out.md),<br/>zone-redundant high availability (HA)|1 read-write replica, plus 0-4 [read-scale replicas](read-scale-out.md)|
+|**Pricing/billing** | [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. |[vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS is not charged. | [vCore for each replica and used storage](https://azure.microsoft.com/pricing/details/sql-database/single/) are charged. <br/>IOPS not yet charged. |
+|**Discount models**| [Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions | [Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
+| | |
-\* In preview
-\*\* In preview, for new Hyperscale databases only
+> [!NOTE]
> For more information on the Service Level Agreement (SLA), see [SLA for Azure SQL Database](https://azure.microsoft.com/support/legal/sla/azure-sql-database/).
### Choosing a service tier
For information on selecting a service tier for your particular workload, see th
- [When to choose the Business Critical service tier](service-tier-business-critical.md#when-to-choose-this-service-tier)
- [When to choose the Hyperscale service tier](service-tier-hyperscale.md#who-should-consider-the-hyperscale-service-tier)
+## Resource limits
-## Compute tiers
-
-Compute tier options in the vCore model include the provisioned and serverless compute tiers.
--
-### Provisioned compute
+For vCore resource limits, see [logical servers](resource-limits-logical-server.md), [single databases](resource-limits-vcore-single-databases.md), and [pooled databases](resource-limits-vcore-elastic-pools.md).
-The provisioned compute tier provides a specific amount of compute resources that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour.
--
-### Serverless compute
+## Compute tiers
-The [serverless compute tier](serverless-tier-overview.md) auto-scales compute resources based on workload activity, and bills for the amount of compute used per second.
+Compute tier options in the vCore model include the provisioned and [serverless](serverless-tier-overview.md) compute tiers.
+- The **provisioned compute tier** provides a specific amount of compute resources that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour.
+- The **serverless compute tier** auto-scales compute resources based on workload activity, and bills for the amount of compute used per second.
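The two billing schemes can be contrasted with a toy calculation (the per-vCore rates below are hypothetical placeholders, not actual Azure SQL Database prices):

```python
# Illustrative sketch of the two compute billing schemes.
# The rates are hypothetical, not actual Azure SQL Database prices.
PROVISIONED_RATE_PER_VCORE_HOUR = 0.10          # assumed rate (USD)
SERVERLESS_RATE_PER_VCORE_SECOND = 0.10 / 3600  # assumed per-second rate

def provisioned_cost(vcores: int, hours: float) -> float:
    """Provisioned tier: billed for provisioned compute at a fixed hourly
    price, regardless of how busy the database is."""
    return vcores * PROVISIONED_RATE_PER_VCORE_HOUR * hours

def serverless_cost(vcore_seconds_used: float) -> float:
    """Serverless tier: billed per second, only for the compute used."""
    return vcore_seconds_used * SERVERLESS_RATE_PER_VCORE_SECOND

# An 8-vCore database provisioned for a day, vs. a serverless database that
# used the equivalent of 2 vCores for 6 hours:
always_on = provisioned_cost(8, 24)
bursty = serverless_cost(2 * 6 * 3600)
```

With these assumed rates, the idle-heavy workload is far cheaper on serverless, which is the scenario the serverless tier targets.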
## Hardware generations
DC-series is only supported for the Provisioned compute (Serverless is not suppo
To access DC-series, the subscription must be a paid offer type including Pay-As-You-Go or Enterprise Agreement (EA). For a complete list of Azure offer types supported by DC-series, see [current offers without spending limits](https://azure.microsoft.com/support/legal/offer-details).
-### Compute and memory specifications
--
-|Hardware generation |Compute |Memory |
-|:|:|:|
-|Gen4 |- Intel&reg; E5-2673 v3 (Haswell) 2.4-GHz processors<br>- Provision up to 24 vCores (1 vCore = 1 physical core) |- 7 GB per vCore<br>- Provision up to 168 GB|
-|Gen5 |**Provisioned compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz, Intel&reg; SP-8160 (Skylake)\*, and Intel&reg; 8272CL (Cascade Lake) 2.5 GHz\* processors<br>- Provision up to 80 vCores (1 vCore = 1 hyper-thread)<br><br>**Serverless compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz and Intel&reg; SP-8160 (Skylake)* processors<br>- Auto-scale up to 40 vCores (1 vCore = 1 hyper-thread)|**Provisioned compute**<br>- 5.1 GB per vCore<br>- Provision up to 408 GB<br><br>**Serverless compute**<br>- Auto-scale up to 24 GB per vCore<br>- Auto-scale up to 120 GB max|
-|Fsv2-series |- Intel&reg; 8168 (Skylake) processors<br>- Featuring a sustained all core turbo clock speed of 3.4 GHz and a maximum single core turbo clock speed of 3.7 GHz.<br>- Provision up to 72 vCores (1 vCore = 1 hyper-thread)|- 1.9 GB per vCore<br>- Provision up to 136 GB|
-|M-series |- Intel&reg; E7-8890 v3 2.5 GHz and Intel&reg; 8280M 2.7 GHz (Cascade Lake) processors<br>- Provision up to 128 vCores (1 vCore = 1 hyper-thread)|- 29 GB per vCore<br>- Provision up to 3.7 TB|
-|DC-series | - Intel XEON E-2288G processors<br>- Featuring Intel Software Guard Extensions (Intel SGX)<br>- Provision up to 8 vCores (1 vCore = 1 physical core) | 4.5 GB per vCore |
-
-\* In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, hardware generation for databases using Intel&reg; SP-8160 (Skylake) processors appears as Gen6, while hardware generation for databases using Intel&reg; 8272CL (Cascade Lake) appears as Gen7. Resource limits for all Gen5 databases are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
-
-For more information on resource limits, see [Resource limits for single databases (vCore)](resource-limits-vcore-single-databases.md), or [Resource limits for elastic pools (vCore)](resource-limits-vcore-elastic-pools.md).
### Selecting a hardware generation
If you need DC-series in a currently unsupported region, [submit a support ticke
:::image type="content" source="./media/service-tiers-vcore/request-dc-series.png" alt-text="Request DC-series in a new region" loc-scope="azure-portal":::
+## Compute and memory
+
+The following table compares compute and memory between the different generations and compute tiers:
+
+|Hardware generation |Compute |Memory |
+|:|:|:|
+|Gen4 |- Intel&reg; E5-2673 v3 (Haswell) 2.4-GHz processors<br>- Provision up to 24 vCores (1 vCore = 1 physical core) |- 7 GB per vCore<br>- Provision up to 168 GB|
+|Gen5 |**Provisioned compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz, Intel&reg; SP-8160 (Skylake)\*, and Intel&reg; 8272CL (Cascade Lake) 2.5 GHz\* processors<br>- Provision up to 80 vCores (1 vCore = 1 hyper-thread)<br><br>**Serverless compute**<br>- Intel&reg; E5-2673 v4 (Broadwell) 2.3-GHz and Intel&reg; SP-8160 (Skylake)* processors<br>- Auto-scale up to 40 vCores (1 vCore = 1 hyper-thread)|**Provisioned compute**<br>- 5.1 GB per vCore<br>- Provision up to 408 GB<br><br>**Serverless compute**<br>- Auto-scale up to 24 GB per vCore<br>- Auto-scale up to 120 GB max|
+|Fsv2-series |- Intel&reg; 8168 (Skylake) processors<br>- Featuring a sustained all core turbo clock speed of 3.4 GHz and a maximum single core turbo clock speed of 3.7 GHz.<br>- Provision up to 72 vCores (1 vCore = 1 hyper-thread)|- 1.9 GB per vCore<br>- Provision up to 136 GB|
+|M-series |- Intel&reg; E7-8890 v3 2.5 GHz and Intel&reg; 8280M 2.7 GHz (Cascade Lake) processors<br>- Provision up to 128 vCores (1 vCore = 1 hyper-thread)|- 29 GB per vCore<br>- Provision up to 3.7 TB|
+|DC-series | - Intel XEON E-2288G processors<br>- Featuring Intel Software Guard Extensions (Intel SGX)<br>- Provision up to 8 vCores (1 vCore = 1 physical core) | 4.5 GB per vCore |
+
+\* In the [sys.dm_user_db_resource_governance](/sql/relational-databases/system-dynamic-management-views/sys-dm-user-db-resource-governor-azure-sql-database) dynamic management view, hardware generation for databases using Intel&reg; SP-8160 (Skylake) processors appears as Gen6, hardware generation for databases using Intel&reg; 8272CL (Cascade Lake) appears as Gen7, and hardware generation for databases using Intel Xeon&reg; Platinum 8307C (Ice Lake) appears as Gen8. Resource limits for all Gen5 databases are the same regardless of processor type (Broadwell, Skylake, or Cascade Lake).
+
+For more information on vCore resource limits, review [single databases](resource-limits-vcore-single-databases.md) or [pooled databases](resource-limits-vcore-elastic-pools.md).
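As a quick sanity check of the per-vCore memory figures in the table above, the provisioned maximums follow from memory per vCore times the vCore ceiling (a sketch with values copied from the table, not an official formula):

```python
# Max provisioned memory = memory per vCore * max vCores, per the table above.
MEMORY_PER_VCORE_GB = {"Gen4": 7.0, "Gen5": 5.1, "M-series": 29.0}
MAX_VCORES = {"Gen4": 24, "Gen5": 80, "M-series": 128}

def max_memory_gb(generation: str) -> float:
    """Return the maximum provisionable memory for a hardware generation."""
    return MEMORY_PER_VCORE_GB[generation] * MAX_VCORES[generation]

# Gen4: 7 GB x 24 vCores = 168 GB; Gen5: 5.1 GB x 80 vCores = 408 GB;
# M-series: 29 GB x 128 vCores = 3712 GB (about 3.7 TB).
```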
++
## Next steps
- To get started, see [Creating a SQL Database using the Azure portal](single-database-create-quickstart.md)
azure-sql Service Tiers Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/service-tiers-vcore.md
Title: vCore purchase model
+ Title: vCore purchasing model
description: The vCore purchasing model lets you independently scale compute and storage resources, match on-premises performance, and optimize price for Azure SQL Database and Azure SQL Managed Instance.
Previously updated : 05/18/2021 Last updated : 02/02/2022
-# vCore model overview - Azure SQL Database and Azure SQL Managed Instance
+# vCore purchasing model overview - Azure SQL Database and Azure SQL Managed Instance
[!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqldb-sqlmi.md)]
-The virtual core (vCore) purchasing model used by Azure SQL Database and Azure SQL Managed Instance provides several benefits:
+This article provides a brief overview of the vCore purchasing model used by both Azure SQL Database and Azure SQL Managed Instance. To learn more about the vCore model for each product, review [Azure SQL Database](service-tiers-sql-database-vcore.md) and [Azure SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md).
-- Control over the hardware generation to better match compute and memory requirements of the workload.
-- Pricing discounts for [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md) and [Reserved Instance (RI)](reserved-capacity-overview.md).
-- Greater transparency in the hardware details that power the compute, which facilitates planning for migrations from on-premises deployments.
-- In the case of Azure SQL Database, vCore purchasing model provides higher compute, memory, I/O, and storage limits than the DTU model.
+## Overview
-For more information on choosing between the vCore and DTU purchase models, see [Choose between the vCore and DTU purchasing models](purchasing-models.md).
++
+The vCore purchasing model provides transparency in the hardware details that power compute, control over the hardware generation, higher scaling granularity, and pricing discounts with the [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md) and [Reserved Instance (RI)](../database/reserved-capacity-overview.md).
+
+In the case of Azure SQL Database, the vCore purchasing model provides higher compute, memory, I/O, and storage limits than the DTU model.
## Service tiers
-The following articles provide specific information on the vCore purchase model in each product.
+Two vCore service tiers are available in both Azure SQL Database and Azure SQL Managed Instance:
+
+- [General purpose](service-tier-general-purpose.md) is a budget-friendly tier designed for most workloads with common performance and availability requirements.
+- [Business critical](service-tier-business-critical.md) is designed for performance-sensitive workloads with strict availability requirements.
+
+The [Hyperscale service tier](service-tier-hyperscale.md) is also available for single databases in Azure SQL Database. This service tier is designed for most business workloads, providing highly scalable storage, read scale-out, fast scaling, and fast database restore capabilities.
+
+## Resource limits
+
+For more information on resource limits, see:
+
+ - Azure SQL Database: [logical server](resource-limits-logical-server.md), [single databases](resource-limits-vcore-single-databases.md), [pooled databases](resource-limits-vcore-elastic-pools.md)
+ - [Azure SQL Managed Instance](../managed-instance/resource-limits.md)
+
+## Compute cost
+
+The vCore-based purchasing model has a provisioned compute tier for both Azure SQL Database and Azure SQL Managed Instance, and a serverless compute tier for Azure SQL Database.
+
+In the provisioned compute tier, the compute cost reflects the total compute capacity continuously provisioned for the application independent of workload activity. Choose the resource allocation that best suits your business needs based on vCore and memory requirements, then scale resources up and down as needed by your workload.
+
+In the serverless compute tier for Azure SQL Database, compute resources are auto-scaled based on workload activity, and billed for the amount of compute used per second.
+
+Since three additional replicas are automatically allocated in the Business Critical service tier, the price is approximately 2.7 times higher than it is in the General Purpose service tier. Likewise, the higher storage price per GB in the Business Critical service tier reflects the higher IO limits and lower latency of the local SSD storage.
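The replica-count pricing relationship described above can be sketched numerically (the base rate is a hypothetical placeholder; actual prices vary by region, hardware, and tier):

```python
# Sketch of the compute-price relationship described above: Business Critical
# keeps three additional replicas, so its compute price is roughly 2.7x the
# General Purpose price. The base rate below is hypothetical, not a real price.
GP_RATE_PER_VCORE_HOUR = 0.10  # assumed illustrative rate (USD)
BC_PRICE_MULTIPLIER = 2.7      # approximate ratio stated in this article

def monthly_compute_cost(vcores: int, rate: float, hours: int = 730) -> float:
    """Monthly compute cost for a provisioned database (730 hours/month)."""
    return vcores * rate * hours

gp_cost = monthly_compute_cost(8, GP_RATE_PER_VCORE_HOUR)
bc_cost = monthly_compute_cost(8, GP_RATE_PER_VCORE_HOUR * BC_PRICE_MULTIPLIER)
```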
+
+## Data and log storage
+
+The following factors affect the amount of storage used for data and log files, and apply to General Purpose and Business Critical tiers.
+
+- Each compute size supports a configurable maximum data size, with a default of 32 GB.
+- When you configure maximum data size, an additional 30 percent of billable storage is automatically added for the log file.
+- In the General Purpose service tier, `tempdb` uses local SSD storage, and this storage cost is included in the vCore price.
+- In the Business Critical service tier, `tempdb` shares local SSD storage with data and log files, and `tempdb` storage cost is included in the vCore price.
+- In the General Purpose and Business Critical tiers, you are charged for the maximum storage size configured for a database, elastic pool, or managed instance.
+- For SQL Database, you can select any maximum data size between 1 GB and the supported storage size maximum, in 1 GB increments. For SQL Managed Instance, select data sizes in multiples of 32 GB up to the supported storage size maximum.
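The billing rules in the list above can be sketched as a small helper (an illustration only; the 30 percent log allowance and 32 GB granularity come from this article, while the rounding helper is a hypothetical convenience, not an Azure API):

```python
# Billable storage per the rules above: the configured max data size plus
# 30 percent automatically added for the log file.
def billed_storage_gb(max_data_size_gb: float) -> float:
    return max_data_size_gb * 1.30

# SQL Managed Instance storage is selected in multiples of 32 GB; this
# hypothetical helper rounds a requested size up to the next valid multiple.
def mi_storage_size_gb(requested_gb: int) -> int:
    return ((requested_gb + 31) // 32) * 32
```

For example, a 100 GB max data size is billed as 130 GB of storage, and a 100 GB request on SQL Managed Instance rounds up to the 128 GB step.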
+
+To monitor the current allocated and used data storage size in SQL Database, use the *allocated_data_storage* and *storage* Azure Monitor [metrics](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlserversdatabases) respectively. To monitor the total consumed instance storage size for SQL Managed Instance, use the *storage_space_used_mb* [metric](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlmanagedinstances).
+
+For both SQL Database and SQL Managed instance, to monitor the current allocated and used storage size of individual data and log files in a database by using T-SQL, use the [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql) view and the [FILEPROPERTY(... , 'SpaceUsed')](/sql/t-sql/functions/fileproperty-transact-sql) function.
+
+> [!TIP]
+> Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](file-space-manage.md).
+
+## Backup storage
+
+Storage for database backups is allocated to support the [point-in-time restore (PITR)](recovery-using-backups.md) and [long-term retention (LTR)](long-term-retention-overview.md) capabilities of SQL Database and SQL Managed Instance. This storage is separate from data and log file storage, and is billed separately.
-- For information on Azure SQL Database service tiers for the vCore model, see [vCore model overview - Azure SQL Database](service-tiers-sql-database-vcore.md).
-- For information on Azure SQL Managed Instance service tiers for the vCore model, see [vCore model overview - Azure SQL Managed Instance](../managed-instance/service-tiers-managed-instance-vcore.md).
+- **PITR**: In General Purpose and Business Critical tiers, individual database backups are copied to [Azure storage](automated-backups-overview.md#restore-capabilities) automatically. The storage size increases dynamically as new backups are created. The storage is used by full, differential, and transaction log backups. The storage consumption depends on the rate of change of the database and the retention period configured for backups. You can configure a separate retention period for each database between 1 and 35 days for SQL Database, and 0 to 35 days for SQL Managed Instance. A backup storage amount equal to the configured maximum data size is provided at no extra charge.
+- **LTR**: You also have the option to configure long-term retention of full backups for up to 10 years. If you set up an LTR policy, these backups are stored in Azure Blob storage automatically, but you can control how often the backups are copied. To meet different compliance requirements, you can select different retention periods for weekly, monthly, and/or yearly backups. The configuration you choose determines how much storage will be used for LTR backups. For more information, see [Long-term backup retention](long-term-retention-overview.md).
## Next steps
To get started, see:
- [Azure SQL Managed Instance single instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql-managed-instance/single/)
- [Azure SQL Managed Instance pools pricing page](https://azure.microsoft.com/pricing/details/azure-sql-managed-instance/pools/)
-For details about the specific compute and storage sizes available in the general purpose and business critical service tiers, see:
+For details about the specific compute and storage sizes available in the General Purpose and Business Critical service tiers, see:
- [vCore-based resource limits for Azure SQL Database](resource-limits-vcore-single-databases.md).
- [vCore-based resource limits for pooled Azure SQL Database](resource-limits-vcore-elastic-pools.md).
azure-sql Sql Database Paas Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/sql-database-paas-overview.md
SQL Database delivers predictable performance with multiple resource types, serv
## Scalable performance and pools

You can define the amount of resources assigned.

-- With single databases, each database is isolated from others and is portable. Each has its own guaranteed amount of compute, memory, and storage resources. The amount of the resources assigned to the database is dedicated to that database, and isn't shared with other databases in Azure. You can dynamically [scale single database resources](single-database-scale.md) up and down. The single database option provides different compute, memory, and storage resources for different needs. For example, you can get 1 to 80 vCores, or 32 GB to 4 TB. The [hyperscale service tier](service-tier-hyperscale.md) for single databases enables you to scale to 100 TB, with fast backup and restore capabilities.
+- With single databases, each database is isolated from others and is portable. Each has its own guaranteed amount of compute, memory, and storage resources. The amount of the resources assigned to the database is dedicated to that database, and isn't shared with other databases in Azure. You can dynamically [scale single database resources](single-database-scale.md) up and down. The single database option provides different compute, memory, and storage resources for different needs. For example, you can get 1 to 80 vCores, or 32 GB to 4 TB. The [Hyperscale service tier](service-tier-hyperscale.md) for single databases enables you to scale to 100 TB, with fast backup and restore capabilities.
- With elastic pools, you can assign resources that are shared by all databases in the pool. You can create a new database, or move the existing single databases into a resource pool to maximize the use of resources and save money. This option also gives you the ability to dynamically [scale elastic pool resources](elastic-pool-scale.md) up and down. You can build your first app on a small, single database at a low cost per month in the general-purpose service tier. You can then change its service tier manually or programmatically at any time to the business-critical service tier, to meet the needs of your solution. You can adjust performance without downtime to your app or to your customers. Dynamic scalability enables your database to transparently respond to rapidly changing resource requirements. You pay for only the resources that you need when you need them.
azure-sql Transparent Data Encryption Byok Create Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-create-server.md
This how-to guide outlines the steps to create an Azure SQL logical [server](log
## Prerequisites

- This how-to guide assumes that you've already created an [Azure Key Vault](../../key-vault/general/quick-create-portal.md) and imported a key into it to use as the TDE protector for Azure SQL Database. For more information, see [transparent data encryption with BYOK support](transparent-data-encryption-byok-overview.md).
+- Soft-delete and purge protection must be enabled on the key vault.
- You must have created a [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md#managed-identity-types) and provided it the required TDE permissions (*Get, Wrap Key, Unwrap Key*) on the above key vault. For creating a user-assigned managed identity, see [Create a user-assigned managed identity](/azure/active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal).
- You must have Azure PowerShell installed and running.
- [Recommended but optional] Create the key material for the TDE protector in a hardware security module (HSM) or local key store first, and import the key material to Azure Key Vault. Follow the [instructions for using a hardware security module (HSM) and Key Vault](../../key-vault/general/overview.md) to learn more.
To get your user-assigned managed identity **Resource ID**, search for **Managed
## Next steps

-- Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: [Turn on TDE using your own key from Key Vault](transparent-data-encryption-byok-configure.md).
+- Get started with Azure Key Vault integration and Bring Your Own Key support for TDE: [Turn on TDE using your own key from Key Vault](transparent-data-encryption-byok-configure.md).
azure-sql Transparent Data Encryption Byok Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/transparent-data-encryption-byok-overview.md
Auditors can use Azure Monitor to review key vault AuditEvent logs, if logging i
### Requirements for configuring AKV

- Key vault and SQL Database/managed instance must belong to the same Azure Active Directory tenant. Cross-tenant key vault and server interactions aren't supported. To move resources afterwards, TDE with AKV will have to be reconfigured. Learn more about [moving resources](../../azure-resource-manager/management/move-resource-group-and-subscription.md).
+- [Soft-delete](../../key-vault/general/soft-delete-overview.md) and [purge protection](../../key-vault/general/soft-delete-overview.md#purge-protection) features must be enabled on the key vault to protect from data loss due to accidental key (or key vault) deletion.
+- Grant the server or managed instance access to the key vault (*get*, *wrapKey*, *unwrapKey*) using its Azure Active Directory identity. The server identity can be a system-assigned managed identity or a user-assigned managed identity assigned to the server. When using the Azure portal, the Azure AD identity gets automatically created when the server is created. When using PowerShell or Azure CLI, the Azure AD identity must be explicitly created and should be verified. See [Configure TDE with BYOK](transparent-data-encryption-byok-configure.md) and [Configure TDE with BYOK for SQL Managed Instance](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md) for detailed step-by-step instructions when using PowerShell.
+ - Depending on the permission model of the key vault (access policy or Azure RBAC), key vault access can be granted either by creating an access policy on the key vault, or by creating a new Azure RBAC role assignment with the role [Key Vault Crypto Service Encryption User](../../key-vault/general/rbac-guide.md#azure-built-in-roles-for-key-vault-data-plane-operations).
+
+- When using a firewall with AKV, you must enable the option *Allow trusted Microsoft services to bypass the firewall*.
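The access-grant step above can be sketched with the Azure CLI (a hedged sketch: the object ID and vault resource ID are placeholders for your environment):

```azurecli
# Sketch: grant the server's Azure AD identity key access using Azure RBAC.
# The assignee object ID and the vault resource ID are placeholders.
az role assignment create \
    --role "Key Vault Crypto Service Encryption User" \
    --assignee-object-id <server-identity-object-id> \
    --assignee-principal-type ServicePrincipal \
    --scope "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.KeyVault/vaults/MyKeyVault"
```

If the key vault uses the access policy permission model instead, grant the *get*, *wrapKey*, and *unwrapKey* permissions through an access policy rather than a role assignment.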
-- [Soft-delete](../../key-vault/general/soft-delete-overview.md) and [Purge protection](../../key-vault/general/soft-delete-overview.md#purge-protection) features must be enabled on the key vault to protect from data loss due to accidental key (or key vault) deletion.
- - Soft-deleted resources are retained for 90 days, unless recovered or purged by the customer. The *recover* and *purge* actions have their own permissions associated in a key vault access policy. The Soft-delete feature can be enabled using the Azure portal, [PowerShell](../../key-vault/general/key-vault-recovery.md?tabs=azure-powershell) or [Azure CLI](../../key-vault/general/key-vault-recovery.md?tabs=azure-cli).
- - Purge protection can be turned on using [Azure CLI](../../key-vault/general/key-vault-recovery.md?tabs=azure-cli) or [PowerShell](../../key-vault/general/key-vault-recovery.md?tabs=azure-powershell). When purge protection is enabled, a vault or an object in the deleted state cannot be purged until the retention period has passed. The default retention period is 90 days, but is configurable from 7 to 90 days through the Azure portal.
+### Enable soft-delete and purge protection for AKV
> [!IMPORTANT]
-> Both Soft-delete and Purge protection must be enabled on the key vault(s) for servers being configured with customer-managed TDE, as well as existing servers using customer-managed TDE.
+> Both **soft-delete** and **purge protection** must be enabled on the key vault when configuring customer-managed TDE on a new or existing server or managed instance.
-- Grant the server or managed instance access to the key vault (*get*, *wrapKey*, *unwrapKey*) using its Azure Active Directory identity. The server identity can be a system-assigned managed identity or a user-assigned managed identity assigned to the server. When using the Azure portal, the Azure AD identity gets automatically created when the server is created. When using PowerShell or Azure CLI, the Azure AD identity must be explicitly created and should be verified. See [Configure TDE with BYOK](transparent-data-encryption-byok-configure.md) and [Configure TDE with BYOK for SQL Managed Instance](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md) for detailed step-by-step instructions when using PowerShell.
- - Depending on the permission model of the key vault (access policy or Azure RBAC), key vault access can be granted either by creating an access policy on the key vault, or by creating a new Azure RBAC role assignment with the role [Key Vault Crypto Service Encryption User](../../key-vault/general/rbac-guide.md#azure-built-in-roles-for-key-vault-data-plane-operations).
+[Soft-delete](../../key-vault/general/soft-delete-overview.md) and [purge protection](../../key-vault/general/soft-delete-overview.md#purge-protection) are important features of Azure Key Vault that allow recovery of deleted vaults and deleted key vault objects, reducing the risk of a user accidentally or maliciously deleting a key or a key vault.
+
+- Soft-deleted resources are retained for 90 days, unless recovered or purged by the customer. The *recover* and *purge* actions have their own permissions associated in a key vault access policy. The soft-delete feature is on by default for new key vaults and can also be enabled using the Azure portal, [PowerShell](../../key-vault/general/key-vault-recovery.md?tabs=azure-powershell) or [Azure CLI](../../key-vault/general/key-vault-recovery.md?tabs=azure-cli).
+
+- Purge protection can be turned on using [Azure CLI](../../key-vault/general/key-vault-recovery.md?tabs=azure-cli) or [PowerShell](../../key-vault/general/key-vault-recovery.md?tabs=azure-powershell). When purge protection is enabled, a vault or an object in the deleted state cannot be purged until the retention period has passed. The default retention period is 90 days, but is configurable from 7 to 90 days through the Azure portal.
+
+- Azure SQL requires soft-delete and purge protection to be enabled on the key vault containing the encryption key being used as the TDE Protector for the server or managed instance. This helps prevent the scenario of accidental or malicious key vault or key deletion that could leave the database in an *Inaccessible* state.
+
+- When configuring the TDE Protector on an existing server or during server creation, Azure SQL validates that the key vault being used has soft-delete and purge protection turned on. If they are not enabled, the TDE Protector setup fails with an error; enable soft-delete and purge protection on the key vault, and then retry the setup.
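As a sketch of the prerequisite above (the vault name is a placeholder; soft-delete is already on by default for new vaults):

```azurecli
# Sketch: enable purge protection on an existing key vault.
# Note: once enabled, purge protection cannot be turned off.
az keyvault update --name MyKeyVault --enable-purge-protection true
```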
-- When using firewall with AKV, you must enable option *Allow trusted Microsoft services to bypass the firewall*.

### Requirements for configuring TDE protector
The Azure Policy can be applied to the whole Azure subscription, or just within
For more information on Azure Policy, see [What is Azure Policy?](../../governance/policy/overview.md) and [Azure Policy definition structure](../../governance/policy/concepts/definition-structure.md). The following two built-in policies are supported for customer-managed TDE in Azure Policy:

-- SQL server should use customer-managed keys to encrypt data at rest
+- SQL servers should use customer-managed keys to encrypt data at rest
- SQL managed instances should use customer-managed keys to encrypt data at rest

The customer-managed TDE policy can be managed by going to the [Azure portal](https://portal.azure.com), and searching for the **Policy** service. Under **Definitions**, search for customer-managed key.
You may also want to check the following PowerShell sample scripts for the commo
- [Remove a transparent data encryption (TDE) protector for SQL Database](transparent-data-encryption-byok-remove-tde-protector.md) -- [Manage transparent data encryption in SQL Managed Instance with your own key using PowerShell](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)
+- [Manage transparent data encryption in SQL Managed Instance with your own key using PowerShell](../managed-instance/scripts/transparent-data-encryption-byok-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json)
azure-sql Troubleshoot Common Errors Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/troubleshoot-common-errors-issues.md
For more information on other out of memory errors and sample queries, see [Trou
| 40544 |20 |The database has reached its size quota. Partition or delete data, drop indexes, or consult the documentation for possible resolutions. For database scaling, see [Scale single database resources](single-database-scale.md) and [Scale elastic pool resources](elastic-pool-scale.md).| | 40549 |16 |Session is terminated because you have a long-running transaction. Try shortening your transaction. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).| | 40550 |16 |The session has been terminated because it has acquired too many locks. Try reading or modifying fewer rows in a single transaction. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).|
-| 40551 |16 |The session has been terminated because of excessive `TEMPDB` usage. Try modifying your query to reduce the temporary table space usage.<br/><br/>If you are using temporary objects, conserve space in the `TEMPDB` database by dropping temporary objects after they are no longer needed by the session. For more information on tempdb usage in SQL Database, see [Tempdb database in SQL Database](/sql/relational-databases/databases/tempdb-database#tempdb-database-in-sql-database).|
+| 40551 |16 |The session has been terminated because of excessive `TEMPDB` usage. Try modifying your query to reduce the temporary table space usage.<br/><br/>If you are using temporary objects, conserve space in the `TEMPDB` database by dropping temporary objects after they are no longer needed by the session. For more information on tempdb limits in SQL Database, see [Tempdb database in SQL Database](resource-limits-logical-server.md#tempdb-sizes).|
| 40552 |16 |The session has been terminated because of excessive transaction log space usage. Try modifying fewer rows in a single transaction. For information on batching, see [How to use batching to improve SQL Database application performance](../performance-improve-use-batching.md).<br/><br/>If you perform bulk inserts using the `bcp.exe` utility or the `System.Data.SqlClient.SqlBulkCopy` class, try using the `-b batchsize` or `BatchSize` options to limit the number of rows copied to the server in each transaction. If you are rebuilding an index with the `ALTER INDEX` statement, try using the `REBUILD WITH ONLINE = ON` option. For information on transaction log sizes for the vCore purchasing model, see: <br/>&bull; &nbsp;[vCore-based limits for single databases](resource-limits-vcore-single-databases.md)<br/>&bull; &nbsp;[vCore-based limits for elastic pools](resource-limits-vcore-elastic-pools.md)<br/>&bull; &nbsp;[Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md).| | 40553 |16 |The session has been terminated because of excessive memory usage. Try modifying your query to process fewer rows.<br/><br/>Reducing the number of `ORDER BY` and `GROUP BY` operations in your Transact-SQL code reduces the memory requirements of your query. For database scaling, see [Scale single database resources](single-database-scale.md) and [Scale elastic pool resources](elastic-pool-scale.md). For more information on out of memory errors and sample queries, see [Troubleshoot out of memory errors with Azure SQL Database](troubleshoot-memory-errors-issues.md).|
azure-sql Glossary Terms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/glossary-terms.md
Previously updated : 12/15/2021 Last updated : 02/02/2022 # Azure SQL glossary of terms [!INCLUDE[appliesto-asf](./includes/appliesto-asf.md)]
Last updated 12/15/2021
||DTU-based purchasing model|The [Database Transaction Unit (DTU)-based purchasing model](database/service-tiers-dtu.md) is based on a bundled measure of compute, storage, and I/O resources. Compute sizes are expressed in DTUs for single databases and in elastic database transaction units (eDTUs) for elastic pools. | ||vCore-based purchasing model (recommended)| A virtual core (vCore) represents a logical CPU. The [vCore-based purchasing model](database/service-tiers-vcore.md) offers greater control over the hardware configuration to better match compute and memory requirements of the workload, pricing discounts for [Azure Hybrid Benefit (AHB)](azure-hybrid-benefit.md) and [Reserved Instance (RI)](database/reserved-capacity-overview.md), more granular scaling, and greater transparency in hardware details. Newer capabilities (for example, hyperscale, serverless) are only available in the vCore model. | |Service tier|| The service tier defines the storage architecture, storage and I/O limits, and business continuity options. Options for service tiers vary by purchasing model. |
-||DTU-based service tiers | [Basic, standard, and premium service tiers](database/service-tiers-dtu.md#compare-the-dtu-based-service-tiers) are available in the DTU-based purchasing model.|
+||DTU-based service tiers | [Basic, standard, and premium service tiers](database/service-tiers-dtu.md#compare-service-tiers) are available in the DTU-based purchasing model.|
||vCore-based service tiers (recommended) |[General purpose, business critical, and hyperscale service tiers](database/service-tiers-sql-database-vcore.md#service-tiers) are available in the vCore-based purchasing model (recommended).| |Compute tier|| The compute tier determines whether resources are continuously available (provisioned) or autoscaled (serverless). Compute tier availability varies by purchasing model and service tier. Only the vCore purchasing model's general purpose service tier makes serverless compute available.| ||Provisioned compute|The [provisioned compute tier](database/service-tiers-sql-database-vcore.md#compute-tiers) provides a specific amount of compute resources that are continuously provisioned independent of workload activity. Under the provisioned compute tier, you are billed at a fixed price per hour.
Last updated 12/15/2021
|Compute size (service objective) ||Compute size (service objective) is the amount of CPU, memory, and storage resources available for a single database or elastic pool. Compute size also defines resource consumption limits, such as maximum IOPS, maximum log rate, etc. ||vCore-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier, compute tier, and hardware generation for your workload. When using an elastic pool, configure the reserved vCores for the pool, and optionally configure per-database settings. For sizing options and resource limits in the vCore-based purchasing model, see [vCore single databases](database/resource-limits-vcore-single-databases.md), and [vCore elastic pools](database/resource-limits-vcore-elastic-pools.md).| ||DTU-based sizing options| Configure the compute size for your database or elastic pool by selecting the appropriate service tier and selecting the maximum data size and number of DTUs. When using an elastic pool, configure the reserved eDTUs for the pool, and optionally configure per-database settings. For sizing options and resource limits in the DTU-based purchasing model, see [DTU single databases](database/resource-limits-dtu-single-databases.md) and [DTU elastic pools](database/resource-limits-dtu-elastic-pools.md).
+||||
## Azure SQL Managed Instance
Last updated 12/15/2021
|Compute|Provisioned compute| SQL Managed Instance provides a specific amount of [compute resources](managed-instance/service-tiers-managed-instance-vcore.md#compute) that are continuously provisioned independent of workload activity, and bills for the amount of compute provisioned at a fixed price per hour. | |Hardware generation|Available hardware configurations| SQL Managed Instance [hardware generations](managed-instance/service-tiers-managed-instance-vcore.md#hardware-generations) include standard-series (Gen5), premium-series, and memory optimized premium-series hardware generations. | |Compute size | vCore-based sizing options | Compute size (service objective) is the maximum amount of CPU, memory, and storage resources available for a single managed instance or instance pool. Configure the compute size for your managed instance by selecting the appropriate service tier and hardware generation for your workload. Learn about [resource limits for managed instances](managed-instance/resource-limits.md). |
+||||
## SQL Server on Azure VMs

|Context|Term|More information|
Last updated 12/15/2021
| | Security considerations | You can enable Microsoft Defender for SQL, integrate Azure Key Vault, control access, and secure connections to your SQL Server VM. Learn [security guidelines](virtual-machines/windows/security-considerations-best-practices.md) to establish secure access to SQL Server VMs. | | SQL IaaS Agent extension | | The [SQL IaaS Agent extension](virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md) (SqlIaasExtension) runs on SQL Server VMs to automate management and administration tasks. There's no extra cost associated with the extension. | | | Automated patching | [Automated Patching](virtual-machines/windows/automated-patching.md) establishes a maintenance window for a SQL Server VM when security updates will be automatically applied by the SQL IaaS Agent extension. Note that there may be other mechanisms for applying Automatic Updates. If you configure automated patching using the SQL IaaS Agent extension you should ensure that there are no other conflicting update schedules. |
-| | Automated backup | [Automated Backup v2](virtual-machines/windows/automated-backup.md) automatically configures Managed Backup to Microsoft Azure for all existing and new databases on a SQL Server VM running SQL Server 2016 or later Standard, Enterprise, or Developer editions. |
+| | Automated backup | [Automated Backup v2](virtual-machines/windows/automated-backup.md) automatically configures Managed Backup to Microsoft Azure for all existing and new databases on a SQL Server VM running SQL Server 2016 or later Standard, Enterprise, or Developer editions. |
+||||
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
Last updated 01/05/2022
# What's new in Azure SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqlmi.md)]
-This article summarizes the documentation changes associated with new features and improvements in the recent releases of [Azure SQL Managed Instance](https://azure.microsoft.com/updates/?product=sql-database&query=sql%20managed%20instance). To learn more about Azure SQL Managed Instance, see the [overview](sql-managed-instance-paas-overview.md).
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](../database/doc-changes-updates-release-notes-whats-new.md)
+> * [Azure SQL Managed Instance](../managed-instance/doc-changes-updates-release-notes-whats-new.md)
+This article summarizes the documentation changes associated with new features and improvements in the recent releases of [Azure SQL Managed Instance](https://azure.microsoft.com/updates/?product=sql-database&query=sql%20managed%20instance). To learn more about Azure SQL Managed Instance, see the [overview](sql-managed-instance-paas-overview.md).
-For Azure SQL Database, see [What's new](../database/doc-changes-updates-release-notes-whats-new.md).
## Preview
azure-sql Instance Create Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/instance-create-quickstart.md
If you don't have an Azure subscription, [create a free account](https://azure.m
| Setting | Suggested value | Description |
| | | -- |
-| **Service Tier** | Select one of the options. | Based on your scenario, select one of the following options: </br> <ul><li>**General Purpose**: for most production workloads, and the default option.</li><li>**Business Critical**: designed for low-latency workloads with high resiliency to failures and fast failovers.</li></ul><BR>For more information, see [Azure SQL Database and Azure SQL Managed Instance service tiers](../../azure-sql/database/service-tiers-general-purpose-business-critical.md) and review [Overview of Azure SQL Managed Instance resource limits](../../azure-sql/managed-instance/resource-limits.md).|
+| **Service Tier** | Select one of the options. | Based on your scenario, select one of the following options: </br> <ul><li>**General Purpose**: for most production workloads, and the default option.</li><li>**Business Critical**: designed for low-latency workloads with high resiliency to failures and fast failovers.</li></ul><BR>For more information, review [service tiers](service-tiers-managed-instance-vcore.md) and [resource limits](../../azure-sql/managed-instance/resource-limits.md).|
| **Hardware Generation** | Select one of the options. | The hardware generation generally defines the compute and memory limits and other characteristics that impact the performance of the workload. **Gen5** is the default.| | **vCore compute model** | Select an option. | vCores represent exact amount of compute resources that are always provisioned for your workload. **Eight vCores** is the default.| | **Storage in GB** | Select an option. | Storage size in GB, select based on expected data size. If migrating existing data from on-premises or on various cloud platforms, see [Migration overview: SQL Server to SQL Managed Instance](../../azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-overview.md).|
azure-sql Link Feature https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/link-feature.md
Previously updated : 01/19/2022 Last updated : 02/04/2022 # Link feature for Azure SQL Managed Instance (limited preview) [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
After a disastrous event, you can continue running your read-only workloads on S
To use the link feature, you will need: -- SQL Server 2019 Enterprise Edition with [CU13 (or above)](https://support.microsoft.com/topic/kb5005679-cumulative-update-13-for-sql-server-2019-5c1be850-460a-4be4-a569-fe11f0adc535) installed on-premises, or on an Azure VM.
+- SQL Server 2019 Enterprise Edition with [CU15 (or above)](https://support.microsoft.com/en-us/topic/kb5008996-cumulative-update-15-for-sql-server-2019-4b6a8ee9-1c61-482d-914f-36e429901fb6) installed on-premises, or on an Azure VM.
- Network connectivity between your SQL Server and managed instance is required. If your SQL Server is running on-premises, use a VPN link or ExpressRoute. If your SQL Server is running on an Azure VM, either deploy your VM to the same subnet as your managed instance, or use global VNet peering to connect two separate subnets.
- Azure SQL Managed Instance provisioned on any service tier.
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/resource-limits.md
Previously updated : 01/18/2022 Last updated : 02/02/2022 # Overview of Azure SQL Managed Instance resource limits [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](../database/resource-limits-logical-server.md)
+> * [Azure SQL Managed Instance](resource-limits.md)
+ This article provides an overview of the technical characteristics and resource limits for Azure SQL Managed Instance, and provides information about how to request an increase to these limits.

> [!NOTE]
-> For differences in supported features and T-SQL statements see [Feature differences](../database/features-comparison.md) and [T-SQL statement support](transact-sql-tsql-differences-sql-server.md). For general differences between service tiers for Azure SQL Database and SQL Managed Instance see [Service tier comparison](../database/service-tiers-general-purpose-business-critical.md#service-tier-comparison).
+> For differences in supported features and T-SQL statements see [Feature differences](../database/features-comparison.md) and [T-SQL statement support](transact-sql-tsql-differences-sql-server.md). For general differences between service tiers for Azure SQL Database and SQL Managed Instance review [General Purpose](../database/service-tier-general-purpose.md) and [Business Critical](../database/service-tier-business-critical.md) service tiers.
## Hardware generation characteristics
Hardware generations have different characteristics, as described in the followi
\* Dependent on [the number of vCores](#service-tier-characteristics).
+> [!NOTE]
+> If your business requires storage sizes greater than the available resource limits for Azure SQL Managed Instance, consider the Azure SQL Database [Hyperscale service tier](../database/service-tier-hyperscale.md).
++ ### Regional support for premium-series hardware generations (preview) Support for the premium-series hardware generations (public preview) is currently available only in these specific regions: <br>
The amount of in-memory OLTP space in [Business Critical](../database/service-ti
## Service tier characteristics
-SQL Managed Instance has two service tiers: [General Purpose](../database/service-tier-general-purpose.md) and [Business Critical](../database/service-tier-business-critical.md). These tiers provide [different capabilities](../database/service-tiers-general-purpose-business-critical.md), as described in the table below.
+SQL Managed Instance has two service tiers: [General Purpose](../database/service-tier-general-purpose.md) and [Business Critical](../database/service-tier-business-critical.md).
> [!Important]
-> Business Critical service-tier provides an additional built-in copy of the SQL Managed Instance (secondary replica) that can be used for read-only workload. If you can separate read-write queries and read-only/analytic/reporting queries, you are getting twice the vCores and memory for the same price. The secondary replica might lag a few seconds behind the primary instance, so it is designed to offload reporting/analytic workloads that don't need exact current state of data. In the table below, **read-only queries** are the queries that are executed on secondary replica.
+> The Business Critical service tier provides an additional built-in copy of the SQL Managed Instance (secondary replica) that can be used for read-only workload. If you can separate read-write queries and read-only/analytic/reporting queries, you are getting twice the vCores and memory for the same price. The secondary replica might lag a few seconds behind the primary instance, so it is designed to offload reporting/analytic workloads that don't need exact current state of data. In the table below, **read-only queries** are the queries that are executed on secondary replica.
| **Feature** | **General Purpose** | **Business Critical** |
| --- | --- | --- |
A few additional considerations:
Find more information about the [resource limits in SQL Managed Instance pools in this article](instance-pools-overview.md#resource-limitations).
+### Data and log storage
+
+The following factors affect the amount of storage used for data and log files, and apply to General Purpose and Business Critical tiers.
+
+- Each compute size supports a maximum data size, with a default of 16 GB. For more information on resource limits in Azure SQL Managed Instance, see [resource limits](resource-limits.md).
+- When you configure maximum data size, an additional 30 percent of storage is automatically added for log files.
+- You can select any maximum data size between 1 GB and the supported storage size maximum, in 1 GB increments.
+- In the General Purpose service tier, `tempdb` uses local SSD storage, and this storage cost is included in the vCore price.
+- In the Business Critical service tier, `tempdb` shares local SSD storage with data and log files, and `tempdb` storage cost is included in the vCore price.
+- The maximum storage size for a SQL Managed Instance must be specified in multiples of 32 GB.
+
+> [!IMPORTANT]
+> In the General Purpose and Business Critical tiers, you are charged for the maximum storage size configured for a managed instance.
+
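As an illustrative sketch of how the sizing rules above combine (the function name and the rounding model are assumptions for this example, not an official billing formula), total instance storage can be estimated from the configured maximum data size:

```javascript
// Illustrative sketch only -- not an official billing formula.
// Assumptions: log space is 30% of the configured maximum data size, and the
// instance storage size must round up to a multiple of 32 GB.
function estimateInstanceStorageGb(maxDataSizeGb) {
  const withLog = maxDataSizeGb * 1.3;   // data files + 30% for log files
  return Math.ceil(withLog / 32) * 32;   // round up to a 32 GB multiple
}
```

For example, a 256 GB maximum data size implies roughly 332.8 GB of data-plus-log space, which rounds up to a 352 GB instance storage size.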
+To monitor total consumed instance storage size for SQL Managed Instance, use the *storage_space_used_mb* [metric](../../azure-monitor/essentials/metrics-supported.md#microsoftsqlmanagedinstances). To monitor the current allocated and used storage size of individual data and log files in a database using T-SQL, use the [sys.database_files](/sql/relational-databases/system-catalog-views/sys-database-files-transact-sql) view and the [FILEPROPERTY(... , 'SpaceUsed')](/sql/t-sql/functions/fileproperty-transact-sql) function.
+
+> [!TIP]
+> Under some circumstances, you may need to shrink a database to reclaim unused space. For more information, see [Manage file space in Azure SQL Database](../database/file-space-manage.md).
+
+### Backups and storage
+
+Storage for database backups is allocated to support the [point-in-time restore (PITR)](../database/recovery-using-backups.md) and [long-term retention (LTR)](../database/long-term-retention-overview.md) capabilities of SQL Managed Instance. This storage is separate from data and log file storage, and is billed separately.
+
+- **PITR**: In General Purpose and Business Critical tiers, individual database backups are copied to [read-access geo-redundant (RA-GRS) storage](../../storage/common/geo-redundant-design.md) automatically. The storage size increases dynamically as new backups are created. The storage is used by full, differential, and transaction log backups. The storage consumption depends on the rate of change of the database and the retention period configured for backups. You can configure a separate retention period for each database between 0 to 35 days for SQL Managed Instance. A backup storage amount equal to the configured maximum data size is provided at no extra charge.
+- **LTR**: You also have the option to configure long-term retention of full backups for up to 10 years. If you set up an LTR policy, these backups are stored in RA-GRS storage automatically, but you can control how often the backups are copied. To meet different compliance requirements, you can select different retention periods for weekly, monthly, and/or yearly backups. The configuration you choose determines how much storage will be used for LTR backups. For more information, see [Long-term backup retention](../database/long-term-retention-overview.md).
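As a small illustrative sketch of the PITR free-amount rule described above (the function name and the simple subtraction model are assumptions for this example), billable backup storage is the consumed amount beyond the configured maximum data size:

```javascript
// Illustrative sketch only: PITR backup storage up to the configured maximum
// data size is provided at no extra charge; consumption beyond that is billed.
function billablePitrBackupGb(consumedBackupGb, maxDataSizeGb) {
  return Math.max(0, consumedBackupGb - maxDataSizeGb);
}
```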
+
### File IO characteristics in General Purpose tier

In the General Purpose service tier, every database file gets dedicated IOPS and throughput that depend on the file size. Larger files get more IOPS and throughput. IO characteristics of database files are shown in the following table:
azure-sql Service Tiers Managed Instance Vcore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/managed-instance/service-tiers-managed-instance-vcore.md
Title: vCore purchase model
+ Title: vCore purchasing model
description: The vCore purchasing model lets you independently scale compute and storage resources, match on-premises performance, and optimize price for Azure SQL Managed Instance.
Previously updated : 05/18/2021 Last updated : 02/02/2022
-# Azure SQL Managed Instance - Compute Hardware in the vCore Service Tier
+# vCore purchasing model - Azure SQL Managed Instance
[!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-This article reviews the vCore purchase model for [Azure SQL Managed Instance](sql-managed-instance-paas-overview.md). For more information on choosing between the vCore and DTU purchase models, see [Choose between the vCore and DTU purchasing models](../database/purchasing-models.md).
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](../database/service-tiers-sql-database-vcore.md)
+> * [Azure SQL Managed Instance](service-tiers-managed-instance-vcore.md)
-The virtual core (vCore) purchase model used by Azure SQL Managed Instance has following characteristics:
+This article reviews the [vCore purchasing model](../database/service-tiers-vcore.md) for [Azure SQL Managed Instance](sql-managed-instance-paas-overview.md).
+
+## Overview
++
+The virtual core (vCore) purchasing model used by Azure SQL Managed Instance provides the following benefits:
- Control over the hardware generation to better match the compute and memory requirements of the workload.
- Pricing discounts for [Azure Hybrid Benefit (AHB)](../azure-hybrid-benefit.md) and [Reserved Instance (RI)](../database/reserved-capacity-overview.md).
-- Greater transparency in the hardware details that power the compute, that facilitates planning for migrations from on-premises deployments.
-- [Reserved instance pricing](../database/reserved-capacity-overview.md) is only available for vCore purchase model.
+- Greater transparency in the hardware details that power compute, helping facilitate planning for migrations from on-premises deployments.
+- Higher scaling granularity with multiple compute sizes available.
+ ## <a id="compute-tiers"></a>Service tiers
-Service tier options in the vCore purchase model include General Purpose and Business Critical. The service tier generally defines the storage architecture, space and I/O limits, and business continuity options related to availability and disaster recovery.
+Service tier options in the vCore purchasing model include General Purpose and Business Critical. The service tier generally defines the storage architecture, space and I/O limits, and business continuity options related to availability and disaster recovery.
+
+For more details, review [resource limits](resource-limits.md).
-|**Use case**|**General Purpose**|**Business Critical**|
+|**Category**|**General Purpose**|**Business Critical**|
||||
-|Best for|Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. |Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance.|
-|Storage|Uses remote storage. 32 GB - 16 TB depending on number of cores |Uses local SSD storage. <BR>- **Standard-series (Gen5):** 32 GB - 4 TB <BR>- **Premium-series:** 32 GB - 5.5 TB <BR>- **Memory optimized premium-series:** 32 GB - 16 TB |
-|IOPS and throughput (approximate)|See [Overview Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md#service-tier-characteristics).|See [Overview Azure SQL Managed Instance resource limits](../managed-instance/resource-limits.md#service-tier-characteristics).|
-|Availability|1 replica, no read-scale replicas|4 replicas total, 1 [read-scale replica](../database/read-scale-out.md),<br/> 2 high availability replicas (HA)|
-|Backups|[Read-access geo-redundant storage (RA-GRS)](../../storage/common/geo-redundant-design.md), 1-35 days (7 days by default)|[RA-GRS](../../storage/common/geo-redundant-design.md), 1-35 days (7 days by default)|
-|In-memory|Not supported|Supported|
-||||
+|**Best for**|Most business workloads. Offers budget-oriented, balanced, and scalable compute and storage options. |Offers business applications the highest resilience to failures by using several isolated replicas, and provides the highest I/O performance.|
+|**Availability**|1 replica, no read-scale replicas|4 replicas total, 1 [read-scale replica](../database/read-scale-out.md),<br/> 2 high availability replicas (HA)|
+|**Read-only replicas**| 0 built-in <br> 0 - 4 using [geo-replication](../database/active-geo-replication-overview.md) | 1 built-in, included in price <br> 0 - 4 using [geo-replication](../database/active-geo-replication-overview.md) |
+|**Pricing/billing**| [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) is charged. <br/>IOPS is not charged.| [vCore, reserved storage, and backup storage](https://azure.microsoft.com/pricing/details/sql-database/managed/) is charged. <br/>IOPS is not charged.|
+|**Discount models**| [Reserved instances](../database/reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|[Reserved instances](../database/reserved-capacity-overview.md)<br/>[Azure Hybrid Benefit](../azure-hybrid-benefit.md) (not available on dev/test subscriptions)<br/>[Enterprise](https://azure.microsoft.com/offers/ms-azr-0148p/) and [Pay-As-You-Go](https://azure.microsoft.com/offers/ms-azr-0023p/) Dev/Test subscriptions|
+|||
+
+> [!NOTE]
+> For more information on the Service Level Agreement (SLA), see [SLA for Azure SQL Managed Instance](https://azure.microsoft.com/support/legal/sla/azure-sql-sql-managed-instance/).
### Choosing a service tier
Premium-series and memory optimized premium-series hardware is in preview, and h
- For pricing details, see
  - [Azure SQL Managed Instance single instance pricing page](https://azure.microsoft.com/pricing/details/azure-sql-managed-instance/single/)
  - [Azure SQL Managed Instance pools pricing page](https://azure.microsoft.com/pricing/details/azure-sql-managed-instance/pools/)
-- For details about the specific compute and storage sizes available in the general purpose and business critical service tiers, see [vCore-based resource limits for Azure SQL Managed Instance](resource-limits.md).
+- For details about the specific compute and storage sizes available in the General Purpose and Business Critical service tiers, see [vCore-based resource limits for Azure SQL Managed Instance](resource-limits.md).
azure-sql Multi Model Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/multi-model-features.md
ms.devlang: --++ Last updated 12/17/2018
backup Backup Azure Database Postgresql Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-azure-database-postgresql-support-matrix.md
East US, East US 2, Central US, South Central US, West US, West US 2, West Centr
- Recommended limit for the maximum database size is 400 GB.
- Cross-region backup isn't supported. Therefore, you can't back up an Azure PostgreSQL server to a vault in another region. Similarly, you can only restore a backup to a server within the same region as the vault. However, we support cross-subscription backup and restore.
-- Only the data is recovered during restore; "roles" aren't restored.
-- We recommend you run the solution only on your test environment.
+- Only the data is recovered during restore; _roles_ aren't restored.
## Next steps
cognitive-services Create Project https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/create-project.md
Previously updated : 11/02/2021 Last updated : 02/03/2022
Use this article to learn how to prepare the requirements for using custom NER.
## Prerequisites
-An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
+* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services).
You should have an idea of the [project schema](design-schema.md) you will use for your data.
-Use this article to learn how to prepare the requirements for using custom text classification.
-
## Azure resources

Before you start using custom NER, you will need an Azure Language resource. We recommend the steps in the [quickstart](../quickstart.md) for creating one in the Azure portal. Creating a resource in the Azure portal lets you create an Azure storage account at the same time, with all of the required permissions pre-configured. You can also read further in the article to learn how to use a pre-existing resource, and configure it to work with custom NER.
If it's your first time logging in, you'll see a window appear in [Language Stud
To use custom NER, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already.
-Next you'll need to assign the [correct roles](#roles-for-your-storage-account) for the storage account to connect it to your Language resource.
+Next you'll need to assign the [correct roles](#required-roles-for-your-storage-account) for the storage account to connect it to your Language resource.
# [Azure PowerShell](#tab/powershell)
You can use an existing Language resource to get started with custom NER as long
|Pricing tier | Make sure your existing resource is in the Standard (**S**) pricing tier. Only this pricing tier is supported. If your resource doesn't use this pricing tier, you will need to create a new resource. |
|Managed identity | Make sure that the resource-managed identity setting is enabled. Otherwise, read the next section. |
-To use custom NER, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already.
+To use custom NER, you'll need to [create an Azure storage account](../../../../storage/common/storage-account-create.md) if you don't have one already, and assign the [correct roles](#required-roles-for-your-storage-account) to connect it to your Language resource.
-Next you'll need to assign the [correct roles](#roles-for-your-storage-account) for the storage account to connect it to your Language resource.
+> [!NOTE]
+> Custom NER does not currently support Data Lake Storage Gen 2.
-## Roles for your Azure Language resource
+## Required roles for Azure Language resources
-You should have the **owner** or **contributor** role assigned on your Azure Language resource.
+To access and use custom NER projects, your account must have one of the following roles in your Language resource. If you have contributors who need access to your projects, they will also need one of these roles to access the Language resource's managed identity:
+* *owner*
+* *contributor*
-## Enable identity management for your resource
+### Enable managed identities for your Language resource
Your Language resource must have identity management, which can be enabled either using the Azure portal or from Language Studio. To enable it using [Language Studio](https://aka.ms/languageStudio):

1. Click the settings icon in the top right corner of the screen
2. Select **Resources**
3. Select **Managed Identity** for your Azure resource.
-## Roles for your storage account
+### Add roles to your Language resource
+
+After you've enabled managed identities for your resource, add the appropriate owner or contributor role assignments for your account, and your contributors' Azure accounts:
+
+1. Go to your Language resource in the [Azure portal](https://ms.portal.azure.com/).
+2. Select **Access Control (IAM)** in the left navigation menu.
+3. Select **Add** then **Add Role Assignments**, and choose the **Owner** or **Contributor** role. You can search for user names in the **Select** field.
+
+## Required roles for your storage account
-Your Azure blob storage account must have the below roles:
+Your Language resource must have the following roles assigned within your Azure blob storage account:
-* Your resource has the **owner** or **contributor** role on the storage account.
-* Your resource has the **Storage blob data owner** or **Storage blob data contributor** role on the storage account.
-* Your resource has the **Reader** role on the storage account.
+* *owner* or *contributor*, and
+* *storage blob data owner* or *storage blob data contributor*, and
+* *reader*
+
+### Add roles to your storage account
To set proper roles on your storage account:

1. Go to your storage account page in the [Azure portal](https://ms.portal.azure.com/).
2. Select **Access Control (IAM)** in the left navigation menu.
-3. Select **Add** to **Add Role Assignments**, and choose the **Owner** or **Contributor** role. You can search for user names in the **Select** field.
+3. Select **Add** then **Add Role Assignments**, and choose the appropriate role for your Language resource. You can search for your resource in the **Select** field. Repeat this for all roles.
[!INCLUDE [Storage connection note](../../custom-classification/includes/storage-account-note.md)]
+For information on authorizing access to your Azure blob storage account and data, see [Authorize access to data in Azure storage](/azure/storage/common/authorize-data-access?toc=/azure/storage/blobs/toc.json).
+
## Prepare training data

* As a prerequisite for creating a custom NER project, your training data needs to be uploaded to a blob container in your storage account. You can create and upload training files from Azure directly or by using the Azure Storage Explorer tool. Using the Azure Storage Explorer tool allows you to upload more data in less time.
cognitive-services Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/language-service/custom-named-entity-recognition/tutorials/cognitive-search.md
In this tutorial, you learn how to:
3. Select **Create new project** from the top menu in your projects page. Creating a project will let you tag data, train, evaluate, improve, and deploy your models.
-4. If you've created your resource using the steps above in this [guide](../how-to/create-project.md#azure-resources), the **Connect storage** step will be completed already. If not, you need to assign [roles for your storage account](../how-to/create-project.md#roles-for-your-storage-account) before connecting it to your resource
+4. If you've created your resource using the steps above in this [guide](../how-to/create-project.md#azure-resources), the **Connect storage** step will be completed already. If not, you need to assign [roles for your storage account](../how-to/create-project.md#required-roles-for-your-storage-account) before connecting it to your resource.
5. Enter project information, including a name, description, and the language of the files in your project. You won't be able to change the name of your project later.

>[!TIP]
communication-services Credentials Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/credentials-best-practices.md
+
+ Title: Azure Communication Services - Credentials best practices
+description: Learn more about the best practices for managing User Access Tokens in SDKs
+++++ Last updated : 01/30/2022++
+#Customer intent: As a developer, I want to learn how to correctly handle Credential objects so that I can build applications that run efficiently.
++
+# Credentials in Communication SDKs
+
+This article provides best practices for managing [User Access Tokens](./authentication.md#user-access-tokens) in Azure Communication Services SDKs. Following this guidance will help you optimize the resources used by your application and reduce the number of roundtrips to the Azure Communication Identity API.
+
+## Communication Token Credential
+
+Communication Token Credential (Credential) is an authentication primitive that wraps User Access Tokens. It's used to authenticate users in Communication Services, such as Chat or Calling. Additionally, it provides built-in token refreshing functionality for the convenience of the developer.
+
+## Initialization
+
+Depending on your scenario, you may want to initialize the Credential with a [static token](#static-token) or a [callback function](#callback-function) returning tokens.
+No matter which method you choose, the tokens you supply to the Credential are obtained via the Azure Communication Identity API.
+
+### Static token
+
+For short-lived clients, initialize the Credential with a static token. This approach is suitable for scenarios such as sending one-off Chat messages or time-limited Calling sessions.
+
+```javascript
+const tokenCredential = new AzureCommunicationTokenCredential("<user_access_token>");
+```
+
+### Callback function
+
+For long-lived clients, initialize the Credential with a callback function that ensures a continuous authentication state during communications. This approach is suitable, for example, for long Calling sessions.
+
+```javascript
+const tokenCredential = new AzureCommunicationTokenCredential({
+ tokenRefresher: async (abortSignal) => fetchTokenFromMyServerForUser(abortSignal, "<user_name>")
+ });
+```
+
+## Token refreshing
+
+To correctly implement the token refresher callback, the code must return a string containing a valid JSON Web Token (JWT). The returned token must remain valid (its expiration date set in the future) at all times. Some platforms, such as JavaScript and .NET, offer a way to abort the refresh operation and pass an `AbortSignal` or `CancellationToken` to your function. It's recommended to accept these objects and either use them or pass them along.
+
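As a sanity check on this contract during development, you can decode the token's `exp` claim before returning it. This is a minimal sketch assuming a standard three-part JWT payload; `isTokenValid` is a hypothetical helper, not part of the SDK:

```javascript
// Minimal sketch: decode the JWT payload (assumes a standard three-part JWT)
// and check that the `exp` claim lies in the future.
function isTokenValid(token) {
  const payloadBase64 = token.split('.')[1];
  const payload = JSON.parse(Buffer.from(payloadBase64, 'base64').toString('utf8'));
  return payload.exp * 1000 > Date.now(); // `exp` is in seconds since the epoch
}
```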
+### Example 1: Refresh token for a Communication User
+
+Let's assume we have a Node.js application built on Express with a `/getToken` endpoint that fetches a new, valid token for a user specified by name.
+
+```javascript
+app.post('/getToken', async (req, res) => {
+ // Custom logic to determine the communication user id
+ let userId = await getCommunicationUserIdFromDb(req.body.username);
+ // Get a fresh token
+ const identityClient = new CommunicationIdentityClient("<COMMUNICATION_SERVICES_CONNECTION_STRING>");
+ let communicationIdentityToken = await identityClient.getToken({ communicationUserId: userId }, ["chat", "voip"]);
+ res.json({ communicationIdentityToken: communicationIdentityToken.token });
+});
+```
+
+Next, we need to implement a token refresher callback in the client application, properly utilizing the `AbortSignal` and returning an unwrapped JWT string.
+
+```javascript
+const fetchTokenFromMyServerForUser = async function (abortSignal, username) {
+ const response = await fetch(`${HOST_URI}/getToken`,
+ {
+ method: "POST",
+ body: JSON.stringify({ username: username }),
+ signal: abortSignal,
+ headers: { 'Content-Type': 'application/json' }
+ });
+
+    if (response.ok) {
+        const data = await response.json();
+        return data.communicationIdentityToken;
+    }
+    // Throwing ensures the refresher never resolves with an invalid token
+    throw new Error(`Failed to fetch the token: ${response.status}`);
+};
+```
+
+### Example 2: Refresh token for a Teams User
+
+Let's assume we have a Node.js application built on Express with a `/getTokenForTeamsUser` endpoint that exchanges an Azure Active Directory (Azure AD) access token of a Teams user for a new Communication Identity access token with a matching expiration time.
+
+```javascript
+app.post('/getTokenForTeamsUser', async (req, res) => {
+ const identityClient = new CommunicationIdentityClient("<COMMUNICATION_SERVICES_CONNECTION_STRING>");
+ let communicationIdentityToken = await identityClient.getTokenForTeamsUser(req.body.teamsToken);
+ res.json({ communicationIdentityToken: communicationIdentityToken.token });
+});
+```
+
+Next, we need to implement a token refresher callback in the client application, whose responsibility will be to:
+
+1. Refresh the Azure AD access token of the Teams User
+1. Exchange the Azure AD access token of the Teams User for a Communication Identity access token
+
+```javascript
+const fetchTokenFromMyServerForUser = async function (abortSignal, username) {
+ // 1. Refresh the Azure AD access token of the Teams User
+ let teamsTokenResponse = await refreshAadToken(abortSignal, username);
+
+ // 2. Exchange the Azure AD access token of the Teams User for a Communication Identity access token
+ const response = await fetch(`${HOST_URI}/getTokenForTeamsUser`,
+ {
+ method: "POST",
+ body: JSON.stringify({ teamsToken: teamsTokenResponse.accessToken }),
+ signal: abortSignal,
+ headers: { 'Content-Type': 'application/json' }
+ });
+
+  if (response.ok) {
+    const data = await response.json();
+    return data.communicationIdentityToken;
+  }
+  // Throwing ensures the refresher never resolves with an invalid token
+  throw new Error(`Failed to fetch the token: ${response.status}`);
+}
+```
+
+In this example, we use the Microsoft Authentication Library (MSAL) to refresh the Azure AD access token. Following the guide to [acquire an Azure AD token to call an API](../../active-directory/develop/scenario-spa-acquire-token.md), we first try to obtain the token without the user's interaction. If that's not possible, we trigger one of the interactive flows.
+
+```javascript
+const refreshAadToken = async function (abortSignal, username) {
+ if (abortSignal.aborted === true) throw new Error("Operation canceled");
+
+ // MSAL.js v2 exposes several account APIs; the logic to determine which account to use is the responsibility of the developer.
+ // In this case, we'll use an account from the cache.
+ let account = (await publicClientApplication.getTokenCache().getAllAccounts()).find(u => u.username === username);
+
+ const renewRequest = {
+ scopes: ["https://auth.msft.communication.azure.com/Teams.ManageCalls"],
+ account: account,
+    forceRefresh: false // set to true to bypass the token cache
+ };
+ let tokenResponse = null;
+ // Try to get the token silently without the user's interaction
+ await publicClientApplication.acquireTokenSilent(renewRequest).then(renewResponse => {
+ tokenResponse = renewResponse;
+ }).catch(async (error) => {
+ // In case of an InteractionRequired error, send the same request in an interactive call
+ if (error instanceof InteractionRequiredAuthError) {
+ // You can choose the popup or redirect experience (`acquireTokenPopup` or `acquireTokenRedirect` respectively)
+ publicClientApplication.acquireTokenPopup(renewRequest).then(function (renewInteractiveResponse) {
+ tokenResponse = renewInteractiveResponse;
+ }).catch(function (interactiveError) {
+ console.log(interactiveError);
+ });
+ }
+ });
+ return tokenResponse;
+}
+```
+
+## Initial token
+
+To further optimize your code, you can fetch the token at the application's startup and pass it to the Credential directly. Providing an initial token will skip the first call to the refresher callback function while preserving all subsequent calls to it.
+
+```javascript
+const tokenCredential = new AzureCommunicationTokenCredential({
+ tokenRefresher: async () => fetchTokenFromMyServerForUser("<user_id>"),
+ token: "<initial_token>"
+ });
+```
+
+## Proactive token refreshing
+
+Use proactive refreshing to eliminate delays when a token is fetched on demand. With proactive refreshing enabled, the Credential refreshes the token in the background toward the end of its lifetime. Starting 10 minutes before the token expires, the Credential triggers the refresher callback with increasing frequency until it succeeds and retrieves a token with long enough validity.
+
+```javascript
+const tokenCredential = new AzureCommunicationTokenCredential({
+ tokenRefresher: async () => fetchTokenFromMyServerForUser("<user_id>"),
+ refreshProactively: true
+ });
+```
+
+If you want to cancel scheduled refresh tasks, [dispose](#clean-up-resources) of the Credential object.
+
+### Proactively refresh token for a Teams User
+
+To minimize the number of roundtrips to the Azure Communication Identity API, make sure the Azure AD token you're passing for an [exchange](../quickstarts/manage-teams-identity.md#step-3-exchange-the-azure-ad-access-token-of-the-teams-user-for-a-communication-identity-access-token) has long enough validity (> 10 minutes). If MSAL returns a cached token with a shorter validity, you have the following options to bypass the cache:
+
+1. Refresh the token forcibly
+1. Increase the MSAL's token renewal window to more than 10 minutes
+
+# [JavaScript](#tab/javascript)
+
+Option 1: Trigger the token acquisition flow with [`AuthenticationParameters.forceRefresh`](../../active-directory/develop/msal-js-pass-custom-state-authentication-request.md) set to `true`.
+
+```javascript
+// Extend the `refreshAadToken` function
+const refreshAadToken = async function (abortSignal, username) {
+
+ // ... existing refresh logic
+
+ // Make sure the token has at least 10-minute lifetime and if not, force-renew it
+ if (tokenResponse.expiresOn < (Date.now() + (10 * 60 * 1000))) {
+ const renewRequest = {
+ scopes: ["https://auth.msft.communication.azure.com/Teams.ManageCalls"],
+ account: account,
+ forceRefresh: true // Force-refresh the token
+ };
+
+ await publicClientApplication.acquireTokenSilent(renewRequest).then(renewResponse => {
+ tokenResponse = renewResponse;
+ });
+ }
+
+    return tokenResponse;
+}
+```
+
+Option 2: Initialize the MSAL authentication context by instantiating a `PublicClientApplication` with a custom [`SystemOptions.tokenRenewalOffsetSeconds`](https://azuread.github.io/microsoft-authentication-library-for-js/ref/modules/_azure_msal_common.html#systemoptions-1).
+
+```javascript
+const publicClientApplication = new PublicClientApplication({
+ system: {
+ tokenRenewalOffsetSeconds: 900 // 15 minutes (by default 5 minutes)
+    }
+});
+```
+++
+## Cancel refreshing
+
+For the Communication clients to be able to cancel ongoing refresh tasks, it's necessary to pass a cancellation object to the refresher callback.
+*Note that this pattern applies only to JavaScript and .NET.*
+
+```javascript
+var controller = new AbortController();
+var signal = controller.signal;
+
+var joinChatBtn = document.querySelector('.joinChat');
+var leaveChatBtn = document.querySelector('.leaveChat');
+
+joinChatBtn.addEventListener('click', function() {
+ // Wrong:
+ const tokenCredentialWrong = new AzureCommunicationTokenCredential({
+ tokenRefresher: async () => fetchTokenFromMyServerForUser("<user_name>")
+ });
+
+ // Correct: Pass abortSignal through the arrow function
+ const tokenCredential = new AzureCommunicationTokenCredential({
+ tokenRefresher: async (abortSignal) => fetchTokenFromMyServerForUser(abortSignal, "<user_name>")
+ });
+
+ // ChatClient is now able to abort token refresh tasks
+ const chatClient = new ChatClient("<endpoint-url>", tokenCredential);
+
+ // ...
+});
+
+leaveChatBtn.addEventListener('click', function() {
+ controller.abort();
+ console.log('Leaving chat...');
+});
+```
+
+### Clean up resources
+
+Communication Services applications should dispose of the Credential instance when it's no longer needed. Disposing of the credential is also the recommended way to cancel scheduled refresh actions when proactive refreshing is enabled.
+
+Call the `.dispose()` function.
+
+```javascript
+const tokenCredential = new AzureCommunicationTokenCredential("<token>");
+// Use the credential for Calling or Chat
+const chatClient = new ChatClient("<endpoint-url>", tokenCredential);
+// ...
+tokenCredential.dispose()
+```
+++
+## Next steps
+
+In this article, you learned how to:
+
+> [!div class="checklist"]
+> * Correctly initialize and dispose of a Credential object
+> * Implement a token refresher callback
+> * Optimize your token refreshing logic
+
+To learn more, you may want to explore the following quickstart guides:
+
+* [Create and manage access tokens](../quickstarts/access-tokens.md)
+* [Manage access tokens for Teams users](../quickstarts/manage-teams-identity.md)
communication-services Direct Routing Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony/direct-routing-infrastructure.md
The infrastructure requirements for the supported SBCs, domains, and other netwo
|Azure subscription|An Azure subscription that you use to create Communication Services resource, and the configuration and connection to the SBC.| |Communication Services Access Token|To make calls, you need a valid Access Token with `voip` scope. See [Access Tokens](../identity-model.md#access-tokens)| |Public IP address for the SBC|A public IP address that can be used to connect to the SBC. Based on the type of SBC, the SBC can use NAT.|
-|Fully Qualified Domain Name (FQDN) for the SBC|An FQDN for the SBC, where the domain portion of the FQDN doesn't match registered domains in your Microsoft 365 or Office 365 organization. For more information, see [SBC certificates and domain names](#sbc-certificates-and-domain-names).|
+|Fully Qualified Domain Name (FQDN) for the SBC|For more information, see [SBC certificates and domain names](#sbc-certificates-and-domain-names).|
|Public DNS entry for the SBC |A public DNS entry mapping the SBC FQDN to the public IP address. | |Public trusted certificate for the SBC |A certificate for the SBC to be used for all communication with Azure direct routing. For more information, see [SBC certificates and domain names](#sbc-certificates-and-domain-names).| |Firewall IP addresses and ports for SIP signaling and media |The SBC communicates to the following services in the cloud:<br/><br/>SIP Proxy, which handles the signaling<br/>Media Processor, which handles media<br/><br/>These two services have separate IP addresses in Microsoft Cloud, described later in this document.
communication-services Direct Routing Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/concepts/telephony/direct-routing-provisioning.md
For information about whether Azure Communication Services direct routing is the
1. In the left navigation, select Direct routing under Voice Calling - PSTN and then select Configure from the Session Border Controller tab. 1. Enter a fully qualified domain name and signaling port for the SBC.
-If you are using Office 365, make sure the domain part of the SBC's FQDN is different from the domain you registered in Office 365 admin portal under Domains.
-- For example, if `contoso.com` is a registered domain in O365, you cannot use `sbc.contoso.com` for Communication Services. But you can use an upper-level domain if one does not exist in O365: you can create an `acs.contoso.com` domain and use FQDN `sbc.acs.contoso.com` as an SBC name. - SBC certificate must match the name; wildcard certificates are supported.-- The *.onmicrosoft.com domain cannot be used for the FQDN of the SBC.
+- The *.onmicrosoft.com domain canΓÇÖt be used for the FQDN of the SBC.
For the full list of requirements, refer to [Azure direct routing infrastructure requirements](./direct-routing-infrastructure.md). :::image type="content" source="../media/direct-routing-provisioning/add-session-border-controller.png" alt-text="Adding Session Border Controller.":::-- When you are done, click Next.
+- When you're done, select Next.
If everything is set up correctly, you should see an exchange of OPTIONS messages between Microsoft and your Session Border Controller; use your SBC monitoring/logs to validate the connection. ## Voice routing considerations Azure Communication Services direct routing has a routing mechanism that allows a call to be sent to a specific Session Border Controller (SBC) based on the called number pattern.
-When you add a direct routing configuration to a resource, all calls made from this resource's instances (identities) will try a direct routing trunk first. The routing is based on a dialed number and a match in voice routes configured for the resource. If there is a match, the call goes through the direct routing trunk. If there is no match, the next step is to process the alternateCallerId parameter of callAgent.startCall method. If the resource is enabled for Voice Calling (PSTN) and has at least one number purchased from Microsoft, and if alternateCallerId matches one of a purchased number for the resource, the call is routed through the Voice Calling (PSTN) using Microsoft infrastructure. If alternateCallerId parameter does not match any of the purchased numbers, the call will fail. The diagram below demonstrates the Azure Communication Services voice routing logic.
+When you add a direct routing configuration to a resource, all calls made from this resource's instances (identities) will try a direct routing trunk first. The routing is based on a dialed number and a match in voice routes configured for the resource. If there's a match, the call goes through the direct routing trunk. If there's no match, the next step is to process the `alternateCallerId` parameter of the `callAgent.startCall` method. If the resource is enabled for Voice Calling (PSTN) and has at least one number purchased from Microsoft, the `alternateCallerId` is checked. If the `alternateCallerId` matches one of the purchased numbers for the resource, the call is routed through Voice Calling (PSTN) using Microsoft infrastructure. If the `alternateCallerId` parameter doesn't match any of the purchased numbers, the call will fail. The diagram below demonstrates the Azure Communication Services voice routing logic.
:::image type="content" source="../media/direct-routing-provisioning/voice-routing-diagram.png" alt-text="Communication Services outgoing voice routing.":::
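The routing decision described above can be sketched roughly as follows. This is illustrative JavaScript only; the route and resource shapes are assumptions for the sketch, not an actual SDK API:

```javascript
// Illustrative sketch of the voice routing logic described above.
// voiceRoutes: [{ pattern: "...", sbcs: ["..."] }]; purchasedNumbers: ["+1425..."]
function routeCall(dialedNumber, voiceRoutes, purchasedNumbers, alternateCallerId) {
  // 1. Try direct routing: first voice route whose pattern matches the dialed number.
  const match = voiceRoutes.find((route) => new RegExp(route.pattern).test(dialedNumber));
  if (match) {
    return { via: "direct-routing", sbcs: match.sbcs };
  }
  // 2. No match: fall back to Voice Calling (PSTN) if alternateCallerId
  //    is one of the numbers purchased for the resource.
  if (alternateCallerId && purchasedNumbers.includes(alternateCallerId)) {
    return { via: "pstn" };
  }
  // 3. Otherwise, the call fails.
  return { via: "failed" };
}
```

For example, with a single route on `^\+1(425|206)(\d{7})$`, a call to a `+1425` number goes to direct routing, while any other number falls back to PSTN only when a matching purchased `alternateCallerId` is supplied.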
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, and then created a second route with the same pattern with `sbc3.contoso.biz` and `sbc4.contoso.biz`. In this case, when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If both sbc1 and sbc2 are unavailable, the route with lower priority will be tried (`sbc3.contoso.biz` and `sbc4.contoso.biz`). If none of the SBCs of the second route are available, the call is dropped. ### Three routes example:
-If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, and then created a second route with the same pattern with `sbc3.contoso.biz` and `sbc4.contoso.biz`, and created a third route with `^+1(\d[10])$` with `sbc5.contoso.biz`. In this case, when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If both sbc1 nor sbc2 are unavailable, the route with lower priority will be tried (`sbc3.contoso.biz` and `sbc4.contoso.biz`). If none of the SBCs of a second route are available, the third route will be tried; if sbc5 is also not available, the call is dropped. Also, if a user dials `+1 321 XXX XX XX`, the call goes to `sbc5.contoso.biz`, and it is not available, the call is dropped.
+Suppose you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added `sbc1.contoso.biz` and `sbc2.contoso.biz` to it, then created a second route with the same pattern with `sbc3.contoso.biz` and `sbc4.contoso.biz`, and created a third route with `^\+1(\d{10})$` with `sbc5.contoso.biz`. In this case, when the user makes a call to `+1 425 XXX XX XX` or `+1 206 XXX XX XX`, the call is first routed to SBC `sbc1.contoso.biz` or `sbc2.contoso.biz`. If both sbc1 and sbc2 are unavailable, the route with lower priority will be tried (`sbc3.contoso.biz` and `sbc4.contoso.biz`). If none of the SBCs of the second route are available, the third route will be tried. If sbc5 is also not available, the call is dropped. Also, if a user dials `+1 321 XXX XX XX`, the call goes to `sbc5.contoso.biz`; if it isn't available, the call is dropped.
> [!NOTE] > Failover to the next SBC in voice routing works only for response codes 408, 503, and 504.
If you created one voice route with a pattern `^\+1(425|206)(\d{7})$` and added
Give your Voice Route a name, specify the number pattern using regular expressions, and select SBC for that pattern. Here are some examples of basic regular expressions:-- `^\+\d+$` - matches a telephone number with one or more digits that starts with a plus
+- `^\+\d+$` - matches a telephone number that starts with a plus followed by one or more digits
- `^\+1(\d{10})$` - matches a telephone number with ten digits after a `+1` - `^\+1(425|206)(\d{7})$` - matches a telephone number that starts with `+1425` or with `+1206` followed by seven digits - `^\+0?1234$` - matches both `+01234` and `+1234` telephone numbers. For more information about regular expressions, see [.NET regular expressions overview](/dotnet/standard/base-types/regular-expressions).
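These patterns can be sanity-checked in any JavaScript runtime before you save a route; for example (the sample numbers below are made up):

```javascript
// Quick checks of the example route patterns against made-up numbers.
const anyPlusNumber = /^\+\d+$/;
console.log(anyPlusNumber.test("+14255550100")); // true
console.log(anyPlusNumber.test("14255550100"));  // false: no leading plus

const seattleArea = /^\+1(425|206)(\d{7})$/;
console.log(seattleArea.test("+14255550100"));   // true: +1425 plus seven digits
console.log(seattleArea.test("+13215550100"));   // false: area code isn't 425/206

const optionalZero = /^\+0?1234$/;
console.log(optionalZero.test("+01234"));        // true
console.log(optionalZero.test("+1234"));         // true: the 0 is optional
```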
-You can select multiple SBCs for a single pattern. In such a case, the routing algorithm will choose them in random order. You may also specify the exact number pattern more than once. The higher row will have higher priority, and if all SBCs associated with that row are not available next row will be selected. This way, you create complex routing scenarios.
+You can select multiple SBCs for a single pattern. In such a case, the routing algorithm will choose them in random order. You may also specify the exact number pattern more than once. The higher row will have higher priority, and if all SBCs associated with that row aren't available, the next row will be selected. This way, you create complex routing scenarios.
## Delete direct routing configuration
communication-services Get Started With Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/communication-services/quickstarts/voice-video-calling/get-started-with-closed-captions.md
+
+ Title: Quickstart - Add closed captions to your app
+
+description: In this quickstart, you'll learn how to add closed captions to your existing calling app using Azure Communication Services.
++ Last updated : 02/02/2022+++
+zone_pivot_groups: acs-plat-web-ios-android
+++
+# QuickStart: Add closed captions to your calling app
+++++++
+## Clean up resources
+If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../create-communication-resource.md?pivots=platform-azp&tabs=windows#clean-up-resources).
+
+## Next steps
+For more information, see the following articles:
+
+- Check out our [web calling sample](../../samples/web-calling-sample.md)
+- Learn about [Calling SDK capabilities](./getting-started-with-calling.md?pivots=platform-web)
+- Learn more about [how calling works](../../concepts/voice-video-calling/about-call-types.md)
container-registry Zone Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/container-registry/zone-redundancy.md
description: Learn about enabling zone redundancy in Azure Container Registry. C
Last updated 09/13/2021 + # Enable zone redundancy in Azure Container Registry for resiliency and high availability
In addition to [geo-replication](container-registry-geo-replication.md), which r
This article shows how to set up a zone-redundant container registry or replica by using the Azure CLI, Azure portal, or Azure Resource Manager template.
-Zone redundancy is a **preview** feature of the Premium container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md).
+Zone redundancy is a feature of the Premium container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md).
-## Preview limitations
+## Regional support
-* Currently supported in the following regions:
+* ACR Availability Zones are supported in the following regions:
|Americas |Europe |Africa |Asia Pacific | ||||| |Brazil South<br/>Canada Central<br/>Central US<br/>East US<br/>East US 2<br/>South Central US<br/>US Government Virginia<br/>West US 2<br/>West US 3 |France Central<br/>Germany West Central<br/>North Europe<br/>Norway East<br/>West Europe<br/>UK South |South Africa North<br/> |Australia East<br/>Central India<br/>Japan East<br/>Korea Central<br/> | * Region conversions to availability zones aren't currently supported. To enable availability zone support in a region, the registry must either be created in the desired region, with availability zone support enabled, or a replicated region must be added with availability zone support enabled.
+* A registry with an AZ-enabled stamp creates a home region replication with an AZ-enabled stamp by default. The AZ stamp can't be disabled once it's enabled.
+* The home region replication represents the home region registry. It lets you view and manage the availability zone properties, and it can't be deleted.
+* Availability zone support is per region. Once the replications are created, their states can't be changed, except by deleting and re-creating the replications.
* Zone redundancy can't be disabled in a region. * [ACR Tasks](container-registry-tasks-overview.md) doesn't yet support availability zones. + ## About zone redundancy Use Azure [availability zones](../availability-zones/az-overview.md) to create a resilient and high availability Azure container registry within an Azure region. For example, organizations can set up a zone-redundant Azure container registry with other [supported Azure resources](../availability-zones/az-region.md) to meet data residency or other compliance requirements, while providing high availability within a region.
cosmos-db How To Setup Cmk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/how-to-setup-cmk.md
description: Learn how to configure customer-managed keys for your Azure Cosmos
Previously updated : 10/15/2021 Last updated : 02/03/2022
You must store customer-managed keys in [Azure Key Vault](../key-vault/general/o
## Configure your Azure Key Vault instance
+> [!IMPORTANT]
+> Your Azure Key Vault instance must be accessible through public network access. An instance that is only accessible through [private endpoints](../key-vault/general/private-link-service.md) cannot be used to host your customer-managed keys.
+ Using customer-managed keys with Azure Cosmos DB requires you to set two properties on the Azure Key Vault instance that you plan to use to host your encryption keys: **Soft Delete** and **Purge Protection**. If you create a new Azure Key Vault instance, enable these properties during creation:
cosmos-db Advanced Threat Protection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/advanced-threat-protection.md
- Title: 'Advanced Threat Protection for Azure Cosmos DB'
-description: Learn how Azure Cosmos DB provides encryption of data at rest and how it's implemented.
--- Previously updated : 06/08/2021------
-# Advanced Threat Protection for Azure Cosmos DB (Preview)
-
-Advanced Threat Protection for Azure Cosmos DB provides an additional layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts. This layer of protection allows you to address threats, even without being a security expert, and integrate them with central security monitoring systems.
-
-Security alerts are triggered when anomalies in activity occur. These security alerts are integrated with [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/), and are also sent via email to subscription administrators, with details of the suspicious activity and recommendations on how to investigate and remediate the threats.
-
-> [!NOTE]
->
-> * Advanced Threat Protection for Azure Cosmos DB is currently available only for the SQL API.
-> * Advanced Threat Protection for Azure Cosmos DB is currently not available in Azure government and sovereign cloud regions.
-
-For a full investigation experience of the security alerts, we recommended enabling [diagnostic logging in Azure Cosmos DB](../monitor-cosmos-db.md), which logs operations on the database itself, including CRUD operations on all documents, containers, and databases.
-
-## Threat types
-
-Advanced Threat Protection for Azure Cosmos DB detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It can currently trigger the following alerts:
--- **Access from unusual locations**: This alert is triggered when there is a change in the access pattern to an Azure Cosmos account, where someone has connected to the Azure Cosmos DB endpoint from an unusual geographical location. In some cases, the alert detects a legitimate action, meaning a new application or developerΓÇÖs maintenance operation. In other cases, the alert detects a malicious action from a former employee, external attacker, etc.--- **Unusual data extraction**: This alert is triggered when a client is extracting an unusual amount of data from an Azure Cosmos DB account. This can be the symptom of some data exfiltration performed to transfer all the data stored in the account to an external data store.---
-## Configure Advanced Threat Protection
-
-You can configure advanced threat protection in any of several ways, described in the following sections.
-
-# [Portal](#tab/azure-portal)
-
-1. Launch the Azure portal at [https://portal.azure.com](https://portal.azure.com/).
-
-2. From the Azure Cosmos DB account, from the **Settings** menu, select **Advanced security**.
-
- :::image type="content" source="./media/advanced-threat-protection/cosmos-db-atp.png" alt-text="Set up ATP":::
-
-3. In the **Advanced security** configuration blade:
-
- * Click the **Advanced Threat Protection** option to set it to **ON**.
- * Click **Save** to save the new or updated Advanced Threat Protection policy.
-
-# [REST API](#tab/rest-api)
-
-Use Rest API commands to create, update, or get the Advanced Threat Protection setting for a specific Azure Cosmos DB account.
-
-* [Advanced Threat Protection - Create](/rest/api/securitycenter/advancedthreatprotection/create)
-* [Advanced Threat Protection - Get](/rest/api/securitycenter/advancedthreatprotection/get)
-
-# [PowerShell](#tab/azure-powershell)
-
-Use the following PowerShell cmdlets:
-
-* [Enable Advanced Threat Protection](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection)
-* [Get Advanced Threat Protection](/powershell/module/az.security/get-azsecurityadvancedthreatprotection)
-* [Disable Advanced Threat Protection](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection)
-
-# [ARM template](#tab/arm-template)
-
-Use an Azure Resource Manager (ARM) template to set up Cosmos DB with Advanced Threat Protection enabled.
-For more information, see
-[Create a CosmosDB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/cosmosdb-advanced-threat-protection-create-account/).
-
-# [Azure Policy](#tab/azure-policy)
-
-Use an Azure Policy to enable Advanced Threat Protection for Cosmos DB.
-
-1. Launch the Azure **Policy - Definitions** page, and search for the **Deploy Advanced Threat Protection for Cosmos DB** policy.
-
- :::image type="content" source="./media/advanced-threat-protection/cosmos-db.png" alt-text="Search Policy":::
-
-1. Click on the **Deploy Advanced Threat Protection for CosmosDB** policy, and then click **Assign**.
-
- :::image type="content" source="./media/advanced-threat-protection/cosmos-db-atp-policy.png" alt-text="Select Subscription Or Group":::
--
-1. From the **Scope** field, click the three dots, select an Azure subscription or resource group, and then click **Select**.
-
- :::image type="content" source="./media/advanced-threat-protection/cosmos-db-atp-details.png" alt-text="Policy Definitions Page":::
--
-1. Enter the other parameters, and click **Assign**.
----
-## Manage ATP security alerts
-
-When Azure Cosmos DB activity anomalies occur, a security alert is triggered with information about the suspicious security event.
-
- From Microsoft Defender for Cloud, you can review and manage your current [security alerts](../../security-center/security-center-alerts-overview.md). Click on a specific alert in [Defender for Cloud](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/0) to view possible causes and recommended actions to investigate and mitigate the potential threat. The following image shows an example of alert details provided in Defender for Cloud.
-
- :::image type="content" source="./media/advanced-threat-protection/cosmos-db-alert-details.png" alt-text="Threat details":::
-
-An email notification is also sent with the alert details and recommended actions. The following image shows an example of an alert email.
-
- :::image type="content" source="./media/advanced-threat-protection/cosmos-db-alert.png" alt-text="Alert details":::
-
-## Cosmos DB ATP alerts
-
- To see a list of the alerts generated when monitoring Azure Cosmos DB accounts, see the [Cosmos DB alerts](../../security-center/alerts-reference.md#alerts-azurecosmos) section in the Microsoft Defender for Cloud documentation.
-
-## Next steps
-
-* Learn more about [Diagnostic logging in Azure Cosmos DB](../cosmosdb-monitor-resource-logs.md)
-* Learn more about [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)
cosmos-db Defender For Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/defender-for-cosmos-db.md
+
+ Title: 'Microsoft Defender for Azure Cosmos DB'
+description: Learn how Microsoft Defender provides advanced threat protection on Azure Cosmos DB.
+++ Last updated : 02/03/2022++++
+# Microsoft Defender for Cosmos DB (Preview)
+
+Microsoft Defender for Cosmos DB provides an extra layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit Azure Cosmos DB accounts. This layer of protection allows you to address threats, even without being a security expert, and integrate them with central security monitoring systems.
+
+Security alerts are triggered when anomalies in activity occur. These security alerts show up in [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). Subscription administrators also get these alerts over email, with details of the suspicious activity and recommendations on how to investigate and remediate the threats.
+
+> [!NOTE]
+>
+> * Microsoft Defender for Cosmos DB is currently available only for the Core (SQL) API.
+> * Microsoft Defender for Cosmos DB is not currently available in Azure government and sovereign cloud regions.
+
+For a full investigation experience of the security alerts, we recommended enabling [diagnostic logging in Azure Cosmos DB](../monitor-cosmos-db.md), which logs operations on the database itself, including CRUD operations on all documents, containers, and databases.
+
+## Threat types
+
+Microsoft Defender for Cosmos DB detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It can currently trigger the following alerts:
+
+- **Access from unusual locations**: This alert is triggered when there is a change in the access pattern to an Azure Cosmos DB account, where someone has connected to the Azure Cosmos DB endpoint from an unusual geographical location. In some cases, the alert detects a legitimate action, meaning a new application or developer's maintenance operation. In other cases, the alert detects a malicious action from a former employee, external attacker, etc.
+
+- **Unusual data extraction**: This alert is triggered when a client is extracting an unusual amount of data from an Azure Cosmos DB account. It can be the symptom of some data exfiltration performed to transfer all the data stored in the account to an external data store.
+
+## Configure Microsoft Defender for Cosmos DB
+
+You can configure Microsoft Defender protection in any of several ways, described in the following sections.
+
+# [Portal](#tab/azure-portal)
+
+1. Launch the Azure portal at [https://portal.azure.com](https://portal.azure.com/).
+
+2. From the Azure Cosmos DB account, from the **Settings** menu, select **Microsoft Defender for Cloud**.
+
+ :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-atp.png" alt-text="Set up Azure Defender for Cosmos DB" border="true":::
+
+3. In the **Microsoft Defender for Cloud** configuration blade:
+
+ * Change the option from **OFF** to **ON**.
+ * Click **Save**.
+
+# [REST API](#tab/rest-api)
+
+Use REST API commands to create, update, or get the Azure Defender setting for a specific Azure Cosmos DB account.
+
+* [Advanced Threat Protection - Create](/rest/api/securitycenter/advancedthreatprotection/create)
+* [Advanced Threat Protection - Get](/rest/api/securitycenter/advancedthreatprotection/get)
+
+# [PowerShell](#tab/azure-powershell)
+
+Use the following PowerShell cmdlets:
+
+* [Enable Advanced Threat Protection](/powershell/module/az.security/enable-azsecurityadvancedthreatprotection)
+* [Get Advanced Threat Protection](/powershell/module/az.security/get-azsecurityadvancedthreatprotection)
+* [Disable Advanced Threat Protection](/powershell/module/az.security/disable-azsecurityadvancedthreatprotection)
+
+# [ARM template](#tab/arm-template)
+
+Use an Azure Resource Manager (ARM) template to set up Azure Cosmos DB with Azure Defender protection enabled. For more information, see
+[Create a CosmosDB Account with Advanced Threat Protection](https://azure.microsoft.com/resources/templates/cosmosdb-advanced-threat-protection-create-account/).
+
+# [Azure Policy](#tab/azure-policy)
+
+Use an Azure Policy to enable Azure Defender for Cosmos DB.
+
+1. Launch the Azure **Policy - Definitions** page, and search for the **Deploy Advanced Threat Protection for Cosmos DB** policy.
+
+ :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db.png" alt-text="Search Policy":::
+
+1. Click on the **Deploy Advanced Threat Protection for CosmosDB** policy, and then click **Assign**.
+
+ :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-atp-policy.png" alt-text="Select Subscription Or Group":::
+
+1. From the **Scope** field, click the three dots, select an Azure subscription or resource group, and then click **Select**.
+
+ :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-atp-details.png" alt-text="Policy Definitions Page":::
+
+1. Enter the other parameters, and click **Assign**.
+++
+## Manage security alerts
+
+When Azure Cosmos DB activity anomalies occur, a security alert is triggered with information about the suspicious security event.
+
+ From Microsoft Defender for Cloud, you can review and manage your current [security alerts](../../security-center/security-center-alerts-overview.md). Click on a specific alert in [Defender for Cloud](https://ms.portal.azure.com/#blade/Microsoft_Azure_Security/SecurityMenuBlade/0) to view possible causes and recommended actions to investigate and mitigate the potential threat. The following image shows an example of alert details provided in Defender for Cloud.
+
+ :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-alert-details.png" alt-text="Threat details":::
+
+An email notification is also sent with the alert details and recommended actions. The following image shows an example of an alert email.
+
+ :::image type="content" source="./media/defender-for-cosmos-db/cosmos-db-alert.png" alt-text="Alert details":::
+
+## Azure Cosmos DB alerts
+
+ To see a list of the alerts generated when monitoring Azure Cosmos DB accounts, see the [Azure Cosmos DB alerts](../../security-center/alerts-reference.md#alerts-azurecosmos) section in the Microsoft Defender for Cloud documentation.
+
+## Next steps
+
+* Learn more about [Diagnostic logging in Azure Cosmos DB](../cosmosdb-monitor-resource-logs.md)
+* Learn more about [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)
cosmos-db How To Use Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/how-to-use-stored-procedures-triggers-udfs.md
Learn more concepts and how-to write or use stored procedures, triggers, and use
- [Working with Azure Cosmos DB stored procedures, triggers, and user-defined functions in Azure Cosmos DB](stored-procedures-triggers-udfs.md) - [Working with JavaScript language integrated query API in Azure Cosmos DB](javascript-query-api.md) - [How to write stored procedures, triggers, and user-defined functions in Azure Cosmos DB](how-to-write-stored-procedures-triggers-udfs.md)-- [How to write stored procedures and triggers using Javascript Query API in Azure Cosmos DB](how-to-write-javascript-query-api.md)
+- [How to write stored procedures and triggers using JavaScript Query API in Azure Cosmos DB](how-to-write-javascript-query-api.md)
cosmos-db How To Write Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/how-to-write-stored-procedures-triggers-udfs.md
Learn more concepts and how-to write or use stored procedures, triggers, and use
* [How to register and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md)
-* [How to write stored procedures and triggers using Javascript Query API in Azure Cosmos DB](how-to-write-javascript-query-api.md)
+* [How to write stored procedures and triggers using JavaScript Query API in Azure Cosmos DB](how-to-write-javascript-query-api.md)
* [Working with Azure Cosmos DB stored procedures, triggers, and user-defined functions in Azure Cosmos DB](stored-procedures-triggers-udfs.md)
cosmos-db Javascript Query Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/javascript-query-api.md
The following table presents various SQL queries and the corresponding JavaScrip
Learn more concepts and how-to write and use stored procedures, triggers, and user-defined functions in Azure Cosmos DB: -- [How to write stored procedures and triggers using Javascript Query API](how-to-write-javascript-query-api.md)
+- [How to write stored procedures and triggers using JavaScript Query API](how-to-write-javascript-query-api.md)
- [Working with Azure Cosmos DB stored procedures, triggers and user-defined functions](stored-procedures-triggers-udfs.md) - [How to use stored procedures, triggers, user-defined functions in Azure Cosmos DB](how-to-use-stored-procedures-triggers-udfs.md) - [Azure Cosmos DB JavaScript server-side API reference](https://azure.github.io/azure-cosmosdb-js-server)
cosmos-db Sql Api Dotnet V3sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-api-dotnet-v3sdk-samples.md
The [RunBasicChangeFeed](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/ma
| [Basic change feed functionality](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L91-L119) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) | | [Read change feed from a specific time](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L127-L162) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) | | [Read change feed from the beginning](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L170-L198) |[ChangeFeedProcessorBuilder.WithStartTime(DateTime)](/dotnet/api/microsoft.azure.cosmos.changefeedprocessorbuilder.withstarttime) |
-| [MIgrate from change feed processor to change feed in V3 SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L256-L333) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
+| [Migrate from change feed processor to change feed in V3 SDK](https://github.com/Azure/azure-cosmos-dotnet-v3/blob/master/Microsoft.Azure.Cosmos.Samples/Usage/ChangeFeed/Program.cs#L256-L333) |[Container.GetChangeFeedProcessorBuilder](/dotnet/api/microsoft.azure.cosmos.container.getchangefeedprocessorbuilder) |
## Server-side programming examples
cosmos-db Stored Procedures Triggers Udfs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/stored-procedures-triggers-udfs.md
Similar to pre-triggers, post-triggers, are also associated with an operation on
## <a id="jsqueryapi"></a>JavaScript language-integrated query API
-In addition to issuing queries using SQL API query syntax, the [server-side SDK](https://azure.github.io/azure-cosmosdb-js-server) allows you to perform queries by using a JavaScript interface without any knowledge of SQL. The JavaScript query API allows you to programmatically build queries by passing predicate functions into sequence of function calls. Queries are parsed by the JavaScript runtime and are executed efficiently within Azure Cosmos DB. To learn about JavaScript query API support, see [Working with JavaScript language integrated query API](javascript-query-api.md) article. For examples, see [How to write stored procedures and triggers using Javascript Query API](how-to-write-javascript-query-api.md) article.
+In addition to issuing queries using SQL API query syntax, the [server-side SDK](https://azure.github.io/azure-cosmosdb-js-server) allows you to perform queries by using a JavaScript interface without any knowledge of SQL. The JavaScript query API allows you to programmatically build queries by passing predicate functions into a sequence of function calls. Queries are parsed by the JavaScript runtime and are executed efficiently within Azure Cosmos DB. To learn about JavaScript query API support, see the [Working with JavaScript language integrated query API](javascript-query-api.md) article. For examples, see the [How to write stored procedures and triggers using JavaScript Query API](how-to-write-javascript-query-api.md) article.
## Next steps
cosmos-db Troubleshoot Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-java-sdk-v4-sql.md
Title: Diagnose and troubleshoot Azure Cosmos DB Java SDK v4
description: Use features like client-side logging and other third-party tools to identify, diagnose, and troubleshoot Azure Cosmos DB issues in Java SDK v4. Previously updated : 01/25/2022 Last updated : 02/03/2022 ms.devlang: java
Also follow the [Connection limit on a host machine](#connection-limit-on-host).
#### UnknownHostException
-UnknownHostException means that the Java framework cannot resolve the DNS entry for the Cosmos DB endpoint in the affected machine. You should verify that the machine can resolve the DNS entry or if you have any custom DNS resolution software (such as VPN or Proxy, or a custom solution), make sure it contains the right configuration for the DNS endpoint that the error is claiming cannot be resolved.
+UnknownHostException means that the Java framework cannot resolve the DNS entry for the Cosmos DB endpoint on the affected machine. Verify that the machine can resolve the DNS entry, or, if you have any custom DNS resolution software (such as a VPN, a proxy, or a custom solution), make sure it contains the right configuration for the DNS endpoint that the error claims cannot be resolved. If the error is constant, you can verify the machine's DNS resolution through a `curl` command to the endpoint described in the error.
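As a rough illustration of the check that `curl` performs, the short Python sketch below confirms whether a hostname resolves on the current machine. It uses `localhost` as a stand-in for your actual Cosmos DB endpoint (for example, `<account>.documents.azure.com`); substitute the hostname from the error message.

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if this machine's DNS configuration can resolve hostname."""
    try:
        # Port 443 because the Cosmos DB endpoint is reached over HTTPS
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        # The same failure class that surfaces as UnknownHostException in Java
        return False

# "localhost" is a placeholder; use the endpoint named in the error instead
print(can_resolve("localhost"))
```

If this returns `False` for your endpoint, the problem is the machine's DNS configuration rather than the SDK.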
#### HTTP proxy
cosmos-db Troubleshoot Service Unavailable Java Sdk V4 Sql https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/troubleshoot-service-unavailable-java-sdk-v4-sql.md
description: Learn how to diagnose and fix Azure Cosmos DB service unavailable e
Previously updated : 10/28/2020 Last updated : 02/03/2022
Exception in thread "main" ServiceUnavailableException{userAgent=azsdk-java-cosm
Follow the [request timeout troubleshooting steps](troubleshoot-request-timeout-java-sdk-v4-sql.md#troubleshooting-steps) to resolve it.
+#### UnknownHostException
+UnknownHostException means that the Java framework cannot resolve the DNS entry for the Cosmos DB endpoint on the affected machine. Verify that the machine can resolve the DNS entry, or, if you have any custom DNS resolution software (such as a VPN, a proxy, or a custom solution), make sure it contains the right configuration for the DNS endpoint that the error claims cannot be resolved. If the error is constant, you can verify the machine's DNS resolution through a `curl` command to the endpoint described in the error.
+ ### Service outage Check the [Azure status](https://status.azure.com/status) to see if there's an ongoing issue.
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-global-parameters.md
After a global parameter is created, you can edit it by clicking the parameter's
:::image type="content" source="media/author-global-parameters/create-global-parameter-3.png" alt-text="Create global parameters":::
-Global parameters are stored as part of the /factory/{factory_name}-arm-template parameters.json.
## Using global parameters in a pipeline
$globalParametersJson = Get-Content $globalParametersFilePath
Write-Host "Parsing JSON..." $globalParametersObject = [Newtonsoft.Json.Linq.JObject]::Parse($globalParametersJson)
-foreach ($gp in $factoryFileObject.properties.globalParameters.GetEnumerator()) {
- # foreach ($gp in $globalParametersObject.GetEnumerator()) {
+# foreach ($gp in $factoryFileObject.properties.globalParameters.GetEnumerator()) {
+# can be used instead if your global parameters are stored in a non-standard location; this isn't recommended.
+foreach ($gp in $globalParametersObject.GetEnumerator()) {
Write-Host "Adding global parameter:" $gp.Key $globalParameterValue = $gp.Value.ToObject([Microsoft.Azure.Management.DataFactory.Models.GlobalParameterSpecification]) $newGlobalParameters.Add($gp.Key, $globalParameterValue)
data-factory Concepts Data Flow Column Pattern https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-column-pattern.md
The above example matches on all subcolumns of complex column `a`. `a` contains
* `origin` is the transformation where a column originated or was last updated ## Next steps
-* Learn more about the mapping data flow [expression language](data-flow-expression-functions.md) for data transformations
+* Learn more about the mapping data flow [expression language](data-transformation-functions.md) for data transformations
* Use column patterns in the [sink transformation](data-flow-sink.md) and [select transformation](data-flow-select.md) with rule-based mapping
data-factory Concepts Data Flow Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-expression-builder.md
In mapping data flows, expressions can be composed of column values, parameters,
### Functions
-Mapping data flows has built-in functions and operators that can be used in expressions. For a list of available functions, see the [mapping data flow language reference](data-flow-expression-functions.md).
+Mapping data flows has built-in functions and operators that can be used in expressions. For a list of available functions, see the [mapping data flow language reference](data-transformation-functions.md).
#### Address array indexes
In the portal for the service, timestamp is being shown in the **current browser
## Next steps
-[Begin building data transformation expressions](data-flow-expression-functions.md)
+[Begin building data transformation expressions](data-transformation-functions.md)
data-factory Concepts Data Flow Schema Drift https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-data-flow-schema-drift.md
In the generated Derived Column transformation, each drifted column is mapped to
:::image type="content" source="media/data-flow/mapdrifted2.png" alt-text="Screenshot shows the Derived Column's Settings tab."::: ## Next steps
-In the [Data Flow Expression Language](data-flow-expression-functions.md), you'll find additional facilities for column patterns and schema drift including "byName" and "byPosition".
+In the [Data Flow Expression Language](data-transformation-functions.md), you'll find additional facilities for column patterns and schema drift including "byName" and "byPosition".
data-factory Connector Github https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-github.md
The following properties are supported for the GitHub linked service.
| userName | GitHub username | yes | | password | GitHub password | yes |
-## Next Steps
+## Next steps
Create a [source dataset](data-flow-source.md) in mapping data flow.
data-factory Continuous Integration Delivery Improvements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-delivery-improvements.md
Follow these steps to get started:
inputs: command: 'custom' workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder
- customCommand: 'run build validate $(Build.Repository.LocalPath) /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName'
+ customCommand: 'run build validate $(Build.Repository.LocalPath)/<Root-folder-from-Git-configuration-settings-in-ADF> /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/<Your-Factory-Name>'
displayName: 'Validate' # Validate and then generate the ARM template into the destination folder, which is the same as selecting "Publish" from the UX.
Follow these steps to get started:
inputs: command: 'custom' workingDir: '$(Build.Repository.LocalPath)/<folder-of-the-package.json-file>' #replace with the package.json folder
- customCommand: 'run build export $(Build.Repository.LocalPath) /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/yourFactoryName "ArmTemplate"'
+ customCommand: 'run build export $(Build.Repository.LocalPath)/<Root-folder-from-Git-configuration-settings-in-ADF> /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/testResourceGroup/providers/Microsoft.DataFactory/factories/<Your-Factory-Name> "ArmTemplate"'
displayName: 'Validate and Generate ARM template' # Publish the artifact to be used as a source for a release pipeline.
data-factory Continuous Integration Delivery Resource Manager Custom Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/continuous-integration-delivery-resource-manager-custom-parameters.md
The following example shows how to add a single value to the default parameteriz
} ```
-## Next Steps
+## Next steps
- [Continuous integration and delivery overview](continuous-integration-delivery.md) - [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
data-factory Data Flow Aggregate Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-aggregate-functions.md
+
+ Title: Aggregate functions in the mapping data flow
+
+description: Learn about aggregate functions in mapping data flow.
++++++ Last updated : 02/02/2022++
+# Aggregate functions in mapping data flow
+++
+The following articles provide details about aggregate functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
+
+## Aggregate function list
+
+The following functions are only available in aggregate, pivot, unpivot, and window transformations.
+
+| Aggregate function | Task |
+|-|-|
+| [approxDistinctCount](data-flow-expressions-usage.md#approxDistinctCount) | Gets the approximate aggregate count of distinct values for a column. The optional second parameter controls the estimation error.|
+| [avg](data-flow-expressions-usage.md#avg) | Gets the average of values of a column. |
+| [avgIf](data-flow-expressions-usage.md#avgIf) | Based on a criterion, gets the average of values of a column. |
+| [collect](data-flow-expressions-usage.md#collect) | Collects all values of the expression in the aggregated group into an array. Structures can be collected and transformed to alternate structures during this process. The number of items will be equal to the number of rows in that group and can contain null values. The number of collected items should be small. |
+| [count](data-flow-expressions-usage.md#count) | Gets the aggregate count of values. If the optional column(s) is specified, it ignores NULL values in the count. |
+| [countAll](data-flow-expressions-usage.md#countAll) | Gets the aggregate count of values including NULLs. |
+| [countDistinct](data-flow-expressions-usage.md#countDistinct) | Gets the aggregate count of distinct values of a set of columns. |
+| [countAllDistinct](data-flow-expressions-usage.md#countAllDistinct) | Gets the aggregate count of distinct values of a set of columns including NULLs. |
+| [countIf](data-flow-expressions-usage.md#countIf) | Based on a criterion, gets the aggregate count of values. If the optional column is specified, it ignores NULL values in the count. |
+| [covariancePopulation](data-flow-expressions-usage.md#covariancePopulation) | Gets the population covariance between two columns. |
+| [covariancePopulationIf](data-flow-expressions-usage.md#covariancePopulationIf) | Based on a criterion, gets the population covariance of two columns. |
+| [covarianceSample](data-flow-expressions-usage.md#covarianceSample) | Gets the sample covariance of two columns. |
+| [covarianceSampleIf](data-flow-expressions-usage.md#covarianceSampleIf) | Based on a criterion, gets the sample covariance of two columns. |
+| [first](data-flow-expressions-usage.md#first) | Gets the first value of a column group. If the second parameter ignoreNulls is omitted, it is assumed false. |
+| [isDistinct](data-flow-expressions-usage.md#isDistinct) | Finds if a column or set of columns is distinct. It does not count null as a distinct value. |
+| [kurtosis](data-flow-expressions-usage.md#kurtosis) | Gets the kurtosis of a column. |
+| [kurtosisIf](data-flow-expressions-usage.md#kurtosisIf) | Based on a criterion, gets the kurtosis of a column. |
+| [last](data-flow-expressions-usage.md#last) | Gets the last value of a column group. If the second parameter ignoreNulls is omitted, it is assumed false. |
+| [max](data-flow-expressions-usage.md#max) | Gets the maximum value of a column. |
+| [maxIf](data-flow-expressions-usage.md#maxIf) | Based on a criterion, gets the maximum value of a column. |
+| [mean](data-flow-expressions-usage.md#mean) | Gets the mean of values of a column. Same as AVG. |
+| [meanIf](data-flow-expressions-usage.md#meanIf) | Based on a criterion, gets the mean of values of a column. Same as avgIf. |
+| [min](data-flow-expressions-usage.md#min) | Gets the minimum value of a column. |
+| [minIf](data-flow-expressions-usage.md#minIf) | Based on a criterion, gets the minimum value of a column. |
+| [skewness](data-flow-expressions-usage.md#skewness) | Gets the skewness of a column. |
+| [skewnessIf](data-flow-expressions-usage.md#skewnessIf) | Based on a criterion, gets the skewness of a column. |
+| [stddev](data-flow-expressions-usage.md#stddev) | Gets the standard deviation of a column. |
+| [stddevIf](data-flow-expressions-usage.md#stddevIf) | Based on a criterion, gets the standard deviation of a column. |
+| [stddevPopulation](data-flow-expressions-usage.md#stddevPopulation) | Gets the population standard deviation of a column. |
+| [stddevPopulationIf](data-flow-expressions-usage.md#stddevPopulationIf) | Based on a criterion, gets the population standard deviation of a column. |
+| [stddevSample](data-flow-expressions-usage.md#stddevSample) | Gets the sample standard deviation of a column. |
+| [stddevSampleIf](data-flow-expressions-usage.md#stddevSampleIf) | Based on a criterion, gets the sample standard deviation of a column. |
+| [sum](data-flow-expressions-usage.md#sum) | Gets the aggregate sum of a numeric column. |
+| [sumDistinct](data-flow-expressions-usage.md#sumDistinct) | Gets the aggregate sum of distinct values of a numeric column. |
+| [sumDistinctIf](data-flow-expressions-usage.md#sumDistinctIf) | Based on criteria, gets the aggregate sum of distinct values of a numeric column. The condition can be based on any column. |
+| [sumIf](data-flow-expressions-usage.md#sumIf) | Based on criteria, gets the aggregate sum of a numeric column. The condition can be based on any column. |
+| [variance](data-flow-expressions-usage.md#variance) | Gets the variance of a column. |
+| [varianceIf](data-flow-expressions-usage.md#varianceIf) | Based on a criterion, gets the variance of a column. |
+| [variancePopulation](data-flow-expressions-usage.md#variancePopulation) | Gets the population variance of a column. |
+| [variancePopulationIf](data-flow-expressions-usage.md#variancePopulationIf) | Based on a criterion, gets the population variance of a column. |
+| [varianceSample](data-flow-expressions-usage.md#varianceSample) | Gets the unbiased variance of a column. |
+| [varianceSampleIf](data-flow-expressions-usage.md#varianceSampleIf) | Based on a criterion, gets the unbiased variance of a column. |
+|||
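These aggregates map onto familiar statistical operations. As a rough illustration (plain Python over a hypothetical list of column values, not the Data Flow expression syntax itself), `avg`, `countIf`, and `stddevSample` behave like:

```python
from statistics import mean, stdev

rows = [10, 20, 20, 40]  # hypothetical values of a numeric column

avg_value = mean(rows)                     # like avg(col)
count_if = sum(1 for r in rows if r > 15)  # like countIf(col > 15, col)
sample_sd = stdev(rows)                    # like stddevSample(col): n-1 denominator

print(avg_value, count_if, sample_sd)  # → 22.5 3 ~12.583
```

In a data flow, the equivalent expressions run inside an aggregate (or pivot, unpivot, window) transformation against grouped rows rather than an in-memory list.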
+
+## Next steps
+
+- List of all [array functions](data-flow-array-functions.md).
+- List of all [cached lookup functions](data-flow-cached-lookup-functions.md).
+- List of all [conversion functions](data-flow-conversion-functions.md).
+- List of all [date and time functions](data-flow-date-time-functions.md).
+- List of all [expression functions](data-flow-expression-functions.md).
+- List of all [map functions](data-flow-map-functions.md).
+- List of all [metafunctions](data-flow-metafunctions.md).
+- List of all [window functions](data-flow-window-functions.md).
+- [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Array Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-array-functions.md
+
+ Title: Array functions in the mapping data flow
+
+description: Learn about array functions in mapping data flow.
++++++ Last updated : 02/02/2022++
+# Array functions in mapping data flow
+++
+The following articles provide details about array functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
+
+## Array function list
+
+Array functions perform transformations on data structures that are arrays. These include special keywords to address array elements and indexes:
+
+* ```#acc``` represents a value that you wish to include in your single output when reducing an array
+* ```#index``` represents the current array index, along with array index numbers ```#index2, #index3 ...```
+* ```#item``` represents the current element value in the array
+
+| Array function | Task |
+|-|-|
+| [array](data-flow-expressions-usage.md#array) | Creates an array of items. All items should be of the same type. If no items are specified, an empty string array is the default. Same as a [] creation operator. |
+| [at](data-flow-expressions-usage.md#at) | Finds the element at an array index. The index is 1-based. An out-of-bounds index results in a null value. Finds a value in a map given a key. If the key isn't found, it returns null.|
+| [contains](data-flow-expressions-usage.md#contains) | Returns true if any element in the provided array evaluates as true in the provided predicate. Contains expects a reference to one element in the predicate function as #item. |
+| [distinct](data-flow-expressions-usage.md#distinct) | Returns a distinct set of items from an array.|
+| [except](data-flow-expressions-usage.md#except) | Returns a difference set of one array from another dropping duplicates.|
+| [filter](data-flow-expressions-usage.md#filter) | Filters elements out of the array that do not meet the provided predicate. Filter expects a reference to one element in the predicate function as #item. |
+| [find](data-flow-expressions-usage.md#find) | Finds the first item in an array that matches the condition. It takes a filter function where you can address the item in the array as #item. For deeply nested maps you can refer to the parent maps using the #item_n(#item_1, #item_2...) notation. |
+| [flatten](data-flow-expressions-usage.md#flatten) | Flattens an array or arrays into a single array. Arrays of atomic items are returned unaltered. The last argument is optional and defaults to false; set it to true to flatten recursively more than one level deep.|
+| [in](data-flow-expressions-usage.md#in) | Checks if an item is in the array. |
+| [intersect](data-flow-expressions-usage.md#intersect) | Returns an intersection set of distinct items from 2 arrays.|
+| [map](data-flow-expressions-usage.md#map) | Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item. |
+| [mapIf](data-flow-expressions-usage.md#mapIf) | Conditionally maps an array to another array of the same or smaller length. The values can be of any datatype including structTypes. It takes a mapping function where you can address the item in the array as #item and the current index as #index. For deeply nested maps you can refer to the parent maps using the ``#item_n(#item_1, #index_1...)`` notation.|
+| [mapIndex](data-flow-expressions-usage.md#mapIndex) | Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item and a reference to the element index as #index. |
+| [mapLoop](data-flow-expressions-usage.md#mapLoop) | Loops through from 1 to length to create an array of that length. It takes a mapping function where you can address the index in the array as #index. For deeply nested maps you can refer to the parent maps using the #index_n(#index_1, #index_2...) notation.|
+| [reduce](data-flow-expressions-usage.md#reduce) | Accumulates elements in an array. Reduce expects a reference to an accumulator and one element in the first expression function as #acc and #item and it expects the resulting value as #result to be used in the second expression function. |
+| [size](data-flow-expressions-usage.md#size) | Finds the size of an array or map type |
+| [slice](data-flow-expressions-usage.md#slice) | Extracts a subset of an array from a position. Position is 1-based. If the length is omitted, it defaults to the end of the array. |
+| [sort](data-flow-expressions-usage.md#sort) | Sorts the array using the provided predicate function. Sort expects a reference to two consecutive elements in the expression function as #item1 and #item2. |
+| [unfold](data-flow-expressions-usage.md#unfold) | Unfolds an array into a set of rows and repeats the values for the remaining columns in every row.|
+| [union](data-flow-expressions-usage.md#union) | Returns a union set of distinct items from 2 arrays.|
+|||
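To make the `#item` and `#acc` keywords concrete, here's a rough Python analogy (hypothetical data, not Data Flow syntax): the expressions `map(xs, #item + 1)`, `filter(xs, #item > 2)`, and `reduce(xs, 0, #acc + #item, #result)` behave like:

```python
from functools import reduce

xs = [1, 2, 3, 4]  # hypothetical input array

mapped = [x + 1 for x in xs]           # map(xs, #item + 1)
filtered = [x for x in xs if x > 2]    # filter(xs, #item > 2)
total = reduce(lambda acc, x: acc + x, xs, 0)  # reduce(xs, 0, #acc + #item, #result)

print(mapped, filtered, total)  # → [2, 3, 4, 5] [3, 4] 10
```

In each case, `#item` plays the role of the lambda parameter ranging over elements, and `#acc` plays the role of the running accumulator in `reduce`.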
+
+## Next steps
+
+- List of all [aggregate functions](data-flow-aggregate-functions.md).
+- List of all [cached lookup functions](data-flow-cached-lookup-functions.md).
+- List of all [conversion functions](data-flow-conversion-functions.md).
+- List of all [date and time functions](data-flow-date-time-functions.md).
+- List of all [expression functions](data-flow-expression-functions.md).
+- List of all [map functions](data-flow-map-functions.md).
+- List of all [metafunctions](data-flow-metafunctions.md).
+- List of all [window functions](data-flow-window-functions.md).
+- [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Cached Lookup Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-cached-lookup-functions.md
+
+ Title: Cached lookup functions in the mapping data flow
+
+description: Learn about cached lookup functions in mapping data flow.
++++++ Last updated : 02/02/2022++
+# Cached lookup functions in mapping data flow
+++
+The following articles provide details about cached lookup functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
+
+## Cached lookup function list
+
+The following functions are only available when you've included a cached sink in your data flow.
+
+| Cached lookup function | Task |
+|-|-|
+| [lookup](data-flow-expressions-usage.md#lookup) | Looks up the first row from the cached sink using the specified keys that match the keys in the cached sink.|
+| [mlookup](data-flow-expressions-usage.md#mlookup) | Looks up all matching rows from the cached sink using the specified keys that match the keys in the cached sink.|
+| [output](data-flow-expressions-usage.md#output) | Returns the first row of the results of the cache sink.|
+| [outputs](data-flow-expressions-usage.md#outputs) | Returns the entire output row set of the results of the cache sink.|
+|||
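As a loose analogy (plain Python over hypothetical rows; the real functions run against a cache sink, not a list), `lookup` returns the first matching row while `mlookup` returns every match:

```python
# Hypothetical cached rows keyed on "id", standing in for a cache sink
cached = [
    {"id": 1, "name": "a"},
    {"id": 2, "name": "b"},
    {"id": 2, "name": "c"},
]

def lookup(key):
    """Like lookup(key): the first row whose key matches, or null/None."""
    return next((r for r in cached if r["id"] == key), None)

def mlookup(key):
    """Like mlookup(key): all rows whose key matches."""
    return [r for r in cached if r["id"] == key]

print(lookup(2), mlookup(2))
```

`output()` and `outputs()` correspond, roughly, to taking the first row versus the whole row set of the cache sink's results.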
+
+## Next steps
+
+- List of all [aggregate functions](data-flow-aggregate-functions.md).
+- List of all [array functions](data-flow-array-functions.md).
+- List of all [conversion functions](data-flow-conversion-functions.md).
+- List of all [date and time functions](data-flow-date-time-functions.md).
+- List of all [expression functions](data-flow-expression-functions.md).
+- List of all [map functions](data-flow-map-functions.md).
+- List of all [metafunctions](data-flow-metafunctions.md).
+- List of all [window functions](data-flow-window-functions.md).
+- [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Conversion Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-conversion-functions.md
+
+ Title: Conversion functions in the mapping data flow
+
+description: Learn about conversion functions in mapping data flow.
++++++ Last updated : 02/02/2022++
+# Conversion functions in mapping data flow
+++
+The following articles provide details about conversion functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
+
+## Conversion function list
+
+Conversion functions are used to convert data and test for data types.
+
+| Conversion function | Task |
+|-|-|
+| [isBitSet](data-flow-expressions-usage.md#isBitSet) | Checks if a bit position is set in this bitset|
+| [setBitSet](data-flow-expressions-usage.md#setBitSet) | Sets bit positions in this bitset|
+| [isBoolean](data-flow-expressions-usage.md#isBoolean) | Checks if the string value is a boolean value according to the rules of ``toBoolean()``|
+| [isByte](data-flow-expressions-usage.md#isByte) | Checks if the string value is a byte value given an optional format according to the rules of ``toByte()``|
+| [isDate](data-flow-expressions-usage.md#isDate) | Checks if the input date string is a date using an optional input date format. Refer to Java's SimpleDateFormat for available formats. If the input date format is omitted, default format is ``yyyy-[M]M-[d]d``. Accepted formats are ``[ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ]``|
+| [isShort](data-flow-expressions-usage.md#isShort) | Checks if the string value is a short value given an optional format according to the rules of ``toShort()``|
+| [isInteger](data-flow-expressions-usage.md#isInteger) | Checks if the string value is an integer value given an optional format according to the rules of ``toInteger()``|
+| [isLong](data-flow-expressions-usage.md#isLong) | Checks if the string value is a long value given an optional format according to the rules of ``toLong()``|
+| [isNan](data-flow-expressions-usage.md#isNan) | Checks if a value isn't a number.|
+| [isFloat](data-flow-expressions-usage.md#isFloat) | Checks if the string value is a float value given an optional format according to the rules of ``toFloat()``|
+| [isDouble](data-flow-expressions-usage.md#isDouble) | Checks if the string value is a double value given an optional format according to the rules of ``toDouble()``|
+| [isDecimal](data-flow-expressions-usage.md#isDecimal) | Checks if the string value is a decimal value given an optional format according to the rules of ``toDecimal()``|
+| [isTimestamp](data-flow-expressions-usage.md#isTimestamp) | Checks if the input date string is a timestamp using an optional input timestamp format. Refer to Java's SimpleDateFormat for available formats. If the timestamp is omitted, the default pattern ``yyyy-[M]M-[d]d hh:mm:ss[.f...]`` is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamp supports up to millisecond accuracy with a value of 999.|
+| [toBase64](data-flow-expressions-usage.md#toBase64) | Encodes the given string in base64. |
+| [toBinary](data-flow-expressions-usage.md#toBinary) | Converts any numeric/date/timestamp/string to binary representation. |
+| [toBoolean](data-flow-expressions-usage.md#toBoolean) | Converts a value of ('t', 'true', 'y', 'yes', '1') to true and ('f', 'false', 'n', 'no', '0') to false and NULL for any other value. |
+| [toByte](data-flow-expressions-usage.md#toByte) | Converts any numeric or string to a byte value. An optional Java decimal format can be used for the conversion. |
+| [toDate](data-flow-expressions-usage.md#toDate) | Converts input date string to date using an optional input date format. Refer to Java's `SimpleDateFormat` class for available formats. If the input date format is omitted, default format is yyyy-[M]M-[d]d. Accepted formats are :[ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ]. |
+| [toDecimal](data-flow-expressions-usage.md#toDecimal) | Converts any numeric or string to a decimal value. If precision and scale aren't specified, it's defaulted to (10,2). An optional Java decimal format can be used for the conversion. An optional locale format in the form of BCP47 language like en-US, de, zh-CN. |
+| [toDouble](data-flow-expressions-usage.md#toDouble) | Converts any numeric or string to a double value. An optional Java decimal format can be used for the conversion. An optional locale format in the form of BCP47 language like en-US, de, zh-CN. |
+| [toFloat](data-flow-expressions-usage.md#toFloat) | Converts any numeric or string to a float value. An optional Java decimal format can be used for the conversion. Truncates any double. |
+| [toInteger](data-flow-expressions-usage.md#toInteger) | Converts any numeric or string to an integer value. An optional Java decimal format can be used for the conversion. Truncates any long, float, double. |
+| [toLong](data-flow-expressions-usage.md#toLong) | Converts any numeric or string to a long value. An optional Java decimal format can be used for the conversion. Truncates any float, double. |
+| [toShort](data-flow-expressions-usage.md#toShort) | Converts any numeric or string to a short value. An optional Java decimal format can be used for the conversion. Truncates any integer, long, float, double. |
+| [toString](data-flow-expressions-usage.md#toString) | Converts a primitive datatype to a string. For numbers and dates, a format can be specified. If unspecified, the system default is used. Java decimal format is used for numbers. Refer to Java's `SimpleDateFormat` class for all possible date formats; the default format is yyyy-MM-dd. |
+| [toTimestamp](data-flow-expressions-usage.md#toTimestamp) | Converts a string to a timestamp given an optional timestamp format. If the timestamp is omitted the default pattern yyyy-[M]M-[d]d hh:mm:ss[.f...] is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamp supports up to millisecond accuracy with value of 999. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+| [toUTC](data-flow-expressions-usage.md#toUTC) | Converts the timestamp to UTC. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It's defaulted to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+|||
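As a brief illustration, these conversion functions can be combined in a derived-column expression (the literal values here are hypothetical examples, not from the source article):

```
toDate('2022-02-02')                        // date using the default yyyy-[M]M-[d]d format
toDate('02/02/2022', 'MM/dd/yyyy')          // same date, explicit Java SimpleDateFormat pattern
toDecimal('123.45', 6, 2)                   // decimal with precision 6 and scale 2
toBoolean('y')                              // true
toString(toTimestamp('2022-02-02 12:00:00'), 'yyyy-MM-dd')   // format a timestamp back to a string
```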
+
+## Next steps
+
+- List of all [aggregate functions](data-flow-aggregate-functions.md).
+- List of all [array functions](data-flow-array-functions.md).
+- List of all [cached lookup functions](data-flow-cached-lookup-functions.md).
+- List of all [date and time functions](data-flow-date-time-functions.md).
+- List of all [expression functions](data-flow-expression-functions.md).
+- List of all [map functions](data-flow-map-functions.md).
+- List of all [metafunctions](data-flow-metafunctions.md).
+- List of all [window functions](data-flow-window-functions.md).
+- [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Date Time Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-date-time-functions.md
+
+ Title: Date and time functions in the mapping data flow
+
+description: Learn about date and time functions in mapping data flow.
+Last updated : 02/02/2022
+# Date and time functions in mapping data flow
+The following articles provide details about date and time functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
+
+## Expression functions list
+
+In Data Factory and Synapse pipelines, use date and time functions to express datetime values and manipulate them.
+
+| Expression function | Task |
+|--|--|
+| [add](data-flow-expressions-usage.md#add) | Adds a pair of strings or numbers. Adds a date to a number of days. Adds a duration to a timestamp. Appends one array of similar type to another. Same as the + operator. |
+| [addDays](data-flow-expressions-usage.md#addDays) | Add days to a date or timestamp. Same as the + operator for date. |
+| [addMonths](data-flow-expressions-usage.md#addMonths) | Add months to a date or timestamp. You can optionally pass a timezone. |
+| [between](data-flow-expressions-usage.md#between) | Checks if the first value is in between two other values inclusively. Numeric, string, and datetime values can be compared. |
+| [currentDate](data-flow-expressions-usage.md#currentDate) | Gets the current date when this job starts to run. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. [https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). |
+| [currentTimestamp](data-flow-expressions-usage.md#currentTimestamp) | Gets the current timestamp when the job starts to run with local time zone. |
+| [currentUTC](data-flow-expressions-usage.md#currentUTC) | Gets the current timestamp as UTC. If you want your current time to be interpreted in a different timezone than your cluster time zone, you can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', or 'America/Cayman'. It's defaulted to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats. [https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). To convert the UTC time to a different timezone, use `fromUTC()`. |
+| [dayOfMonth](data-flow-expressions-usage.md#dayOfMonth) | Gets the day of the month given a date. |
+| [dayOfWeek](data-flow-expressions-usage.md#dayOfWeek) | Gets the day of the week given a date. 1 - Sunday, 2 - Monday ..., 7 - Saturday. |
+| [dayOfYear](data-flow-expressions-usage.md#dayOfYear) | Gets the day of the year given a date. |
+| [days](data-flow-expressions-usage.md#days) | Duration in milliseconds for number of days. |
+| [fromUTC](data-flow-expressions-usage.md#fromUTC) | Converts to the timestamp from UTC. You can optionally pass the timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It's defaulted to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+| [hour](data-flow-expressions-usage.md#hour) | Gets the hour value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+| [hours](data-flow-expressions-usage.md#hours) | Duration in milliseconds for number of hours. |
+| [isDate](data-flow-expressions-usage.md#isDate) | Checks if the input date string is a date using an optional input date format. Refer to Java's SimpleDateFormat for available formats. If the input date format is omitted, default format is ``yyyy-[M]M-[d]d``. Accepted formats are ``[ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ]``|
+| [isTimestamp](data-flow-expressions-usage.md#isTimestamp) | Checks if the input date string is a timestamp using an optional input timestamp format. Refer to Java's SimpleDateFormat for available formats. If the timestamp format is omitted, the default pattern ``yyyy-[M]M-[d]d hh:mm:ss[.f...]`` is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamp supports up to millisecond accuracy with value of 999.|
+| [lastDayOfMonth](data-flow-expressions-usage.md#lastDayOfMonth) | Gets the last date of the month given a date. |
+| [millisecond](data-flow-expressions-usage.md#millisecond) | Gets the millisecond value of a date. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+| [milliseconds](data-flow-expressions-usage.md#milliseconds) | Duration in milliseconds for number of milliseconds. |
+| [minus](data-flow-expressions-usage.md#minus) | Subtracts numbers. Subtract number of days from a date. Subtract duration from a timestamp. Subtract two timestamps to get difference in milliseconds. Same as the - operator. |
+| [minute](data-flow-expressions-usage.md#minute) | Gets the minute value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+| [minutes](data-flow-expressions-usage.md#minutes) | Duration in milliseconds for number of minutes. |
+| [month](data-flow-expressions-usage.md#month) | Gets the month value of a date or timestamp. |
+| [monthsBetween](data-flow-expressions-usage.md#monthsBetween) | Gets the number of months between two dates. You can round off the calculation. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+| [second](data-flow-expressions-usage.md#second) | Gets the second value of a date. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+| [seconds](data-flow-expressions-usage.md#seconds) | Duration in milliseconds for number of seconds. |
+| [subDays](data-flow-expressions-usage.md#subDays) | Subtract days from a date or timestamp. Same as the - operator for date. |
+| [subMonths](data-flow-expressions-usage.md#subMonths) | Subtract months from a date or timestamp. |
+| [toDate](data-flow-expressions-usage.md#toDate) | Converts input date string to date using an optional input date format. Refer to Java's `SimpleDateFormat` class for available formats. If the input date format is omitted, the default format is yyyy-[M]M-[d]d. Accepted formats are: [ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ]. |
+| [toTimestamp](data-flow-expressions-usage.md#toTimestamp) | Converts a string to a timestamp given an optional timestamp format. If the timestamp is omitted the default pattern yyyy-[M]M-[d]d hh:mm:ss[.f...] is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamp supports up to millisecond accuracy with value of 999. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+| [toUTC](data-flow-expressions-usage.md#toUTC) | Converts the timestamp to UTC. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It is defaulted to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
+| [weekOfYear](data-flow-expressions-usage.md#weekOfYear) | Gets the week of the year given a date. |
+| [weeks](data-flow-expressions-usage.md#weeks) | Duration in milliseconds for number of weeks. |
+| [year](data-flow-expressions-usage.md#year) | Gets the year value of a date. |
+|||
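For example, the date and time functions above compose naturally in a derived-column expression (the literal values are hypothetical examples, not from the source article):

```
addDays(currentDate(), 7)                                    // date one week after the job's start date
monthsBetween(toDate('2022-02-02'), toDate('2021-08-02'))    // number of months between the two dates
hour(currentTimestamp(), 'UTC')                              // hour of the current timestamp in UTC
lastDayOfMonth(toDate('2022-02-02'))                         // last date of February 2022
```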
+
+## Next steps
+
+- [Aggregate functions](data-flow-aggregate-functions.md)
+- [Array functions](data-flow-array-functions.md)
+- [Cached lookup functions](data-flow-cached-lookup-functions.md)
+- [Conversion functions](data-flow-conversion-functions.md)
+- [Expression functions](data-flow-expression-functions.md)
+- [Map functions](data-flow-map-functions.md)
+- [Metafunctions](data-flow-metafunctions.md)
+- [Window functions](data-flow-window-functions.md)
+- [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Derived Column https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-derived-column.md
MoviesYear derive(
## Next steps -- Learn more about the [Mapping Data Flow expression language](data-flow-expression-functions.md).
+- Learn more about the [Mapping Data Flow expression language](data-transformation-functions.md).
data-factory Data Flow Expression Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expression-functions.md
Previously updated : 01/22/2022 Last updated : 02/02/2022
-# Data transformation expressions in mapping data flow
+# Expression functions in mapping data flow
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)] [!INCLUDE[data-flow-preamble](includes/data-flow-preamble.md)]
-This article provides details about expressions and functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
+The following articles provide details about expression functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
-
-## Expression functions
+## Expression functions list
In Data Factory and Synapse pipelines, use the expression language of the mapping data flow feature to configure data transformations. | Expression function | Task | |--|--|
-| [abs](data-flow-expression-functions.md#abs) | Absolute value of a number. |
-| [acos](data-flow-expression-functions.md#acos) | Calculates a cosine inverse value. |
-| [add](data-flow-expression-functions.md#add) | Adds a pair of strings or numbers. Adds a date to a number of days. Adds a duration to a timestamp. Appends one array of similar type to another. Same as the + operator. |
-| [addDays](data-flow-expression-functions.md#addDays) | Add days to a date or timestamp. Same as the + operator for date. |
-| [addMonths](data-flow-expression-functions.md#addMonths) | Add months to a date or timestamp. You can optionally pass a timezone. |
-| [and](data-flow-expression-functions.md#and) | Logical AND operator. Same as &&. |
-| [asin](data-flow-expression-functions.md#asin) | Calculates an inverse sine value. |
-| [assertErrorMessages](data-flow-expression-functions.md#assertErrorMessages) | Returns map of all assert messages. |
-| [atan](data-flow-expression-functions.md#atan) | Calculates a inverse tangent value. |
-| [atan2](data-flow-expression-functions.md#atan2) | Returns the angle in radians between the positive x-axis of a plane and the point given by the coordinates. |
-| [between](data-flow-expression-functions.md#between) | Checks if the first value is in between two other values inclusively. Numeric, string and datetime values can be compared |
-| [bitwiseAnd](data-flow-expression-functions.md#bitwiseAnd) | Bitwise And operator across integral types. Same as & operator |
-| [bitwiseOr](data-flow-expression-functions.md#bitwiseOr) | Bitwise Or operator across integral types. Same as \| operator |
-| [bitwiseXor](data-flow-expression-functions.md#bitwiseXor) | Bitwise Or operator across integral types. Same as \| operator |
-| [blake2b](data-flow-expression-functions.md#blake2b) | Calculates the Blake2 digest of set of column of varying primitive datatypes given a bit length which can only be multiples of 8 between 8 & 512. It can be used to calculate a fingerprint for a row |
-| [blake2bBinary](data-flow-expression-functions.md#blake2bBinary) | Calculates the Blake2 digest of set of column of varying primitive datatypes given a bit length which can only be multiples of 8 between 8 & 512. It can be used to calculate a fingerprint for a row |
-| [case](data-flow-expression-functions.md#case) | Based on alternating conditions applies one value or the other. If the number of inputs are even, the other is defaulted to NULL for last condition. |
-| [cbrt](data-flow-expression-functions.md#cbrt) | Calculates the cube root of a number. |
-| [ceil](data-flow-expression-functions.md#ceil) | Returns the smallest integer not smaller than the number. |
-| [coalesce](data-flow-expression-functions.md#coalesce) | Returns the first not null value from a set of inputs. All inputs should be of the same type. |
-| [columnNames](data-flow-expression-functions.md#columnNames) | Gets the names of all output columns for a stream. You can pass an optional stream name as the second argument. |
-| [columns](data-flow-expression-functions.md#columns) | Gets the values of all output columns for a stream. You can pass an optional stream name as the second argument. |
-| [compare](data-flow-expression-functions.md#compare) | Compares two values of the same type. Returns negative integer if value1 < value2, 0 if value1 == value2, positive value if value1 > value2. |
-| [concat](data-flow-expression-functions.md#concat) | Concatenates a variable number of strings together. Same as the + operator with strings. |
-| [concatWS](data-flow-expression-functions.md#concatWS) | Concatenates a variable number of strings together with a separator. The first parameter is the separator. |
-| [cos](data-flow-expression-functions.md#cos) | Calculates a cosine value. |
-| [cosh](data-flow-expression-functions.md#cosh) | Calculates a hyperbolic cosine of a value. |
-| [crc32](data-flow-expression-functions.md#crc32) | Calculates the CRC32 hash of set of column of varying primitive datatypes given a bit length which can only be of values 0(256), 224, 256, 384, 512. It can be used to calculate a fingerprint for a row. |
-| [currentDate](data-flow-expression-functions.md#currentDate) | Gets the current date when this job starts to run. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer Java's `SimpleDateFormat` class for available formats. [https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). |
-| [currentTimestamp](data-flow-expression-functions.md#currentTimestamp) | Gets the current timestamp when the job starts to run with local time zone. |
-| [currentUTC](data-flow-expression-functions.md#currentUTC) | Gets the current timestamp as UTC. If you want your current time to be interpreted in a different timezone than your cluster time zone, you can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It is defaulted to the current timezone. Refer Java's `SimpleDateFormat` class for available formats. [https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). To convert the UTC time to a different timezone use `fromUTC()`. |
-| [dayOfMonth](data-flow-expression-functions.md#dayOfMonth) | Gets the day of the month given a date. |
-| [dayOfWeek](data-flow-expression-functions.md#dayOfWeek) | Gets the day of the week given a date. 1 - Sunday, 2 - Monday ..., 7 - Saturday. |
-| [dayOfYear](data-flow-expression-functions.md#dayOfYear) | Gets the day of the year given a date. |
-| [days](data-flow-expression-functions.md#days) | Duration in milliseconds for number of days. |
-| [degrees](data-flow-expression-functions.md#degrees) | Converts radians to degrees. |
-| [divide](data-flow-expression-functions.md#divide) | Divides pair of numbers. Same as the `/` operator. |
-| [dropLeft](data-flow-expression-functions.md#dropLeft) | Removes as many characters from the left of the string. If the drop requested exceeds the length of the string, an empty string is returned.|
-| [dropRight](data-flow-expression-functions.md#dropRight) | Removes as many characters from the right of the string. If the drop requested exceeds the length of the string, an empty string is returned.|
-| [endsWith](data-flow-expression-functions.md#endsWith) | Checks if the string ends with the supplied string. |
-| [equals](data-flow-expression-functions.md#equals) | Comparison equals operator. Same as == operator. |
-| [equalsIgnoreCase](data-flow-expression-functions.md#equalsIgnoreCase) | Comparison equals operator ignoring case. Same as <=> operator. |
-| [escape](data-flow-expression-functions.md#escape) | Escapes a string according to a format. Literal values for acceptable format are 'json', 'xml', 'ecmascript', 'html', 'java'.|
-| [expr](data-flow-expression-functions.md#expr) | Results in a expression from a string. This is the same as writing this expression in a non-literal form. This can be used to pass parameters as string representations.|
-| [factorial](data-flow-expression-functions.md#factorial) | Calculates the factorial of a number. |
-| [false](data-flow-expression-functions.md#false) | Always returns a false value. Use the function `syntax(false())` if there is a column named 'false'. |
-| [floor](data-flow-expression-functions.md#floor) | Returns the largest integer not greater than the number. |
-| [fromBase64](data-flow-expression-functions.md#fromBase64) | Decodes the given base64-encoded string.|
-| [fromUTC](data-flow-expression-functions.md#fromUTC) | Converts to the timestamp from UTC. You can optionally pass the timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It is defaulted to the current timezone. Refer Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
-| [greater](data-flow-expression-functions.md#greater) | Comparison greater operator. Same as > operator. |
-| [greaterOrEqual](data-flow-expression-functions.md#greaterOrEqual) | Comparison greater than or equal operator. Same as >= operator. |
-| [greatest](data-flow-expression-functions.md#greatest) | Returns the greatest value among the list of values as input skipping null values. Returns null if all inputs are null. |
-| [hasColumn](data-flow-expression-functions.md#hasColumn) | Checks for a column value by name in the stream. You can pass a optional stream name as the second argument. Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions. |
-| [hour](data-flow-expression-functions.md#hour) | Gets the hour value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
-| [hasError](data-flow-expression-functions.md#hasError) | Checks if the assert with provided ID is marked as error. |
-| [hours](data-flow-expression-functions.md#hours) | Duration in milliseconds for number of hours. |
-| [iif](data-flow-expression-functions.md#iif) | Based on a condition applies one value or the other. If other is unspecified it is considered NULL. Both the values must be compatible(numeric, string...). |
-| [iifNull](data-flow-expression-functions.md#iifNull) | Checks if the first parameter is null. If not null, the first parameter is returned. If null, the second parameter is returned. If three parameters are specified, the behavior is the same as iif(isNull(value1), value2, value3) and the third parameter is returned if the first value is not null. |
-| [initCap](data-flow-expression-functions.md#initCap) | Converts the first letter of every word to uppercase. Words are identified as separated by whitespace. |
-| [instr](data-flow-expression-functions.md#instr) | Finds the position(1 based) of the substring within a string. 0 is returned if not found. |
-| [isDelete](data-flow-expression-functions.md#isDelete) | Checks if the row is marked for delete. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
-| [isError](data-flow-expression-functions.md#isError) | Checks if the row is marked as error. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
-| [isIgnore](data-flow-expression-functions.md#isIgnore) | Checks if the row is marked to be ignored. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
-| [isInsert](data-flow-expression-functions.md#isInsert) | Checks if the row is marked for insert. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
-| [isMatch](data-flow-expression-functions.md#isMatch) | Checks if the row is matched at lookup. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
-| [isNull](data-flow-expression-functions.md#isNull) | Checks if the value is NULL. |
-| [isUpdate](data-flow-expression-functions.md#isUpdate) | Checks if the row is marked for update. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
-| [isUpsert](data-flow-expression-functions.md#isUpsert) | Checks if the row is marked for insert. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
-| [jaroWinkler](data-flow-expression-functions.md#jaroWinkler) | Gets the JaroWinkler distance between two strings. |
-| [lastDayOfMonth](data-flow-expression-functions.md#lastDayOfMonth) | Gets the last date of the month given a date. |
-| [least](data-flow-expression-functions.md#least) | Comparison lesser than or equal operator. Same as <= operator. |
-| [left](data-flow-expression-functions.md#left) | Extracts a substring start at index 1 with number of characters. Same as SUBSTRING(str, 1, n). |
-| [length](data-flow-expression-functions.md#length) | Returns the length of the string. |
-| [lesser](data-flow-expression-functions.md#lesser) | Comparison less operator. Same as < operator. |
-| [lesserOrEqual](data-flow-expression-functions.md#lesserOrEqual) | Comparison lesser than or equal operator. Same as <= operator. |
-| [levenshtein](data-flow-expression-functions.md#levenshtein) | Gets the levenshtein distance between two strings. |
-| [like](data-flow-expression-functions.md#like) | The pattern is a string that is matched literally. The exceptions are the following special symbols: _ matches any one character in the input (similar to . in ```posix``` regular expressions)|
-| [locate](data-flow-expression-functions.md#locate) | Finds the position(1 based) of the substring within a string starting a certain position. If the position is omitted it is considered from the beginning of the string. 0 is returned if not found. |
-| [log](data-flow-expression-functions.md#log) | Calculates log value. An optional base can be supplied else a Euler number if used. |
-| [log10](data-flow-expression-functions.md#log10) | Calculates log value based on 10 base. |
-| [lower](data-flow-expression-functions.md#lower) | Lowercases a string. |
-| [lpad](data-flow-expression-functions.md#lpad) | Left pads the string by the supplied padding until it is of a certain length. If the string is equal to or greater than the length, then it is trimmed to the length. |
-| [ltrim](data-flow-expression-functions.md#ltrim) | Left trims a string of leading characters. If second parameter is unspecified, it trims whitespace. Else it trims any character specified in the second parameter. |
-| [md5](data-flow-expression-functions.md#md5) | Calculates the MD5 digest of set of column of varying primitive datatypes and returns a 32 character hex string. It can be used to calculate a fingerprint for a row. |
-| [millisecond](data-flow-expression-functions.md#millisecond) | Gets the millisecond value of a date. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
-| [milliseconds](data-flow-expression-functions.md#milliseconds) | Duration in milliseconds for number of milliseconds. |
-| [minus](data-flow-expression-functions.md#minus) | Subtracts numbers. Subtract number of days from a date. Subtract duration from a timestamp. Subtract two timestamps to get difference in milliseconds. Same as the - operator. |
-| [minute](data-flow-expression-functions.md#minute) | Gets the minute value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
-| [minutes](data-flow-expression-functions.md#minutes) | Duration in milliseconds for number of minutes. |
-| [mod](data-flow-expression-functions.md#mod) | Modulus of pair of numbers. Same as the % operator. |
-| [month](data-flow-expression-functions.md#month) | Gets the month value of a date or timestamp. |
-| [monthsBetween](data-flow-expression-functions.md#monthsBetween) | Gets the number of months between two dates. You can round off the calculation.You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html. |
-| [multiply](data-flow-expression-functions.md#multiply) | Multiplies pair of numbers. Same as the * operator. |
-| [negate](data-flow-expression-functions.md#negate) | Negates a number. Turns positive numbers to negative and vice versa. |
-| [nextSequence](data-flow-expression-functions.md#nextSequence) | Returns the next unique sequence. The number is consecutive only within a partition and is prefixed by the partitionId. |
-| [normalize](data-flow-expression-functions.md#normalize) | Normalizes the string value to separate accented unicode characters. |
-| [not](data-flow-expression-functions.md#not) | Logical negation operator. |
-| [notEquals](data-flow-expression-functions.md#notEquals) | Comparison not equals operator. Same as != operator. |
-| [notNull](data-flow-expression-functions.md#notNull) | Checks if the value is not NULL. |
-| [null](data-flow-expression-functions.md#null) | Returns a NULL value. Use the function `syntax(null())` if there is a column named 'null'. Any operation that uses will result in a NULL. |
-| [or](data-flow-expression-functions.md#or) | Logical OR operator. Same as \|\|. |
-| [pMod](data-flow-expression-functions.md#pMod) | Positive Modulus of pair of numbers. |
-| [partitionId](data-flow-expression-functions.md#partitionId) | Returns the current partition ID the input row is in. |
-| [power](data-flow-expression-functions.md#power) | Raises one number to the power of another. |
-| [radians](data-flow-expression-functions.md#radians) | Converts degrees to radians|
-| [random](data-flow-expression-functions.md#random) | Returns a random number given an optional seed within a partition. The seed should be a fixed value and is used in conjunction with the partitionId to produce random values |
-| [regexExtract](data-flow-expression-functions.md#regexExtract) | Extract a matching substring for a given regex pattern. The last parameter identifies the match group and is defaulted to 1 if omitted. Use `<regex>`(back quote) to match a string without escaping. |
-| [regexMatch](data-flow-expression-functions.md#regexMatch) | Checks if the string matches the given regex pattern. Use `<regex>`(back quote) to match a string without escaping. |
-| [regexReplace](data-flow-expression-functions.md#regexReplace) | Replace all occurrences of a regex pattern with another substring in the given string Use `<regex>`(back quote) to match a string without escaping. |
-| [regexSplit](data-flow-expression-functions.md#regexSplit) | Splits a string based on a delimiter based on regex and returns an array of strings. |
-| [replace](data-flow-expression-functions.md#replace) | Replace all occurrences of a substring with another substring in the given string. If the last parameter is omitted, it is default to empty string. |
-| [reverse](data-flow-expression-functions.md#reverse) | Reverses a string. |
-| [right](data-flow-expression-functions.md#right) | Extracts a substring with number of characters from the right. Same as SUBSTRING(str, LENGTH(str) - n, n). |
-| [rlike](data-flow-expression-functions.md#rlike) | Checks if the string matches the given regex pattern. |
-| [round](data-flow-expression-functions.md#round) | Rounds a number given an optional scale and an optional rounding mode. If the scale is omitted, it is defaulted to 0. If the mode is omitted, it is defaulted to ROUND_HALF_UP(5). The values for rounding include|
-| [rpad](data-flow-expression-functions.md#rpad) | Right pads the string by the supplied padding until it is of a certain length. If the string is equal to or greater than the length, then it is trimmed to the length. |
-| [rtrim](data-flow-expression-functions.md#rtrim) | Right trims a string of trailing characters. If second parameter is unspecified, it trims whitespace. Else it trims any character specified in the second parameter. |
-| [second](data-flow-expression-functions.md#second) | Gets the second value of a date. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's [`SimpleDateFormat`](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html) class for available formats. |
-| [seconds](data-flow-expression-functions.md#seconds) | Duration in milliseconds for number of seconds. |
-| [sha1](data-flow-expression-functions.md#sha1) | Calculates the SHA-1 digest of a set of columns of varying primitive datatypes and returns a 40 character hex string. It can be used to calculate a fingerprint for a row. |
-| [sha2](data-flow-expression-functions.md#sha2) | Calculates the SHA-2 digest of a set of columns of varying primitive datatypes given a bit length, which can only be of values 0(256), 224, 256, 384, 512. It can be used to calculate a fingerprint for a row. |
-| [sin](data-flow-expression-functions.md#sin) | Calculates a sine value. |
-| [sinh](data-flow-expression-functions.md#sinh) | Calculates a hyperbolic sine value. |
-| [soundex](data-flow-expression-functions.md#soundex) | Gets the ```soundex``` code for the string. |
-| [split](data-flow-expression-functions.md#split) | Splits a string based on a delimiter and returns an array of strings. |
-| [sqrt](data-flow-expression-functions.md#sqrt) | Calculates the square root of a number. |
-| [startsWith](data-flow-expression-functions.md#startsWith) | Checks if the string starts with the supplied string. |
-| [subDays](data-flow-expression-functions.md#subDays) | Subtract days from a date or timestamp. Same as the - operator for date. |
-| [subMonths](data-flow-expression-functions.md#subMonths) | Subtract months from a date or timestamp. |
-| [substring](data-flow-expression-functions.md#substring) | Extracts a substring of a certain length from a position. Position is 1 based. If the length is omitted, it is defaulted to end of the string. |
-| [tan](data-flow-expression-functions.md#tan) | Calculates a tangent value. |
-| [tanh](data-flow-expression-functions.md#tanh) | Calculates a hyperbolic tangent value. |
-| [translate](data-flow-expression-functions.md#translate) | Replace one set of characters by another set of characters in the string. Characters have 1 to 1 replacement. |
-| [trim](data-flow-expression-functions.md#trim) | Trims a string of leading and trailing characters. If second parameter is unspecified, it trims whitespace. Else it trims any character specified in the second parameter. |
-| [true](data-flow-expression-functions.md#true) | Always returns a true value. Use the function `syntax(true())` if there is a column named 'true'. |
-| [typeMatch](data-flow-expression-functions.md#typeMatch) | Matches the type of the column. Can only be used in pattern expressions. number matches short, integer, long, double, float, or decimal; integral matches short, integer, long; fractional matches double, float, decimal; and datetime matches date or timestamp type. |
-| [unescape](data-flow-expression-functions.md#unescape) | Unescapes a string according to a format. Literal values for acceptable format are 'json', 'xml', 'ecmascript', 'html', 'java'.|
-| [upper](data-flow-expression-functions.md#upper) | Uppercases a string. |
-| [uuid](data-flow-expression-functions.md#uuid) | Returns the generated UUID. |
-| [weekOfYear](data-flow-expression-functions.md#weekOfYear) | Gets the week of the year given a date. |
-| [weeks](data-flow-expression-functions.md#weeks) | Duration in milliseconds for number of weeks. |
-| [xor](data-flow-expression-functions.md#xor) | Logical XOR operator. Same as ^ operator. |
-| [year](data-flow-expression-functions.md#year) | Gets the year value of a date. |
-|||
-
-## Aggregate functions
-The following functions are only available in aggregate, pivot, unpivot, and window transformations.
-
-| Aggregate function | Task |
-|-|-|
-| [approxDistinctCount](data-flow-expression-functions.md#approxDistinctCount) | Gets the approximate aggregate count of distinct values for a column. The optional second parameter is to control the estimation error.|
-| [avg](data-flow-expression-functions.md#avg) | Gets the average of values of a column. |
-| [avgIf](data-flow-expression-functions.md#avgIf) | Based on a criteria gets the average of values of a column. |
-| [collect](data-flow-expression-functions.md#collect) | Collects all values of the expression in the aggregated group into an array. Structures can be collected and transformed to alternate structures during this process. The number of items will be equal to the number of rows in that group and can contain null values. The number of collected items should be small. |
-| [count](data-flow-expression-functions.md#count) | Gets the aggregate count of values. If the optional column(s) is specified, it ignores NULL values in the count. |
-| [countAll](data-flow-expression-functions.md#countAll) | Gets the aggregate count of values including NULLs. |
-| [countDistinct](data-flow-expression-functions.md#countDistinct) | Gets the aggregate count of distinct values of a set of columns. |
-| [countAllDistinct](data-flow-expression-functions.md#countAllDistinct) | Gets the aggregate count of distinct values of a set of columns including NULLs. |
-| [countIf](data-flow-expression-functions.md#countIf) | Based on a criteria gets the aggregate count of values. If the optional column is specified, it ignores NULL values in the count. |
-| [covariancePopulation](data-flow-expression-functions.md#covariancePopulation) | Gets the population covariance between two columns. |
-| [covariancePopulationIf](data-flow-expression-functions.md#covariancePopulationIf) | Based on a criteria, gets the population covariance of two columns. |
-| [covarianceSample](data-flow-expression-functions.md#covarianceSample) | Gets the sample covariance of two columns. |
-| [covarianceSampleIf](data-flow-expression-functions.md#covarianceSampleIf) | Based on a criteria, gets the sample covariance of two columns. |
-| [first](data-flow-expression-functions.md#first) | Gets the first value of a column group. If the second parameter ignoreNulls is omitted, it is assumed false. |
-| [isDistinct](data-flow-expression-functions.md#isDistinct) | Finds if a column or set of columns is distinct. It does not count null as a distinct value.|
-| [kurtosis](data-flow-expression-functions.md#kurtosis) | Gets the kurtosis of a column. |
-| [kurtosisIf](data-flow-expression-functions.md#kurtosisIf) | Based on a criteria, gets the kurtosis of a column. |
-| [last](data-flow-expression-functions.md#last) | Gets the last value of a column group. If the second parameter ignoreNulls is omitted, it is assumed false. |
-| [max](data-flow-expression-functions.md#max) | Gets the maximum value of a column. |
-| [maxIf](data-flow-expression-functions.md#maxIf) | Based on a criteria, gets the maximum value of a column. |
-| [mean](data-flow-expression-functions.md#mean) | Gets the mean of values of a column. Same as AVG. |
-| [meanIf](data-flow-expression-functions.md#meanIf) | Based on a criteria gets the mean of values of a column. Same as avgIf. |
-| [min](data-flow-expression-functions.md#min) | Gets the minimum value of a column. |
-| [minIf](data-flow-expression-functions.md#minIf) | Based on a criteria, gets the minimum value of a column. |
-| [skewness](data-flow-expression-functions.md#skewness) | Gets the skewness of a column. |
-| [skewnessIf](data-flow-expression-functions.md#skewnessIf) | Based on a criteria, gets the skewness of a column. |
-| [stddev](data-flow-expression-functions.md#stddev) | Gets the standard deviation of a column. |
-| [stddevIf](data-flow-expression-functions.md#stddevIf) | Based on a criteria, gets the standard deviation of a column. |
-| [stddevPopulation](data-flow-expression-functions.md#stddevPopulation) | Gets the population standard deviation of a column. |
-| [stddevPopulationIf](data-flow-expression-functions.md#stddevPopulationIf) | Based on a criteria, gets the population standard deviation of a column. |
-| [stddevSample](data-flow-expression-functions.md#stddevSample) | Gets the sample standard deviation of a column. |
-| [stddevSampleIf](data-flow-expression-functions.md#stddevSampleIf) | Based on a criteria, gets the sample standard deviation of a column. |
-| [sum](data-flow-expression-functions.md#sum) | Gets the aggregate sum of a numeric column. |
-| [sumDistinct](data-flow-expression-functions.md#sumDistinct) | Gets the aggregate sum of distinct values of a numeric column. |
-| [sumDistinctIf](data-flow-expression-functions.md#sumDistinctIf) | Based on criteria gets the aggregate sum of a numeric column. The condition can be based on any column. |
-| [sumIf](data-flow-expression-functions.md#sumIf) | Based on criteria gets the aggregate sum of a numeric column. The condition can be based on any column. |
-| [variance](data-flow-expression-functions.md#variance) | Gets the variance of a column. |
-| [varianceIf](data-flow-expression-functions.md#varianceIf) | Based on a criteria, gets the variance of a column. |
-| [variancePopulation](data-flow-expression-functions.md#variancePopulation) | Gets the population variance of a column. |
-| [variancePopulationIf](data-flow-expression-functions.md#variancePopulationIf) | Based on a criteria, gets the population variance of a column. |
-| [varianceSample](data-flow-expression-functions.md#varianceSample) | Gets the unbiased variance of a column. |
-| [varianceSampleIf](data-flow-expression-functions.md#varianceSampleIf) | Based on a criteria, gets the unbiased variance of a column. |
-|||
-
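The table above distinguishes population variants (`variancePopulation`, `stddevPopulation`) from sample variants (`varianceSample`, `stddevSample`). As a rough Python sketch of that standard statistical distinction only (not the data flow expression language or ADF's implementation):

```python
import math

values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(values)
mean = sum(values) / n

# Population variance divides by n; sample variance divides by n - 1
# (Bessel's correction for estimating from a sample).
variance_population = sum((v - mean) ** 2 for v in values) / n
variance_sample = sum((v - mean) ** 2 for v in values) / (n - 1)

stddev_population = math.sqrt(variance_population)  # 2.0 for this data set
```

The same divisor difference is what separates each `...Population` function from its `...Sample` counterpart.
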
-## Array functions
-Array functions perform transformations on data structures that are arrays. These include special keywords to address array elements and indexes:
-
-* ```#acc``` represents a value that you wish to include in your single output when reducing an array
-* ```#index``` represents the current array index, along with array index numbers ```#index2, #index3 ...```
-* ```#item``` represents the current element value in the array
-
-| Array function | Task |
-|-|-|
-| [array](data-flow-expression-functions.md#array) | Creates an array of items. All items should be of the same type. If no items are specified, an empty string array is the default. Same as a [] creation operator. |
-| [at](data-flow-expression-functions.md#at) | Finds the element at an array index. The index is 1-based. Out of bounds index results in a null value. Finds a value in a map given a key. If the key is not found it returns null.|
-| [contains](data-flow-expression-functions.md#contains) | Returns true if any element in the provided array evaluates as true in the provided predicate. Contains expects a reference to one element in the predicate function as #item. |
-| [distinct](data-flow-expression-functions.md#distinct) | Returns a distinct set of items from an array.|
-| [except](data-flow-expression-functions.md#except) | Returns a difference set of one array from another dropping duplicates.|
-| [filter](data-flow-expression-functions.md#filter) | Filters elements out of the array that do not meet the provided predicate. Filter expects a reference to one element in the predicate function as #item. |
-| [find](data-flow-expression-functions.md#find) | Finds the first item from an array that matches the condition. It takes a filter function where you can address the item in the array as #item. For deeply nested maps you can refer to the parent maps using the #item_n(#item_1, #item_2...) notation. |
-| [flatten](data-flow-expression-functions.md#flatten) | Flattens array or arrays into a single array. Arrays of atomic items are returned unaltered. The last argument is optional and is defaulted to false to flatten recursively more than one level deep.|
-| [in](data-flow-expression-functions.md#in) | Checks if an item is in the array. |
-| [intersect](data-flow-expression-functions.md#intersect) | Returns an intersection set of distinct items from 2 arrays.|
-| [map](data-flow-expression-functions.md#map) | Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item. |
-| [mapIf](data-flow-expression-functions.md#mapIf) | Conditionally maps an array to another array of same or smaller length. The values can be of any datatype including structTypes. It takes a mapping function where you can address the item in the array as #item and the current index as #index. For deeply nested maps you can refer to the parent maps using the ``#item_n(#item_1, #index_1...)`` notation.|
-| [mapIndex](data-flow-expression-functions.md#mapIndex) | Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item and a reference to the element index as #index. |
-| [mapLoop](data-flow-expression-functions.md#mapLoop) | Loops through from 1 to length to create an array of that length. It takes a mapping function where you can address the index in the array as #index. For deeply nested maps you can refer to the parent maps using the #index_n(#index_1, #index_2...) notation.|
-| [reduce](data-flow-expression-functions.md#reduce) | Accumulates elements in an array. Reduce expects a reference to an accumulator and one element in the first expression function as #acc and #item and it expects the resulting value as #result to be used in the second expression function. |
-| [size](data-flow-expression-functions.md#size) | Finds the size of an array or map type |
-| [slice](data-flow-expression-functions.md#slice) | Extracts a subset of an array from a position. Position is 1 based. If the length is omitted, it is defaulted to end of the string. |
-| [sort](data-flow-expression-functions.md#sort) | Sorts the array using the provided predicate function. Sort expects a reference to two consecutive elements in the expression function as #item1 and #item2. |
-| [unfold](data-flow-expression-functions.md#unfold) | Unfolds an array into a set of rows and repeats the values for the remaining columns in every row.|
-| [union](data-flow-expression-functions.md#union) | Returns a union set of distinct items from 2 arrays.|
-|||
-
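The `#acc`, `#index`, and `#item` keywords above mirror familiar functional-programming patterns. As a rough Python analogy only (this is not the data flow expression language), `filter`, `map`, and `reduce` behave like:

```python
from functools import reduce

items = [3, 1, 4, 1, 5]

# filter(items, #item > 2) -> keep elements where the predicate on #item is true
filtered = [item for item in items if item > 2]

# map(items, #item * 10) -> transform each #item with the expression
mapped = [item * 10 for item in items]

# reduce folds each #item into the accumulator #acc, starting from an initial value
total = reduce(lambda acc, item: acc + item, items, 0)
```

Here `item` plays the role of `#item` and `acc` the role of `#acc`; in the expression language the element and accumulator are addressed by keyword rather than by a lambda parameter.
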
-## Cached lookup functions
-The following functions are only available when you've included a cached sink in your data flow and reference it through a cached lookup.
-
-| Cached lookup function | Task |
-|-|-|
-| [lookup](data-flow-expression-functions.md#lookup) | Looks up the first row from the cached sink using the specified keys that match the keys from the cached sink.|
-| [mlookup](data-flow-expression-functions.md#mlookup) | Looks up all matching rows from the cached sink using the specified keys that match the keys from the cached sink.|
-| [output](data-flow-expression-functions.md#output) | Returns the first row of the results of the cache sink.|
-| [outputs](data-flow-expression-functions.md#outputs) | Returns the entire output row set of the results of the cache sink.|
-|||
-
-## Conversion functions
-
-Conversion functions are used to convert data and to test for data types.
-
-| Conversion function | Task |
-|-|-|
-| [isBitSet](data-flow-expression-functions.md#isBitSet) | Checks if a bit position is set in this bitset|
-| [setBitSet](data-flow-expression-functions.md#setBitSet) | Sets bit positions in this bitset|
-| [isBoolean](data-flow-expression-functions.md#isBoolean) | Checks if the string value is a boolean value according to the rules of ``toBoolean()``|
-| [isByte](data-flow-expression-functions.md#isByte) | Checks if the string value is a byte value given an optional format according to the rules of ``toByte()``|
-| [isDate](data-flow-expression-functions.md#isDate) | Checks if the input date string is a date using an optional input date format. Refer to Java's SimpleDateFormat for available formats. If the input date format is omitted, the default format is ``yyyy-[M]M-[d]d``. Accepted formats are ``[ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ]``|
-| [isShort](data-flow-expression-functions.md#isShort) | Checks if the string value is a short value given an optional format according to the rules of ``toShort()``|
-| [isInteger](data-flow-expression-functions.md#isInteger) | Checks if the string value is an integer value given an optional format according to the rules of ``toInteger()``|
-| [isLong](data-flow-expression-functions.md#isLong) | Checks if the string value is a long value given an optional format according to the rules of ``toLong()``|
-| [isNan](data-flow-expression-functions.md#isNan) | Checks if the value is not a number.|
-| [isFloat](data-flow-expression-functions.md#isFloat) | Checks if the string value is a float value given an optional format according to the rules of ``toFloat()``|
-| [isDouble](data-flow-expression-functions.md#isDouble) | Checks if the string value is a double value given an optional format according to the rules of ``toDouble()``|
-| [isDecimal](data-flow-expression-functions.md#isDecimal) | Checks if the string value is a decimal value given an optional format according to the rules of ``toDecimal()``|
-| [isTimestamp](data-flow-expression-functions.md#isTimestamp) | Checks if the input date string is a timestamp using an optional input timestamp format. Refer to Java's SimpleDateFormat for available formats. If the timestamp is omitted the default pattern ``yyyy-[M]M-[d]d hh:mm:ss[.f...]`` is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamp supports up to millisecond accuracy with a value of 999.|
-| [toBase64](data-flow-expression-functions.md#toBase64) | Encodes the given string in base64. |
-| [toBinary](data-flow-expression-functions.md#toBinary) | Converts any numeric/date/timestamp/string to binary representation. |
-| [toBoolean](data-flow-expression-functions.md#toBoolean) | Converts a value of ('t', 'true', 'y', 'yes', '1') to true and ('f', 'false', 'n', 'no', '0') to false and NULL for any other value. |
-| [toByte](data-flow-expression-functions.md#toByte) | Converts any numeric or string to a byte value. An optional Java decimal format can be used for the conversion. |
-| [toDate](data-flow-expression-functions.md#toDate) | Converts input date string to date using an optional input date format. Refer Java's `SimpleDateFormat` class for available formats. If the input date format is omitted, default format is yyyy-[M]M-[d]d. Accepted formats are :[ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ]. |
-| [toDecimal](data-flow-expression-functions.md#toDecimal) | Converts any numeric or string to a decimal value. If precision and scale are not specified, it is defaulted to (10,2). An optional Java decimal format can be used for the conversion. An optional locale format in the form of BCP47 language like en-US, de, zh-CN. |
-| [toDouble](data-flow-expression-functions.md#toDouble) | Converts any numeric or string to a double value. An optional Java decimal format can be used for the conversion. An optional locale format in the form of BCP47 language like en-US, de, zh-CN. |
-| [toFloat](data-flow-expression-functions.md#toFloat) | Converts any numeric or string to a float value. An optional Java decimal format can be used for the conversion. Truncates any double. |
-| [toInteger](data-flow-expression-functions.md#toInteger) | Converts any numeric or string to an integer value. An optional Java decimal format can be used for the conversion. Truncates any long, float, double. |
-| [toLong](data-flow-expression-functions.md#toLong) | Converts any numeric or string to a long value. An optional Java decimal format can be used for the conversion. Truncates any float, double. |
-| [toShort](data-flow-expression-functions.md#toShort) | Converts any numeric or string to a short value. An optional Java decimal format can be used for the conversion. Truncates any integer, long, float, double. |
-| [toString](data-flow-expression-functions.md#toString) | Converts a primitive datatype to a string. For numbers and date a format can be specified. If unspecified the system default is picked. Java decimal format is used for numbers. Refer to Java SimpleDateFormat for all possible date formats; the default format is yyyy-MM-dd. |
-| [toTimestamp](data-flow-expression-functions.md#toTimestamp) | Converts a string to a timestamp given an optional timestamp format. If the timestamp is omitted the default pattern yyyy-[M]M-[d]d hh:mm:ss[.f...] is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamp supports up to millisecond accuracy with a value of 999. Refer to Java's [`SimpleDateFormat`](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html) class for available formats. |
-| [toUTC](data-flow-expression-functions.md#toUTC) | Converts the timestamp to UTC. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It is defaulted to the current timezone. Refer to Java's [`SimpleDateFormat`](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html) class for available formats. |
-|||
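
The `toBoolean` mapping in the table above can be sketched outside the expression language. This is a hedged Python analogy of the documented value sets only, not ADF's implementation; treating the comparison as case-insensitive and trimming whitespace are assumptions not stated in the table:

```python
# Documented mapping: 't', 'true', 'y', 'yes', '1' -> true;
# 'f', 'false', 'n', 'no', '0' -> false; anything else -> NULL.
TRUE_VALUES = {"t", "true", "y", "yes", "1"}
FALSE_VALUES = {"f", "false", "n", "no", "0"}

def to_boolean(value: str):
    v = value.strip().lower()  # assumption: case-insensitive, whitespace-trimmed
    if v in TRUE_VALUES:
        return True
    if v in FALSE_VALUES:
        return False
    return None  # NULL for any other value

print(to_boolean("yes"), to_boolean("0"), to_boolean("maybe"))
```

The corresponding `isBoolean` check in the table succeeds exactly when a conversion like this would not produce NULL.
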
-
-## Map functions
-
- Map functions perform operations on map data types.
-
-| Map function | Task |
-|-|-|
-| [associate](data-flow-expression-functions.md#associate) | Creates a map of key/values. All the keys & values should be of the same type. If no items are specified, it is defaulted to a map of string to string type. Same as a ```[ -> ]``` creation operator. Keys and values should alternate with each other.|
-| [keyValues](data-flow-expression-functions.md#keyValues) | Creates a map of key/values. The first parameter is an array of keys and second is the array of values. Both arrays should have equal length.|
-| [mapAssociation](data-flow-expression-functions.md#mapAssociation) | Transforms a map by associating the keys to new values. Returns an array. It takes a mapping function where you can address the item as #key and current value as #value. |
-| [reassociate](data-flow-expression-functions.md#reassociate) | Transforms a map by associating the keys to new values. It takes a mapping function where you can address the item as #key and current value as #value. |
-|||
-
-## Metafunctions
-
-Metafunctions primarily operate on metadata in your data flow.
-
-| Metafunction | Task |
-|-|-|
-| [byItem](data-flow-expression-functions.md#byItem) | Finds a subitem within a structure or array of structures. If there are multiple matches, the first match is returned. If there is no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion actions(? date, ? string ...). Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions. |
-| [byOrigin](data-flow-expression-functions.md#byOrigin) | Selects a column value by name in the origin stream. The second argument is the origin stream name. If there are multiple matches, the first match is returned. If no match it returns a NULL value. The returned value has to be type converted by one of the type conversion functions(TO_DATE, TO_STRING ...). Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions. |
-| [byOrigins](data-flow-expression-functions.md#byOrigins) | Selects an array of columns by name in the stream. The second argument is the stream where it originated from. If there are multiple matches, the first match is returned. If no match it returns a NULL value. The returned value has to be type converted by one of the type conversion functions(TO_DATE, TO_STRING ...) Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions.|
-| [byName](data-flow-expression-functions.md#byName) | Selects a column value by name in the stream. You can pass an optional stream name as the second argument. If there are multiple matches, the first match is returned. If no match it returns a NULL value. The returned value has to be type converted by one of the type conversion functions(TO_DATE, TO_STRING ...). Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions. |
-| [byNames](data-flow-expression-functions.md#byNames) | Selects an array of columns by name in the stream. You can pass an optional stream name as the second argument. If there are multiple matches, the first match is returned. If there are no matches for a column, the entire output is a NULL value. The returned value requires one of the type conversion functions (toDate, toString, ...). Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions.|
-| [byPath](data-flow-expression-functions.md#byPath) | Finds a hierarchical path by name in the stream. You can pass an optional stream name as the second argument. If no such path is found it returns null. Column names/paths known at design time should be addressed just by their name or dot notation path. Computed inputs are not supported but you can use parameter substitutions. |
-| [byPosition](data-flow-expression-functions.md#byPosition) | Selects a column value by its relative position(1 based) in the stream. If the position is out of bounds it returns a NULL value. The returned value has to be type converted by one of the type conversion functions(TO_DATE, TO_STRING ...) Computed inputs are not supported but you can use parameter substitutions. |
-| [hasPath](data-flow-expression-functions.md#hasPath) | Checks if a certain hierarchical path exists by name in the stream. You can pass an optional stream name as the second argument. Column names/paths known at design time should be addressed just by their name or dot notation path. Computed inputs are not supported but you can use parameter substitutions. |
-| [originColumns](data-flow-expression-functions.md#originColumns) | Gets all output columns for an origin stream where columns were created. Must be enclosed in another function.|
-| [hex](data-flow-expression-functions.md#hex) | Returns a hex string representation of a binary value.|
-| [unhex](data-flow-expression-functions.md#unhex) | Unhexes a binary value from its string representation. This can be used in conjunction with sha2, md5 to convert from string to binary representation.|
-|||
-
-## Window functions
-
-The following functions are only available in window transformations.
-
-| Windows function | Task |
-|-|-|
-| [cumeDist](data-flow-expression-functions.md#cumeDist) | The CumeDist function computes the position of a value relative to all values in the partition. The result is the number of rows preceding or equal to the current row in the ordering of the partition divided by the total number of rows in the window partition. Any tie values in the ordering will evaluate to the same position. |
-| [denseRank](data-flow-expression-functions.md#denseRank) | Computes the rank of a value in a group of values specified in a window's order by clause. The result is one plus the number of rows preceding or equal to the current row in the ordering of the partition. The values will not produce gaps in the sequence. Dense Rank works even when data is not sorted and looks for change in values. |
-| [lag](data-flow-expression-functions.md#lag) | Gets the value of the first parameter evaluated n rows before the current row. The second parameter is the number of rows to look back and the default value is 1. If there are not as many rows a value of null is returned unless a default value is specified. |
-| [lead](data-flow-expression-functions.md#lead) | Gets the value of the first parameter evaluated n rows after the current row. The second parameter is the number of rows to look forward and the default value is 1. If there are not as many rows a value of null is returned unless a default value is specified. |
-| [nTile](data-flow-expression-functions.md#nTile) | The ```NTile``` function divides the rows for each window partition into `n` buckets ranging from 1 to at most `n`. Bucket values will differ by at most 1. If the number of rows in the partition does not divide evenly into the number of buckets, then the remainder values are distributed one per bucket, starting with the first bucket. The ```NTile``` function is useful for the calculation of tertiles, quartiles, deciles, and other common summary statistics. The function calculates two variables during initialization: the size of a regular bucket and the number of buckets that will have one extra row added to them. Both variables are based on the size of the current partition. During the calculation process the function keeps track of the current row number, the current bucket number, and the row number at which the bucket will change (bucketThreshold). When the current row number reaches the bucket threshold, the bucket value is increased by one and the threshold is increased by the bucket size (plus one extra if the current bucket is padded). |
-| [rank](data-flow-expression-functions.md#rank) | Computes the rank of a value in a group of values specified in a window's order by clause. The result is one plus the number of rows preceding or equal to the current row in the ordering of the partition. The values will produce gaps in the sequence. Rank works even when data is not sorted and looks for change in values. |
-| [rowNumber](data-flow-expression-functions.md#rowNumber) | Assigns a sequential row numbering for rows in a window starting with 1. |
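
The `nTile` bucketing rule described in the table can be sketched outside the expression language. This is a rough Python analogy of the documented behavior within a single ordered partition, not ADF's implementation:

```python
def ntile(num_rows: int, n: int):
    """Assign each of num_rows ordered rows in a partition to one of n buckets.

    Regular bucket size is num_rows // n; the remainder rows are distributed
    one per bucket starting with the first, so bucket sizes differ by at most 1.
    """
    base, extra = divmod(num_rows, n)
    buckets = []
    for b in range(1, n + 1):
        size = base + (1 if b <= extra else 0)  # padded buckets come first
        buckets.extend([b] * size)
    return buckets

print(ntile(10, 4))  # -> [1, 1, 1, 2, 2, 2, 3, 3, 4, 4]
```

With 10 rows and 4 buckets, the first two buckets are padded to 3 rows each, matching the "remainder values are distributed one per bucket, starting with the first bucket" rule.
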
+| [abs](data-flow-expressions-usage.md#abs) | Absolute value of a number. |
+| [acos](data-flow-expressions-usage.md#acos) | Calculates a cosine inverse value. |
+| [add](data-flow-expressions-usage.md#add) | Adds a pair of strings or numbers. Adds a date to a number of days. Adds a duration to a timestamp. Appends one array of similar type to another. Same as the + operator. |
+| [and](data-flow-expressions-usage.md#and) | Logical AND operator. Same as &&. |
+| [asin](data-flow-expressions-usage.md#asin) | Calculates an inverse sine value. |
+| [assertErrorMessages](data-flow-expressions-usage.md#assertErrorMessages) | Returns map of all assert messages. |
+| [atan](data-flow-expressions-usage.md#atan) | Calculates an inverse tangent value. |
+| [atan2](data-flow-expressions-usage.md#atan2) | Returns the angle in radians between the positive x-axis of a plane and the point given by the coordinates. |
+| [between](data-flow-expressions-usage.md#between) | Checks if the first value is in between two other values inclusively. Numeric, string, and datetime values can be compared. |
+| [bitwiseAnd](data-flow-expressions-usage.md#bitwiseAnd) | Bitwise And operator across integral types. Same as & operator. |
+| [bitwiseOr](data-flow-expressions-usage.md#bitwiseOr) | Bitwise Or operator across integral types. Same as \| operator. |
+| [bitwiseXor](data-flow-expressions-usage.md#bitwiseXor) | Bitwise Xor operator across integral types. Same as ^ operator. |
+| [blake2b](data-flow-expressions-usage.md#blake2b) | Calculates the Blake2 digest of a set of columns of varying primitive datatypes given a bit length, which can only be a multiple of 8 between 8 and 512. It can be used to calculate a fingerprint for a row. |
+| [blake2bBinary](data-flow-expressions-usage.md#blake2bBinary) | Calculates the Blake2 digest of a set of columns of varying primitive datatypes given a bit length, which can only be a multiple of 8 between 8 and 512. It can be used to calculate a fingerprint for a row. |
+| [case](data-flow-expressions-usage.md#case) | Based on alternating conditions applies one value or the other. If the number of inputs is even, the other is defaulted to NULL for the last condition. |
+| [cbrt](data-flow-expressions-usage.md#cbrt) | Calculates the cube root of a number. |
+| [ceil](data-flow-expressions-usage.md#ceil) | Returns the smallest integer not smaller than the number. |
+| [coalesce](data-flow-expressions-usage.md#coalesce) | Returns the first not null value from a set of inputs. All inputs should be of the same type. |
+| [columnNames](data-flow-expressions-usage.md#columnNames) | Gets the names of all output columns for a stream. You can pass an optional stream name as the second argument. |
+| [columns](data-flow-expressions-usage.md#columns) | Gets the values of all output columns for a stream. You can pass an optional stream name as the second argument. |
+| [compare](data-flow-expressions-usage.md#compare) | Compares two values of the same type. Returns a negative integer if value1 < value2, 0 if value1 == value2, positive value if value1 > value2. |
+| [concat](data-flow-expressions-usage.md#concat) | Concatenates a variable number of strings together. Same as the + operator with strings. |
+| [concatWS](data-flow-expressions-usage.md#concatWS) | Concatenates a variable number of strings together with a separator. The first parameter is the separator. |
+| [cos](data-flow-expressions-usage.md#cos) | Calculates a cosine value. |
+| [cosh](data-flow-expressions-usage.md#cosh) | Calculates a hyperbolic cosine of a value. |
+| [crc32](data-flow-expressions-usage.md#crc32) | Calculates the CRC32 hash of a set of columns of varying primitive datatypes given a bit length. The bit length must be of values 0 (256), 224, 256, 384, or 512. It can be used to calculate a fingerprint for a row. |
+| [degrees](data-flow-expressions-usage.md#degrees) | Converts radians to degrees. |
+| [divide](data-flow-expressions-usage.md#divide) | Divides pair of numbers. Same as the `/` operator. |
+| [dropLeft](data-flow-expressions-usage.md#dropLeft) | Removes the specified number of characters from the left of the string. If the drop requested exceeds the length of the string, an empty string is returned.|
+| [dropRight](data-flow-expressions-usage.md#dropRight) | Removes the specified number of characters from the right of the string. If the drop requested exceeds the length of the string, an empty string is returned.|
+| [endsWith](data-flow-expressions-usage.md#endsWith) | Checks if the string ends with the supplied string. |
+| [equals](data-flow-expressions-usage.md#equals) | Comparison equals operator. Same as == operator. |
+| [equalsIgnoreCase](data-flow-expressions-usage.md#equalsIgnoreCase) | Comparison equals operator, ignoring case. Same as <=> operator. |
+| [escape](data-flow-expressions-usage.md#escape) | Escapes a string according to a format. Literal values for acceptable format are 'json', 'xml', 'ecmascript', 'html', 'java'.|
+| [expr](data-flow-expressions-usage.md#expr) | Results in an expression from a string. It is equivalent to writing the expression in a non-literal form and can be used to pass parameters as string representations.|
+| [factorial](data-flow-expressions-usage.md#factorial) | Calculates the factorial of a number. |
+| [false](data-flow-expressions-usage.md#false) | Always returns a false value. Use the function syntax `false()` if there's a column named 'false'. |
+| [floor](data-flow-expressions-usage.md#floor) | Returns the largest integer not greater than the number. |
+| [fromBase64](data-flow-expressions-usage.md#fromBase64) | Decodes the given base64-encoded string.|
+| [greater](data-flow-expressions-usage.md#greater) | Comparison greater operator. Same as > operator. |
+| [greaterOrEqual](data-flow-expressions-usage.md#greaterOrEqual) | Comparison greater than or equal operator. Same as >= operator. |
+| [greatest](data-flow-expressions-usage.md#greatest) | Returns the greatest value among the list of values as input skipping null values. Returns null if all inputs are null. |
+| [hasColumn](data-flow-expressions-usage.md#hasColumn) | Checks for a column value by name in the stream. You can pass an optional stream name as the second argument. Column names known at design time should be addressed just by their name. Computed inputs aren't supported but you can use parameter substitutions. |
+| [hasError](data-flow-expressions-usage.md#hasError) | Checks if the assert with provided ID is marked as error. |
+| [iif](data-flow-expressions-usage.md#iif) | Based on a condition applies one value or the other. If other is unspecified, it's considered NULL. Both the values must be compatible (numeric, string, ...). |
+| [iifNull](data-flow-expressions-usage.md#iifNull) | Checks if the first parameter is null. If not null, the first parameter is returned. If null, the second parameter is returned. If three parameters are specified, the behavior is the same as iif(isNull(value1), value2, value3) and the third parameter is returned if the first value isn't null. |
+| [initCap](data-flow-expressions-usage.md#initCap) | Converts the first letter of every word to uppercase. Words are identified as separated by whitespace. |
+| [instr](data-flow-expressions-usage.md#instr) | Finds the position (1-based) of the substring within a string. 0 is returned if not found. |
+| [isDelete](data-flow-expressions-usage.md#isDelete) | Checks if the row is marked for delete. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
+| [isError](data-flow-expressions-usage.md#isError) | Checks if the row is marked as error. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
+| [isIgnore](data-flow-expressions-usage.md#isIgnore) | Checks if the row is marked to be ignored. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
+| [isInsert](data-flow-expressions-usage.md#isInsert) | Checks if the row is marked for insert. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
+| [isMatch](data-flow-expressions-usage.md#isMatch) | Checks if the row is matched at lookup. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
+| [isNull](data-flow-expressions-usage.md#isNull) | Checks if the value is NULL. |
+| [isUpdate](data-flow-expressions-usage.md#isUpdate) | Checks if the row is marked for update. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
+| [isUpsert](data-flow-expressions-usage.md#isUpsert) | Checks if the row is marked for upsert. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1. |
+| [jaroWinkler](data-flow-expressions-usage.md#jaroWinkler) | Gets the JaroWinkler distance between two strings. |
+| [least](data-flow-expressions-usage.md#least) | Returns the smallest value among the list of values as input skipping null values. Returns null if all inputs are null. |
+| [left](data-flow-expressions-usage.md#left) | Extracts a substring starting at index 1 with the specified number of characters. Same as SUBSTRING(str, 1, n). |
+| [length](data-flow-expressions-usage.md#length) | Returns the length of the string. |
+| [lesser](data-flow-expressions-usage.md#lesser) | Comparison less operator. Same as < operator. |
+| [lesserOrEqual](data-flow-expressions-usage.md#lesserOrEqual) | Comparison lesser than or equal operator. Same as <= operator. |
+| [levenshtein](data-flow-expressions-usage.md#levenshtein) | Gets the levenshtein distance between two strings. |
+| [like](data-flow-expressions-usage.md#like) | The pattern is a string that is matched literally. The exceptions are the following special symbols: _ matches any one character in the input (similar to `.` in POSIX regular expressions).|
+| [locate](data-flow-expressions-usage.md#locate) | Finds the position (1-based) of the substring within a string starting at a certain position. If the position is omitted, it's considered from the beginning of the string. 0 is returned if not found. |
+| [log](data-flow-expressions-usage.md#log) | Calculates the log value. An optional base can be supplied; otherwise Euler's number is used. |
+| [log10](data-flow-expressions-usage.md#log10) | Calculates log value based on 10 base. |
+| [lower](data-flow-expressions-usage.md#lower) | Lowercases a string. |
+| [lpad](data-flow-expressions-usage.md#lpad) | Left pads the string by the supplied padding until it is of a certain length. If the string is equal to or greater than the length, then it's trimmed to the length. |
+| [ltrim](data-flow-expressions-usage.md#ltrim) | Left trims a string of leading characters. If second parameter is unspecified, it trims whitespace. Else it trims any character specified in the second parameter. |
+| [md5](data-flow-expressions-usage.md#md5) | Calculates the MD5 digest of a set of columns of varying primitive datatypes and returns a 32-character hex string. It can be used to calculate a fingerprint for a row. |
+| [minus](data-flow-expressions-usage.md#minus) | Subtracts numbers. Subtract number of days from a date. Subtract duration from a timestamp. Subtract two timestamps to get difference in milliseconds. Same as the - operator. |
+| [mod](data-flow-expressions-usage.md#mod) | Modulus of pair of numbers. Same as the % operator. |
+| [multiply](data-flow-expressions-usage.md#multiply) | Multiplies pair of numbers. Same as the * operator. |
+| [negate](data-flow-expressions-usage.md#negate) | Negates a number. Turns positive numbers to negative and vice versa. |
+| [nextSequence](data-flow-expressions-usage.md#nextSequence) | Returns the next unique sequence. The number is consecutive only within a partition and is prefixed by the partitionId. |
+| [normalize](data-flow-expressions-usage.md#normalize) | Normalizes the string value to separate accented Unicode characters. |
+| [not](data-flow-expressions-usage.md#not) | Logical negation operator. |
+| [notEquals](data-flow-expressions-usage.md#notEquals) | Comparison not equals operator. Same as != operator. |
+| [notNull](data-flow-expressions-usage.md#notNull) | Checks if the value isn't NULL. |
+| [null](data-flow-expressions-usage.md#null) | Returns a NULL value. Use the function syntax `null()` if there's a column named 'null'. Any operation that uses it will result in a NULL. |
+| [or](data-flow-expressions-usage.md#or) | Logical OR operator. Same as \|\|. |
+| [pMod](data-flow-expressions-usage.md#pMod) | Positive Modulus of pair of numbers. |
+| [partitionId](data-flow-expressions-usage.md#partitionId) | Returns the current partition ID the input row is in. |
+| [power](data-flow-expressions-usage.md#power) | Raises one number to the power of another. |
+| [radians](data-flow-expressions-usage.md#radians) | Converts degrees to radians. |
+| [random](data-flow-expressions-usage.md#random) | Returns a random number given an optional seed within a partition. The seed should be a fixed value and is used with the partitionId to produce random values. |
+| [regexExtract](data-flow-expressions-usage.md#regexExtract) | Extract a matching substring for a given regex pattern. The last parameter identifies the match group and is defaulted to 1 if omitted. Use `<regex>`(back quote) to match a string without escaping. |
+| [regexMatch](data-flow-expressions-usage.md#regexMatch) | Checks if the string matches the given regex pattern. Use `<regex>`(back quote) to match a string without escaping. |
+| [regexReplace](data-flow-expressions-usage.md#regexReplace) | Replaces all occurrences of a regex pattern with another substring in the given string. Use `<regex>`(back quote) to match a string without escaping. |
+| [regexSplit](data-flow-expressions-usage.md#regexSplit) | Splits a string based on a delimiter based on regex and returns an array of strings. |
+| [replace](data-flow-expressions-usage.md#replace) | Replace all occurrences of a substring with another substring in the given string. If the last parameter is omitted, it's default to empty string. |
+| [reverse](data-flow-expressions-usage.md#reverse) | Reverses a string. |
+| [right](data-flow-expressions-usage.md#right) | Extracts a substring with number of characters from the right. Same as SUBSTRING(str, LENGTH(str) - n, n). |
+| [rlike](data-flow-expressions-usage.md#rlike) | Checks if the string matches the given regex pattern. |
+| [round](data-flow-expressions-usage.md#round) | Rounds a number given an optional scale and an optional rounding mode. If the scale is omitted, it's defaulted to 0. If the mode is omitted, it's defaulted to ROUND_HALF_UP(5). The values for rounding include|
+| [rpad](data-flow-expressions-usage.md#rpad) | Right pads the string by the supplied padding until it is of a certain length. If the string is equal to or greater than the length, then it's trimmed to the length. |
+| [rtrim](data-flow-expressions-usage.md#rtrim) | Right trims a string of trailing characters. If second parameter is unspecified, it trims whitespace. Else it trims any character specified in the second parameter. |
+| [sha1](data-flow-expressions-usage.md#sha1) | Calculates the SHA-1 digest of a set of columns of varying primitive datatypes and returns a 40-character hex string. It can be used to calculate a fingerprint for a row. |
+| [sha2](data-flow-expressions-usage.md#sha2) | Calculates the SHA-2 digest of a set of columns of varying primitive datatypes given a bit length, which can only be of values 0 (256), 224, 256, 384, or 512. It can be used to calculate a fingerprint for a row. |
+| [sin](data-flow-expressions-usage.md#sin) | Calculates a sine value. |
+| [sinh](data-flow-expressions-usage.md#sinh) | Calculates a hyperbolic sine value. |
+| [soundex](data-flow-expressions-usage.md#soundex) | Gets the ```soundex``` code for the string. |
+| [split](data-flow-expressions-usage.md#split) | Splits a string based on a delimiter and returns an array of strings. |
+| [sqrt](data-flow-expressions-usage.md#sqrt) | Calculates the square root of a number. |
+| [startsWith](data-flow-expressions-usage.md#startsWith) | Checks if the string starts with the supplied string. |
+| [substring](data-flow-expressions-usage.md#substring) | Extracts a substring of a certain length from a position. Position is 1 based. If the length is omitted, it's defaulted to end of the string. |
+| [tan](data-flow-expressions-usage.md#tan) | Calculates a tangent value. |
+| [tanh](data-flow-expressions-usage.md#tanh) | Calculates a hyperbolic tangent value. |
+| [translate](data-flow-expressions-usage.md#translate) | Replace one set of characters by another set of characters in the string. Characters have 1 to 1 replacement. |
+| [trim](data-flow-expressions-usage.md#trim) | Trims a string of leading and trailing characters. If second parameter is unspecified, it trims whitespace. Else it trims any character specified in the second parameter. |
+| [true](data-flow-expressions-usage.md#true) | Always returns a true value. Use the function syntax `true()` if there's a column named 'true'. |
+| [typeMatch](data-flow-expressions-usage.md#typeMatch) | Matches the type of the column. Can only be used in pattern expressions. `number` matches short, integer, long, double, float, or decimal; `integral` matches short, integer, or long; `fractional` matches double, float, or decimal; and `datetime` matches date or timestamp type. |
+| [unescape](data-flow-expressions-usage.md#unescape) | Unescapes a string according to a format. Literal values for acceptable format are 'json', 'xml', 'ecmascript', 'html', 'java'.|
+| [upper](data-flow-expressions-usage.md#upper) | Uppercases a string. |
+| [uuid](data-flow-expressions-usage.md#uuid) | Returns the generated UUID. |
+| [xor](data-flow-expressions-usage.md#xor) | Logical XOR operator. Same as ^ operator. |
|||
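The `lpad`/`rpad` entries above note a behavior that is easy to miss: when the input is already at or beyond the target length, the string is trimmed rather than returned unchanged. A minimal Python sketch of that described behavior (an illustration, not the engine's implementation):

```python
def lpad(s, length, pad):
    """Left-pad with the repeated pad string up to `length`;
    trim to `length` if the input is already long enough."""
    if len(s) >= length:
        return s[:length]
    need = length - len(s)
    return (pad * need)[:need] + s

def rpad(s, length, pad):
    """Right-pad counterpart of lpad: append the repeated pad string."""
    if len(s) >= length:
        return s[:length]
    need = length - len(s)
    return s + (pad * need)[:need]

print(lpad("dumbo", 10, "-"))   # -----dumbo
print(lpad("dumbo", 4, "-"))    # dumb  (trimmed, not padded)
print(rpad("dumbo", 10, "op"))  # dumboopopo
```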
-## Alphabetical listing of all functions
-
-Following is an alphabetical listing of all functions available in mapping data flows.
-
-<a name="abs" ></a>
-
-### <code>abs</code>
-<code><b>abs(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
-Absolute value of a number.
-* ``abs(-20) -> 20``
-* ``abs(10) -> 10``
-___
--
-<a name="acos" ></a>
-
-### <code>acos</code>
-<code><b>acos(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates a cosine inverse value.
-* ``acos(1) -> 0.0``
-___
--
-<a name="add" ></a>
-
-### <code>add</code>
-<code><b>add(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
-Adds a pair of strings or numbers. Adds a date to a number of days. Adds a duration to a timestamp. Appends one array of similar type to another. Same as the + operator.
-* ``add(10, 20) -> 30``
-* ``10 + 20 -> 30``
-* ``add('ice', 'cream') -> 'icecream'``
-* ``'ice' + 'cream' + ' cone' -> 'icecream cone'``
-* ``add(toDate('2012-12-12'), 3) -> toDate('2012-12-15')``
-* ``toDate('2012-12-12') + 3 -> toDate('2012-12-15')``
-* ``[10, 20] + [30, 40] -> [10, 20, 30, 40]``
-* ``toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS') + (days(1) + hours(2) - seconds(10)) -> toTimestamp('2019-02-04 07:19:18.871', 'yyyy-MM-dd HH:mm:ss.SSS')``
-___
--
-<a name="addDays" ></a>
-
-### <code>addDays</code>
-<code><b>addDays(<i>&lt;date/timestamp&gt;</i> : datetime, <i>&lt;days to add&gt;</i> : integral) => datetime</b></code><br/><br/>
-Add days to a date or timestamp. Same as the + operator for date.
-* ``addDays(toDate('2016-08-08'), 1) -> toDate('2016-08-09')``
-___
--
-<a name="addMonths" ></a>
-
-### <code>addMonths</code>
-<code><b>addMonths(<i>&lt;date/timestamp&gt;</i> : datetime, <i>&lt;months to add&gt;</i> : integral, [<i>&lt;value3&gt;</i> : string]) => datetime</b></code><br/><br/>
-Add months to a date or timestamp. You can optionally pass a timezone.
-* ``addMonths(toDate('2016-08-31'), 1) -> toDate('2016-09-30')``
-* ``addMonths(toTimestamp('2016-09-30 10:10:10'), -1) -> toTimestamp('2016-08-31 10:10:10')``
-___
--
-<a name="and" ></a>
-
-### <code>and</code>
-<code><b>and(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : boolean) => boolean</b></code><br/><br/>
-Logical AND operator. Same as &&.
-* ``and(true, false) -> false``
-* ``true && false -> false``
-___
--
-<a name="approxDistinctCount" ></a>
-
-### <code>approxDistinctCount</code>
-<code><b>approxDistinctCount(<i>&lt;value1&gt;</i> : any, [ <i>&lt;value2&gt;</i> : double ]) => long</b></code><br/><br/>
-Gets the approximate aggregate count of distinct values for a column. The optional second parameter is to control the estimation error.
-* ``approxDistinctCount(ProductID, .05) => long``
-___
--
-<a name="array" ></a>
-
-### <code>array</code>
-<code><b>array([<i>&lt;value1&gt;</i> : any], ...) => array</b></code><br/><br/>
-Creates an array of items. All items should be of the same type. If no items are specified, an empty string array is the default. Same as a [] creation operator.
-* ``array('Seattle', 'Washington')``
-* ``['Seattle', 'Washington']``
-* ``['Seattle', 'Washington'][1]``
-* ``'Washington'``
-___
-
-<a name="assertErrorMessages" ></a>
-
-### <code>assertErrorMessages</code>
-<code><b>assertErrorMessages() => map</b></code><br/><br/>
-Returns a map of all error messages for the row with assert ID as the key.
-
-Examples
-* ``assertErrorMessages() => ['assert1': 'This row failed on assert1.', 'assert2': 'This row failed on assert2.']``. In this example, ``at(assertErrorMessages(), 'assert1')`` would return 'This row failed on assert1.'
-
-___
--
-<a name="asin" ></a>
-
-### <code>asin</code>
-<code><b>asin(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates an inverse sine value.
-* ``asin(0) -> 0.0``
-___
--
-<a name="associate" ></a>
-
-### <code>associate</code>
-<code><b>associate([<i>&lt;value1&gt;</i> : any], ...) => map</b></code><br/><br/>
-Creates a map of key/values. All the keys and values should be of the same type. If no items are specified, it's defaulted to a map of string to string type. Same as a ```[ -> ]``` creation operator. Keys and values should alternate with each other.
-* ``associate('fruit', 'apple', 'vegetable', 'carrot' )=> ['fruit' -> 'apple', 'vegetable' -> 'carrot']``
-___
--
-<a name="at" ></a>
-
-### <code>at</code>
-<code><b>at(<i>&lt;value1&gt;</i> : array/map, <i>&lt;value2&gt;</i> : integer/key type) => array</b></code><br/><br/>
-Finds the element at an array index. The index is 1-based. An out-of-bounds index results in a null value. Finds a value in a map given a key. If the key isn't found, it returns null.
-* ``at(['apples', 'pears'], 1) => 'apples'``
-* ``at(['fruit' -> 'apples', 'vegetable' -> 'carrot'], 'fruit') => 'apples'``
-___
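The 1-based, null-on-miss semantics of ``at`` described above can be mirrored in Python, with `None` standing in for NULL (an illustrative sketch, not ADF code):

```python
def at(container, key):
    """1-based list index or dict key lookup; None stands in for NULL."""
    if isinstance(container, dict):
        return container.get(key)          # missing key -> None
    if isinstance(key, int) and 1 <= key <= len(container):
        return container[key - 1]          # shift 1-based index to 0-based
    return None                            # out-of-bounds index -> None

print(at(["apples", "pears"], 1))                               # apples
print(at({"fruit": "apples", "vegetable": "carrot"}, "fruit"))  # apples
print(at(["apples", "pears"], 5))                               # None
```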
--
-<a name="atan" ></a>
-
-### <code>atan</code>
-<code><b>atan(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates an inverse tangent value.
-* ``atan(0) -> 0.0``
-___
--
-<a name="atan2" ></a>
-
-### <code>atan2</code>
-<code><b>atan2(<i>&lt;value1&gt;</i> : number, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Returns the angle in radians between the positive x-axis of a plane and the point given by the coordinates.
-* ``atan2(0, 0) -> 0.0``
-___
--
-<a name="avg" ></a>
-
-### <code>avg</code>
-<code><b>avg(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
-Gets the average of values of a column.
-* ``avg(sales)``
-___
--
-<a name="avgIf" ></a>
-
-### <code>avgIf</code>
-<code><b>avgIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => number</b></code><br/><br/>
-Based on a criterion, gets the average of values of a column.
-* ``avgIf(region == 'West', sales)``
-___
--
-<a name="between" ></a>
-
-### <code>between</code>
-<code><b>between(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any, <i>&lt;value3&gt;</i> : any) => boolean</b></code><br/><br/>
-Checks if the first value is in between two other values inclusively. Numeric, string and datetime values can be compared
-* ``between(10, 5, 24)``
-* ``true``
-* ``between(currentDate(), currentDate() + 10, currentDate() + 20)``
-* ``false``
-___
--
-<a name="bitwiseAnd" ></a>
-
-### <code>bitwiseAnd</code>
-<code><b>bitwiseAnd(<i>&lt;value1&gt;</i> : integral, <i>&lt;value2&gt;</i> : integral) => integral</b></code><br/><br/>
-Bitwise And operator across integral types. Same as & operator
-* ``bitwiseAnd(0xf4, 0xef)``
-* ``0xe4``
-* ``(0xf4 & 0xef)``
-* ``0xe4``
-___
--
-<a name="bitwiseOr" ></a>
-
-### <code>bitwiseOr</code>
-<code><b>bitwiseOr(<i>&lt;value1&gt;</i> : integral, <i>&lt;value2&gt;</i> : integral) => integral</b></code><br/><br/>
-Bitwise Or operator across integral types. Same as | operator
-* ``bitwiseOr(0xf4, 0xef)``
-* ``0xff``
-* ``(0xf4 | 0xef)``
-* ``0xff``
-___
--
-<a name="bitwiseXor" ></a>
-
-### <code>bitwiseXor</code>
-<code><b>bitwiseXor(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
-Bitwise Xor operator across integral types. Same as ^ operator
-* ``bitwiseXor(0xf4, 0xef)``
-* ``0x1b``
-* ``(0xf4 ^ 0xef)``
-* ``0x1b``
-* ``(true ^ false)``
-* ``true``
-* ``(true ^ true)``
-* ``false``
-___
--
-<a name="blake2b" ></a>
-
-### <code>blake2b</code>
-<code><b>blake2b(<i>&lt;value1&gt;</i> : integer, <i>&lt;value2&gt;</i> : any, ...) => string</b></code><br/><br/>
-Calculates the Blake2 digest of a set of columns of varying primitive datatypes given a bit length, which can only be a multiple of 8 between 8 and 512. It can be used to calculate a fingerprint for a row.
-* ``blake2b(256, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4'))``
-* ``'c9521a5080d8da30dffb430c50ce253c345cc4c4effc315dab2162dac974711d'``
-___
--
-<a name="blake2bBinary" ></a>
-
-### <code>blake2bBinary</code>
-<code><b>blake2bBinary(<i>&lt;value1&gt;</i> : integer, <i>&lt;value2&gt;</i> : any, ...) => binary</b></code><br/><br/>
-Calculates the Blake2 digest of a set of columns of varying primitive datatypes given a bit length, which can only be a multiple of 8 between 8 and 512. It can be used to calculate a fingerprint for a row.
-* ``blake2bBinary(256, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4'))``
-* ``unHex('c9521a5080d8da30dffb430c50ce253c345cc4c4effc315dab2162dac974711d')``
-___
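Python's standard library exposes BLAKE2b directly, so the bit-length parameter described above maps to `digest_size` in bytes. The row serialization below (`repr()` of each value) is an assumption for illustration only; ADF's internal column encoding differs, so the hex output will not match the example digest above:

```python
import hashlib

def row_fingerprint(bit_length, *values):
    """BLAKE2b digest of a row's values at the given bit length.

    bit_length must be a multiple of 8 between 8 and 512, matching the
    constraint described above. The repr()-based serialization is a
    stand-in, not ADF's actual column encoding.
    """
    if bit_length % 8 != 0 or not 8 <= bit_length <= 512:
        raise ValueError("bit length must be a multiple of 8 in [8, 512]")
    h = hashlib.blake2b(digest_size=bit_length // 8)
    for v in values:
        h.update(repr(v).encode("utf-8"))
    return h.hexdigest()

fp = row_fingerprint(256, "gunchus", 8.2, "bojjus", True)
print(len(fp))  # 64 hex characters for a 256-bit digest
```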
--
-<a name="byItem" ></a>
-
-### <code>byItem</code>
-<code><b>byItem(<i>&lt;parent column&gt;</i> : any, <i>&lt;column name&gt;</i> : string) => any</b></code><br/><br/>
-Finds a sub item within a structure or array of structures. If there are multiple matches, the first match is returned. If there's no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion actions (? date, ? string ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported but you can use parameter substitutions.
-* ``byItem( byName('customer'), 'orderItems') ? (itemName as string, itemQty as integer)``
-* ``byItem( byItem( byName('customer'), 'orderItems'), 'itemName') ? string``
-___
--
-<a name="byName" ></a>
-
-### <code>byName</code>
-<code><b>byName(<i>&lt;column name&gt;</i> : string, [<i>&lt;stream name&gt;</i> : string]) => any</b></code><br/><br/>
-Selects a column value by name in the stream. You can pass an optional stream name as the second argument. If there are multiple matches, the first match is returned. If there's no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING ...). Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions.
-* ``toString(byName('parent'))``
-* ``toLong(byName('income'))``
-* ``toBoolean(byName('foster'))``
-* ``toLong(byName($debtCol))``
-* ``toString(byName('Bogus Column'))``
-* ``toString(byName('Bogus Column', 'DeriveStream'))``
-___
--
-<a name="byNames" ></a>
-
-### <code>byNames</code>
-<code><b>byNames(<i>&lt;column names&gt;</i> : array, [<i>&lt;stream name&gt;</i> : string]) => any</b></code><br/><br/>
-Selects an array of columns by name in the stream. You can pass an optional stream name as the second argument. If there are multiple matches, the first match is returned. If there are no matches for a column, the entire output is a NULL value. The returned value requires a type conversion function (toDate, toString, ...). Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions.
-* ``toString(byNames(['parent', 'child']))``
-* ``byNames(['parent']) ? string``
-* ``toLong(byNames(['income']))``
-* ``byNames(['income']) ? long``
-* ``toBoolean(byNames(['foster']))``
-* ``toLong(byNames($debtCols))``
-* ``toString(byNames(['a Column']))``
-* ``toString(byNames(['a Column'], 'DeriveStream'))``
-* ``byNames(['orderItem']) ? (itemName as string, itemQty as integer)``
-___
--
-<a name="byOrigin" ></a>
-
-### <code>byOrigin</code>
-<code><b>byOrigin(<i>&lt;column name&gt;</i> : string, [<i>&lt;origin stream name&gt;</i> : string]) => any</b></code><br/><br/>
-Selects a column value by name in the origin stream. The second argument is the origin stream name. If there are multiple matches, the first match is returned. If there's no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING ...). Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions.
-* ``toString(byOrigin('ancestor', 'ancestorStream'))``
-___
--
-<a name="byOrigins" ></a>
-
-### <code>byOrigins</code>
-<code><b>byOrigins(<i>&lt;column names&gt;</i> : array, [<i>&lt;origin stream name&gt;</i> : string]) => any</b></code><br/><br/>
-Selects an array of columns by name in the stream. The second argument is the stream where it originated from. If there are multiple matches, the first match is returned. If there's no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING ...). Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions.
-* ``toString(byOrigins(['ancestor1', 'ancestor2'], 'ancestorStream'))``
-___
--
-<a name="byPath" ></a>
-
-### <code>byPath</code>
-<code><b>byPath(<i>&lt;value1&gt;</i> : string, [<i>&lt;streamName&gt;</i> : string]) => any</b></code><br/><br/>
-Finds a hierarchical path by name in the stream. You can pass an optional stream name as the second argument. If no such path is found it returns null. Column names/paths known at design time should be addressed just by their name or dot notation path. Computed inputs are not supported but you can use parameter substitutions.
-* ``byPath('grandpa.parent.child') => column``
-___
--
-<a name="byPosition" ></a>
-
-### <code>byPosition</code>
-<code><b>byPosition(<i>&lt;position&gt;</i> : integer) => any</b></code><br/><br/>
-Selects a column value by its relative position (1-based) in the stream. If the position is out of bounds, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING ...). Computed inputs are not supported but you can use parameter substitutions.
-* ``toString(byPosition(1))``
-* ``toDecimal(byPosition(2), 10, 2)``
-* ``toBoolean(byPosition(4))``
-* ``toString(byName($colName))``
-* ``toString(byPosition(1234))``
-___
--
-<a name="case" ></a>
-
-### <code>case</code>
-<code><b>case(<i>&lt;condition&gt;</i> : boolean, <i>&lt;true_expression&gt;</i> : any, <i>&lt;false_expression&gt;</i> : any, ...) => any</b></code><br/><br/>
-Based on alternating conditions applies one value or the other. If the number of inputs is even, the other is defaulted to NULL for the last condition.
-* ``case(10 + 20 == 30, 'dumbo', 'gumbo') -> 'dumbo'``
-* ``case(10 + 20 == 25, 'bojjus', 'do' < 'go', 'gunchus') -> 'gunchus'``
-* ``isNull(case(10 + 20 == 25, 'bojjus', 'do' > 'go', 'gunchus')) -> true``
-* ``case(10 + 20 == 25, 'bojjus', 'do' > 'go', 'gunchus', 'dumbo') -> 'dumbo'``
-___
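The alternating condition/value convention of ``case`` above can be sketched in Python. Note that unlike the data flow engine, plain Python evaluates every argument eagerly, so this only illustrates the selection logic:

```python
def case(*args):
    """Arguments alternate condition, value; a trailing odd argument is
    the default. With an even argument count, the default is None (NULL)."""
    pairs, default = args, None
    if len(args) % 2 == 1:
        pairs, default = args[:-1], args[-1]
    # walk the (condition, value) pairs in order, first true wins
    for cond, value in zip(pairs[0::2], pairs[1::2]):
        if cond:
            return value
    return default

print(case(10 + 20 == 30, "dumbo", "gumbo"))                           # dumbo
print(case(10 + 20 == 25, "bojjus", "do" < "go", "gunchus"))           # gunchus
print(case(10 + 20 == 25, "bojjus", "do" > "go", "gunchus"))           # None
print(case(10 + 20 == 25, "bojjus", "do" > "go", "gunchus", "dumbo"))  # dumbo
```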
--
-<a name="cbrt" ></a>
-
-### <code>cbrt</code>
-<code><b>cbrt(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates the cube root of a number.
-* ``cbrt(8) -> 2.0``
-___
--
-<a name="ceil" ></a>
-
-### <code>ceil</code>
-<code><b>ceil(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
-Returns the smallest integer not smaller than the number.
-* ``ceil(-0.1) -> 0``
-___
--
-<a name="coalesce" ></a>
-
-### <code>coalesce</code>
-<code><b>coalesce(<i>&lt;value1&gt;</i> : any, ...) => any</b></code><br/><br/>
-Returns the first non-null value from a set of inputs. All inputs should be of the same type.
-* ``coalesce(10, 20) -> 10``
-* ``coalesce(toString(null), toString(null), 'dumbo', 'bo', 'go') -> 'dumbo'``
-___
--
-<a name="collect" ></a>
-
-### <code>collect</code>
-<code><b>collect(<i>&lt;value1&gt;</i> : any) => array</b></code><br/><br/>
-Collects all values of the expression in the aggregated group into an array. Structures can be collected and transformed to alternate structures during this process. The number of items will be equal to the number of rows in that group and can contain null values. The number of collected items should be small.
-* ``collect(salesPerson)``
-* ``collect(firstName + lastName)``
-* ``collect(@(name = salesPerson, sales = salesAmount) )``
-___
--
-<a name="columnNames" ></a>
-
-### <code>columnNames</code>
-<code><b>columnNames([<i>&lt;stream name&gt;</i> : string]) => array</b></code><br/><br/>
-Gets the names of all output columns for a stream. You can pass an optional stream name.
-* ``columnNames()``
-* ``columnNames('DeriveStream')``
-___
--
-<a name="columns" ></a>
-
-### <code>columns</code>
-<code><b>columns([<i>&lt;stream name&gt;</i> : string]) => any</b></code><br/><br/>
-Gets the values of all output columns for a stream. You can pass an optional stream name.
-* ``columns()``
-* ``columns('DeriveStream')``
-___
--
-<a name="compare" ></a>
-
-### <code>compare</code>
-<code><b>compare(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => integer</b></code><br/><br/>
-Compares two values of the same type. Returns negative integer if value1 < value2, 0 if value1 == value2, positive value if value1 > value2.
-* ``(compare(12, 24) < 1) -> true``
-* ``(compare('dumbo', 'dum') > 0) -> true``
-___
--
-<a name="concat" ></a>
-
-### <code>concat</code>
-<code><b>concat(<i>&lt;this&gt;</i> : string, <i>&lt;that&gt;</i> : string, ...) => string</b></code><br/><br/>
-Concatenates a variable number of strings together. Same as the + operator with strings.
-* ``concat('dataflow', 'is', 'awesome') -> 'dataflowisawesome'``
-* ``'dataflow' + 'is' + 'awesome' -> 'dataflowisawesome'``
-* ``isNull('sql' + null) -> true``
-___
--
-<a name="concatWS" ></a>
-
-### <code>concatWS</code>
-<code><b>concatWS(<i>&lt;separator&gt;</i> : string, <i>&lt;this&gt;</i> : string, <i>&lt;that&gt;</i> : string, ...) => string</b></code><br/><br/>
-Concatenates a variable number of strings together with a separator. The first parameter is the separator.
-* ``concatWS(' ', 'dataflow', 'is', 'awesome') -> 'dataflow is awesome'``
-* ``isNull(concatWS(null, 'dataflow', 'is', 'awesome')) -> true``
-* ``concatWS(' is ', 'dataflow', 'awesome') -> 'dataflow is awesome'``
-___
--
-<a name="contains" ></a>
-
-### <code>contains</code>
-<code><b>contains(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => boolean</b></code><br/><br/>
-Returns true if any element in the provided array evaluates as true in the provided predicate. Contains expects a reference to one element in the predicate function as #item.
-* ``contains([1, 2, 3, 4], #item == 3) -> true``
-* ``contains([1, 2, 3, 4], #item > 5) -> false``
-___
--
-<a name="cos" ></a>
-
-### <code>cos</code>
-<code><b>cos(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates a cosine value.
-* ``cos(10) -> -0.8390715290764524``
-___
--
-<a name="cosh" ></a>
-
-### <code>cosh</code>
-<code><b>cosh(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates a hyperbolic cosine of a value.
-* ``cosh(0) -> 1.0``
-___
--
-<a name="count" ></a>
-
-### <code>count</code>
-<code><b>count([<i>&lt;value1&gt;</i> : any]) => long</b></code><br/><br/>
-Gets the aggregate count of values. If the optional column(s) are specified, NULL values are ignored in the count.
-* ``count(custId)``
-* ``count(custId, custName)``
-* ``count()``
-* ``count(iif(isNull(custId), 1, NULL))``
-___
-
-<a name="countAll" ></a>
-
-### <code>countAll</code>
-<code><b>countAll([<i>&lt;value1&gt;</i> : any]) => long</b></code><br/><br/>
-Gets the aggregate count of values including nulls.
-* ``countAll(custId)``
-* ``countAll()``
-
-___
--
-<a name="countDistinct" ></a>
-
-### <code>countDistinct</code>
-<code><b>countDistinct(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : any], ...) => long</b></code><br/><br/>
-Gets the aggregate count of distinct values of a set of columns.
-* ``countDistinct(custId, custName)``
-___
--
-<a name="countAllDistinct" ></a>
-
-### <code>countAllDistinct</code>
-<code><b>countAllDistinct(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : any], ...) => long</b></code><br/><br/>
-Gets the aggregate count of distinct values of a set of columns including nulls.
-* ``countAllDistinct(custId, custName)``
-___
--
-<a name="countIf" ></a>
-
-### <code>countIf</code>
-<code><b>countIf(<i>&lt;value1&gt;</i> : boolean, [<i>&lt;value2&gt;</i> : any]) => long</b></code><br/><br/>
-Based on a criterion, gets the aggregate count of values. If the optional column is specified, NULL values are ignored in the count.
-* ``countIf(state == 'CA' && commission < 10000, name)``
-___
--
-<a name="covariancePopulation" ></a>
-
-### <code>covariancePopulation</code>
-<code><b>covariancePopulation(<i>&lt;value1&gt;</i> : number, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Gets the population covariance between two columns.
-* ``covariancePopulation(sales, profit)``
-___
--
-<a name="covariancePopulationIf" ></a>
-
-### <code>covariancePopulationIf</code>
-<code><b>covariancePopulationIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number, <i>&lt;value3&gt;</i> : number) => double</b></code><br/><br/>
-Based on a criterion, gets the population covariance of two columns.
-* ``covariancePopulationIf(region == 'West', sales, profit)``
-___
--
-<a name="covarianceSample" ></a>
-
-### <code>covarianceSample</code>
-<code><b>covarianceSample(<i>&lt;value1&gt;</i> : number, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Gets the sample covariance of two columns.
-* ``covarianceSample(sales, profit)``
-___
--
-<a name="covarianceSampleIf" ></a>
-
-### <code>covarianceSampleIf</code>
-<code><b>covarianceSampleIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number, <i>&lt;value3&gt;</i> : number) => double</b></code><br/><br/>
-Based on a criterion, gets the sample covariance of two columns.
-* ``covarianceSampleIf(region == 'West', sales, profit)``
-___
---
-<a name="crc32" ></a>
-
-### <code>crc32</code>
-<code><b>crc32(<i>&lt;value1&gt;</i> : any, ...) => long</b></code><br/><br/>
-Calculates the CRC32 hash of a set of columns of varying primitive datatypes, given a bit length that can only be of values 0(256), 224, 256, 384, 512. It can be used to calculate a fingerprint for a row.
-* ``crc32(256, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4')) -> 3630253689L``
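As an illustration of the row-fingerprint idea, a CRC32 over stringified column values can be sketched in Python (the `'|'.join` serialization is an assumption for the sketch; Data Flow's internal encoding is not specified here):

```python
import zlib

def crc32_fingerprint(*columns):
    # Stringify each column value and hash the joined bytes; identical
    # rows always produce identical fingerprints.
    payload = '|'.join(str(c) for c in columns).encode('utf-8')
    return zlib.crc32(payload)

row = ('gunchus', 8.2, 'bojjus', True)
print(crc32_fingerprint(*row) == crc32_fingerprint(*row))  # True
```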
-___
--
-<a name="cumeDist" ></a>
-
-### <code>cumeDist</code>
-<code><b>cumeDist() => integer</b></code><br/><br/>
-The CumeDist function computes the position of a value relative to all values in the partition. The result is the number of rows preceding or equal to the current row in the ordering of the partition divided by the total number of rows in the window partition. Any tie values in the ordering will evaluate to the same position.
-* ``cumeDist()``
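The position calculation described above can be sketched in Python over a single window partition (illustrative only; names are not part of the Data Flow language):

```python
def cume_dist(partition):
    # For each row: rows ordered at or before it / total rows in the
    # partition. Tie values share the same position.
    n = len(partition)
    return [sum(1 for w in partition if w <= v) / n for v in partition]

print(cume_dist([10, 20, 20, 30]))  # [0.25, 0.75, 0.75, 1.0]
```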
-___
--
-<a name="currentDate" ></a>
-
-### <code>currentDate</code>
-<code><b>currentDate([<i>&lt;value1&gt;</i> : string]) => date</b></code><br/><br/>
-Gets the current date when this job starts to run. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats: [https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html).
-* ``currentDate() == toDate('2250-12-31') -> false``
-* ``currentDate('PST') == toDate('2250-12-31') -> false``
-* ``currentDate('America/New_York') == toDate('2250-12-31') -> false``
-___
--
-<a name="currentTimestamp" ></a>
-
-### <code>currentTimestamp</code>
-<code><b>currentTimestamp() => timestamp</b></code><br/><br/>
-Gets the current timestamp when the job starts to run with local time zone.
-* ``currentTimestamp() == toTimestamp('2250-12-31 12:12:12') -> false``
-___
--
-<a name="currentUTC" ></a>
-
-### <code>currentUTC</code>
-<code><b>currentUTC([<i>&lt;value1&gt;</i> : string]) => timestamp</b></code><br/><br/>
-Gets the current timestamp as UTC. If you want your current time to be interpreted in a different timezone than your cluster time zone, you can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It defaults to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats: [https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). To convert the UTC time to a different timezone use `fromUTC()`.
-* ``currentUTC() == toTimestamp('2050-12-12 19:18:12') -> false``
-* ``currentUTC() != toTimestamp('2050-12-12 19:18:12') -> true``
-* ``fromUTC(currentUTC(), 'Asia/Seoul') != toTimestamp('2050-12-12 19:18:12') -> true``
-___
--
-<a name="dayOfMonth" ></a>
-
-### <code>dayOfMonth</code>
-<code><b>dayOfMonth(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
-Gets the day of the month given a date.
-* ``dayOfMonth(toDate('2018-06-08')) -> 8``
-___
--
-<a name="dayOfWeek" ></a>
-
-### <code>dayOfWeek</code>
-<code><b>dayOfWeek(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
-Gets the day of the week given a date. 1 - Sunday, 2 - Monday ..., 7 - Saturday.
-* ``dayOfWeek(toDate('2018-06-08')) -> 6``
-___
--
-<a name="dayOfYear" ></a>
-
-### <code>dayOfYear</code>
-<code><b>dayOfYear(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
-Gets the day of the year given a date.
-* ``dayOfYear(toDate('2016-04-09')) -> 100``
-___
--
-<a name="days" ></a>
-
-### <code>days</code>
-<code><b>days(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
-Duration in milliseconds for number of days.
-* ``days(2) -> 172800000L``
-___
--
-<a name="degrees" ></a>
-
-### <code>degrees</code>
-<code><b>degrees(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Converts radians to degrees.
-* ``degrees(3.141592653589793) -> 180``
-___
--
-<a name="denseRank" ></a>
-
-### <code>denseRank</code>
-<code><b>denseRank() => integer</b></code><br/><br/>
-Computes the rank of a value in a group of values specified in a window's order by clause. The result is one plus the number of rows preceding or equal to the current row in the ordering of the partition. The values will not produce gaps in the sequence. Dense rank works even when data is not sorted and looks for changes in values.
-* ``denseRank()``
-___
--
-<a name="distinct" ></a>
-
-### <code>distinct</code>
-<code><b>distinct(<i>&lt;value1&gt;</i> : array) => array</b></code><br/><br/>
-Returns a distinct set of items from an array.
-* ``distinct([10, 20, 30, 10]) => [10, 20, 30]``
-___
--
-<a name="divide" ></a>
-
-### <code>divide</code>
-<code><b>divide(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
-Divides pair of numbers. Same as the `/` operator.
-* ``divide(20, 10) -> 2``
-* ``20 / 10 -> 2``
-___
--
-<a name="dropLeft" ></a>
-
-### <code>dropLeft</code>
-<code><b>dropLeft(<i>&lt;value1&gt;</i> : string, <i>&lt;value2&gt;</i> : integer) => string</b></code><br/><br/>
-Removes the specified number of characters from the left of the string. If the requested drop exceeds the length of the string, an empty string is returned.
-* ``dropLeft('bojjus', 2) => 'jjus'``
-* ``dropLeft('cake', 10) => ''``
-___
--
-<a name="dropRight" ></a>
-
-### <code>dropRight</code>
-<code><b>dropRight(<i>&lt;value1&gt;</i> : string, <i>&lt;value2&gt;</i> : integer) => string</b></code><br/><br/>
-Removes the specified number of characters from the right of the string. If the requested drop exceeds the length of the string, an empty string is returned.
-* ``dropRight('bojjus', 2) => 'bojj'``
-* ``dropRight('cake', 10) => ''``
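Both drop functions map directly onto string slicing; a minimal Python sketch (function names are illustrative):

```python
def drop_left(s, n):
    # Slicing past the end yields '' just like dropLeft.
    return s[n:]

def drop_right(s, n):
    # Guard the out-of-bounds case explicitly, matching dropRight.
    return s[:len(s) - n] if n < len(s) else ''

print(drop_left('bojjus', 2))   # jjus
print(drop_right('bojjus', 2))  # bojj
```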
-___
--
-<a name="endsWith" ></a>
-
-### <code>endsWith</code>
-<code><b>endsWith(<i>&lt;string&gt;</i> : string, <i>&lt;substring to check&gt;</i> : string) => boolean</b></code><br/><br/>
-Checks if the string ends with the supplied string.
-* ``endsWith('dumbo', 'mbo') -> true``
-___
--
-<a name="equals" ></a>
-
-### <code>equals</code>
-<code><b>equals(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
-Comparison equals operator. Same as == operator.
-* ``equals(12, 24) -> false``
-* ``12 == 24 -> false``
-* ``'bad' == 'bad' -> true``
-* ``isNull('good' == toString(null)) -> true``
-* ``isNull(null == null) -> true``
-___
--
-<a name="equalsIgnoreCase" ></a>
-
-### <code>equalsIgnoreCase</code>
-<code><b>equalsIgnoreCase(<i>&lt;value1&gt;</i> : string, <i>&lt;value2&gt;</i> : string) => boolean</b></code><br/><br/>
-Comparison equals operator ignoring case. Same as <=> operator.
-* ``'abc'<=>'Abc' -> true``
-* ``equalsIgnoreCase('abc', 'Abc') -> true``
-___
--
-<a name="escape" ></a>
-
-### <code>escape</code>
-<code><b>escape(<i>&lt;string_to_escape&gt;</i> : string, <i>&lt;format&gt;</i> : string) => string</b></code><br/><br/>
-Escapes a string according to a format. Literal values for acceptable format are 'json', 'xml', 'ecmascript', 'html', 'java'.
-___
--
-<a name="except" ></a>
-
-### <code>except</code>
-<code><b>except(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : array) => array</b></code><br/><br/>
-Returns the set difference of one array from another, dropping duplicates.
-* ``except([10, 20, 30], [20, 40]) => [10, 30]``
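The difference-with-dedup behavior can be mirrored in Python while preserving the order of the first array (a sketch; `except_` is named to avoid the Python keyword):

```python
def except_(a, b):
    # Keep items of a that are not in b, dropping duplicates and
    # preserving first-seen order.
    excluded, out = set(b), []
    for item in a:
        if item not in excluded:
            excluded.add(item)
            out.append(item)
    return out

print(except_([10, 20, 30], [20, 40]))  # [10, 30]
```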
-___
--
-<a name="expr" ></a>
-
-### <code>expr</code>
-<code><b>expr(<i>&lt;expr&gt;</i> : string) => any</b></code><br/><br/>
-Results in an expression from a string. This is the same as writing this expression in a non-literal form. This can be used to pass parameters as string representations.
-* ``expr('price * discount') => any``
-___
--
-<a name="factorial" ></a>
-
-### <code>factorial</code>
-<code><b>factorial(<i>&lt;value1&gt;</i> : number) => long</b></code><br/><br/>
-Calculates the factorial of a number.
-* ``factorial(5) -> 120``
-___
--
-<a name="false" ></a>
-
-### <code>false</code>
-<code><b>false() => boolean</b></code><br/><br/>
-Always returns a false value. Use the function syntax (`false()`) if there is a column named 'false'.
-* ``(10 + 20 > 30) -> false``
-* ``(10 + 20 > 30) -> false()``
-___
--
-<a name="filter" ></a>
-
-### <code>filter</code>
-<code><b>filter(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => array</b></code><br/><br/>
-Filters elements out of the array that do not meet the provided predicate. Filter expects a reference to one element in the predicate function as #item.
-* ``filter([1, 2, 3, 4], #item > 2) -> [3, 4]``
-* ``filter(['a', 'b', 'c', 'd'], #item == 'a' || #item == 'b') -> ['a', 'b']``
-___
--
-<a name="find" ></a>
-
-### <code>find</code>
-<code><b>find(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => any</b></code><br/><br/>
-Finds the first item in an array that matches the condition. It takes a filter function where you can address the item in the array as #item. For deeply nested maps you can refer to the parent maps using the #item_n(#item_1, #item_2...) notation.
-* ``find([10, 20, 30], #item > 10) -> 20``
-* ``find(['azure', 'data', 'factory'], length(#item) > 4) -> 'azure'``
-* ``find([
- @(
- name = 'Daniel',
- types = [
- @(mood = 'jovial', behavior = 'terrific'),
- @(mood = 'grumpy', behavior = 'bad')
- ]
- ),
- @(
- name = 'Mark',
- types = [
- @(mood = 'happy', behavior = 'awesome'),
- @(mood = 'calm', behavior = 'reclusive')
- ]
- )
- ],
    contains(#item.types, #item.mood=='happy') /*Find the happy kid*/
- )``
-* ``
- @(
- name = 'Mark',
- types = [
- @(mood = 'happy', behavior = 'awesome'),
- @(mood = 'calm', behavior = 'reclusive')
- ]
- )
- ``
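The predicate-based lookup can be sketched in Python, with the Data Flow #item becoming a lambda argument (illustrative only):

```python
def find(arr, predicate):
    # Return the first matching item, or None when nothing matches.
    return next((item for item in arr if predicate(item)), None)

print(find([10, 20, 30], lambda item: item > 10))                # 20
print(find(['azure', 'data', 'factory'], lambda i: len(i) > 4))  # azure
```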
-___
--
-<a name="first" ></a>
-
-### <code>first</code>
-<code><b>first(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : boolean]) => any</b></code><br/><br/>
-Gets the first value of a column group. If the second parameter ignoreNulls is omitted, it is assumed false.
-* ``first(sales)``
-* ``first(sales, false)``
-___
---
-<a name="flatten" ></a>
-
-### <code>flatten</code>
-<code><b>flatten(<i>&lt;array&gt;</i> : array, <i>&lt;value2&gt;</i> : array ..., <i>&lt;value2&gt;</i> : boolean) => array</b></code><br/><br/>
-Flattens an array or arrays into a single array. Arrays of atomic items are returned unaltered. The last argument is optional and defaults to false; set it to true to flatten recursively more than one level deep.
-* ``flatten([['bojjus', 'girl'], ['gunchus', 'boy']]) => ['bojjus', 'girl', 'gunchus', 'boy']``
-* ``flatten([[['bojjus', 'gunchus']]] , true) => ['bojjus', 'gunchus']``
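The one-level-versus-recursive behavior can be sketched in Python (illustrative; not the Data Flow implementation):

```python
def flatten(arr, deep=False):
    # One level by default; recurse into nested lists when deep=True.
    out = []
    for item in arr:
        if isinstance(item, list):
            out.extend(flatten(item, deep) if deep else item)
        else:
            out.append(item)
    return out

print(flatten([['bojjus', 'girl'], ['gunchus', 'boy']]))
# ['bojjus', 'girl', 'gunchus', 'boy']
```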
-___
--
-<a name="floor" ></a>
-
-### <code>floor</code>
-<code><b>floor(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
-Returns the largest integer not greater than the number.
-* ``floor(-0.1) -> -1``
-___
--
-<a name="fromBase64" ></a>
-
-### <code>fromBase64</code>
-<code><b>fromBase64(<i>&lt;value1&gt;</i> : string, <i>&lt;encoding type&gt;</i> : string) => string</b></code><br/><br/>
-Decodes the given base64-encoded string. You can optionally pass the encoding type.
-* ``fromBase64('Z3VuY2h1cw==') -> 'gunchus'``
-* ``fromBase64('SGVsbG8gV29ybGQ=', 'Windows-1252') -> 'Hello World'``
-___
--
-<a name="fromUTC" ></a>
-
-### <code>fromUTC</code>
-<code><b>fromUTC(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => timestamp</b></code><br/><br/>
-Converts a timestamp from UTC to the given timezone. You can optionally pass the timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It defaults to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
-* ``fromUTC(currentTimestamp()) == toTimestamp('2050-12-12 19:18:12') -> false``
-* ``fromUTC(currentTimestamp(), 'Asia/Seoul') != toTimestamp('2050-12-12 19:18:12') -> true``
-___
--
-<a name="greater" ></a>
-
-### <code>greater</code>
-<code><b>greater(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
-Comparison greater operator. Same as > operator.
-* ``greater(12, 24) -> false``
-* ``('dumbo' > 'dum') -> true``
-* ``(toTimestamp('2019-02-05 08:21:34.890', 'yyyy-MM-dd HH:mm:ss.SSS') > toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS')) -> true``
-___
--
-<a name="greaterOrEqual" ></a>
-
-### <code>greaterOrEqual</code>
-<code><b>greaterOrEqual(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
-Comparison greater than or equal operator. Same as >= operator.
-* ``greaterOrEqual(12, 12) -> true``
-* ``('dumbo' >= 'dum') -> true``
-___
--
-<a name="greatest" ></a>
-
-### <code>greatest</code>
-<code><b>greatest(<i>&lt;value1&gt;</i> : any, ...) => any</b></code><br/><br/>
-Returns the greatest value among the list of input values, skipping null values. Returns null if all inputs are null.
-* ``greatest(10, 30, 15, 20) -> 30``
-* ``greatest(10, toInteger(null), 20) -> 20``
-* ``greatest(toDate('2010-12-12'), toDate('2011-12-12'), toDate('2000-12-12')) -> toDate('2011-12-12')``
-* ``greatest(toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS'), toTimestamp('2019-02-05 08:21:34.890', 'yyyy-MM-dd HH:mm:ss.SSS')) -> toTimestamp('2019-02-05 08:21:34.890', 'yyyy-MM-dd HH:mm:ss.SSS')``
-___
--
-<a name="hasColumn" ></a>
-
-### <code>hasColumn</code>
-<code><b>hasColumn(<i>&lt;column name&gt;</i> : string, [<i>&lt;stream name&gt;</i> : string]) => boolean</b></code><br/><br/>
-Checks for a column value by name in the stream. You can pass an optional stream name as the second argument. Column names known at design time should be addressed just by their name. Computed inputs are not supported but you can use parameter substitutions.
-* ``hasColumn('parent')``
-___
--
-<a name="hasError" ></a>
-
-### <code>hasError</code>
-<code><b>hasError([<i>&lt;value1&gt;</i> : string]) => boolean</b></code><br/><br/>
-Checks if the assert with provided ID is marked as error.
-
-Examples
-* ``hasError('assert1')``
-* ``hasError('assert2')``
-
-___
-
-<a name="hasPath" ></a>
-
-### <code>hasPath</code>
-<code><b>hasPath(<i>&lt;value1&gt;</i> : string, [<i>&lt;streamName&gt;</i> : string]) => boolean</b></code><br/><br/>
-Checks if a certain hierarchical path exists by name in the stream. You can pass an optional stream name as the second argument. Column names/paths known at design time should be addressed just by their name or dot notation path. Computed inputs are not supported but you can use parameter substitutions.
-* ``hasPath('grandpa.parent.child') => boolean``
-___
--
-<a name="hex" ></a>
-
-### <code>hex</code>
-<code><b>hex(<i>\<value1\></i>: binary) => string</b></code><br/><br/>
-Returns a hex string representation of a binary value.
-* ``hex(toBinary([toByte(0x1f), toByte(0xad), toByte(0xbe)])) -> '1fadbe'``
-___
--
-<a name="hour" ></a>
-
-### <code>hour</code>
-<code><b>hour(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => integer</b></code><br/><br/>
-Gets the hour value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
-* ``hour(toTimestamp('2009-07-30 12:58:59')) -> 12``
-* ``hour(toTimestamp('2009-07-30 12:58:59'), 'PST') -> 12``
-___
--
-<a name="hours" ></a>
-
-### <code>hours</code>
-<code><b>hours(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
-Duration in milliseconds for number of hours.
-* ``hours(2) -> 7200000L``
-___
--
-<a name="iif" ></a>
-
-### <code>iif</code>
-<code><b>iif(<i>&lt;condition&gt;</i> : boolean, <i>&lt;true_expression&gt;</i> : any, [<i>&lt;false_expression&gt;</i> : any]) => any</b></code><br/><br/>
-Based on a condition, applies one value or the other. If the other is unspecified, it is considered NULL. Both values must be compatible (numeric, string, ...).
-* ``iif(10 + 20 == 30, 'dumbo', 'gumbo') -> 'dumbo'``
-* ``iif(10 > 30, 'dumbo', 'gumbo') -> 'gumbo'``
-* ``iif(month(toDate('2018-12-01')) == 12, 345.12, 102.67) -> 345.12``
-___
--
-<a name="iifNull" ></a>
-
-### <code>iifNull</code>
-<code><b>iifNull(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : any], ...) => any</b></code><br/><br/>
-Checks if the first parameter is null. If not null, the first parameter is returned. If null, the second parameter is returned. If three parameters are specified, the behavior is the same as iif(isNull(value1), value2, value3) and the third parameter is returned if the first value is not null.
-* ``iifNull(10, 20) -> 10``
-* ``iifNull(null, 20, 40) -> 20``
-* ``iifNull('azure', 'data', 'factory') -> 'factory'``
-* ``iifNull(null, 'data', 'factory') -> 'data'``
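The two-argument versus three-argument behavior can be mirrored in Python with a sentinel to detect the missing third argument (a sketch; names are illustrative):

```python
_UNSET = object()

def iif_null(value1, value2, value3=_UNSET):
    # Two args: value1 unless it is null, else value2.
    # Three args: same as iif(isNull(value1), value2, value3).
    if value3 is _UNSET:
        return value2 if value1 is None else value1
    return value2 if value1 is None else value3

print(iif_null(10, 20))                      # 10
print(iif_null('azure', 'data', 'factory'))  # factory
```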
-___
--
-<a name="in" ></a>
-
-### <code>in</code>
-<code><b>in(<i>&lt;array of items&gt;</i> : array, <i>&lt;item to find&gt;</i> : any) => boolean</b></code><br/><br/>
-Checks if an item is in the array.
-* ``in([10, 20, 30], 10) -> true``
-* ``in(['good', 'kid'], 'bad') -> false``
-___
--
-<a name="initCap" ></a>
-
-### <code>initCap</code>
-<code><b>initCap(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
-Converts the first letter of every word to uppercase. Words are identified as separated by whitespace.
-* ``initCap('cool iceCREAM') -> 'Cool Icecream'``
-___
--
-<a name="instr" ></a>
-
-### <code>instr</code>
-<code><b>instr(<i>&lt;string&gt;</i> : string, <i>&lt;substring to find&gt;</i> : string) => integer</b></code><br/><br/>
-Finds the position (1-based) of the substring within a string. 0 is returned if not found.
-* ``instr('dumbo', 'mbo') -> 3``
-* ``instr('microsoft', 'o') -> 5``
-* ``instr('good', 'bad') -> 0``
-___
--
-<a name="intersect" ></a>
-
-### <code>intersect</code>
-<code><b>intersect(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : array) => array</b></code><br/><br/>
-Returns an intersection set of distinct items from 2 arrays.
-* ``intersect([10, 20, 30], [20, 40]) => [20]``
-___
--
-<a name="isBitSet" ></a>
-
-### <code>isBitSet</code>
-<code><b>isBitSet (<i><i>\<value1\></i></i> : array, <i>\<value2\></i>:integer ) => boolean</b></code><br/><br/>
-Checks if a bit position is set in this bitset.
-* ``isBitSet(toBitSet([10, 32, 98]), 10) => true``
-___
--
-<a name="isBoolean" ></a>
-
-### <code>isBoolean</code>
-<code><b>isBoolean(<i>\<value1\></i>: string) => boolean</b></code><br/><br/>
-Checks if the string value is a boolean value according to the rules of ``toBoolean()``.
-* ``isBoolean('true') -> true``
-* ``isBoolean('no') -> true``
-* ``isBoolean('microsoft') -> false``
-___
--
-<a name="isByte" ></a>
-
-### <code>isByte</code>
-<code><b>isByte(<i>\<value1\></i> : string) => boolean</b></code><br/><br/>
-Checks if the string value is a byte value given an optional format according to the rules of ``toByte()``.
-* ``isByte('123') -> true``
-* ``isByte('chocolate') -> false``
-___
--
-<a name="isDate" ></a>
-
-### <code>isDate</code>
-<code><b>isDate (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
-Checks if the input date string is a date using an optional input date format. Refer to Java's SimpleDateFormat for available formats. If the input date format is omitted, the default format is ``yyyy-[M]M-[d]d``. Accepted formats are ``[ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ]``
-* ``isDate('2012-8-18') -> true``
-* ``isDate('12/18--234234', 'MM/dd/yyyy') -> false``
-___
--
-<a name="isDecimal" ></a>
-
-### <code>isDecimal</code>
-<code><b>isDecimal (<i>\<value1\></i> : string) => boolean</b></code><br/><br/>
-Checks if the string value is a decimal value given an optional format according to the rules of ``toDecimal()``.
-* ``isDecimal('123.45') -> true``
-* ``isDecimal('12/12/2000') -> false``
-___
--
-<a name="isDelete" ></a>
-
-### <code>isDelete</code>
-<code><b>isDelete([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
-Checks if the row is marked for delete. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
-* ``isDelete()``
-* ``isDelete(1)``
-___
--
-<a name="isDistinct" ></a>
-
-### <code>isDistinct</code>
-<code><b>isDistinct(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
-Finds if a column or set of columns is distinct. It does not count null as a distinct value.
-* ``isDistinct(custId, custName) => boolean``
-___
---
-<a name="isDouble" ></a>
-
-### <code>isDouble</code>
-<code><b>isDouble (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
-Checks if the string value is a double value given an optional format according to the rules of ``toDouble()``.
-* ``isDouble('123') -> true``
-* ``isDouble('$123.45', '$###.00') -> true``
-* ``isDouble('icecream') -> false``
-___
-
-<a name="isError" ></a>
-
-### <code>isError</code>
-<code><b>isError([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
-Checks if the row is marked as error. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
-* ``isError()``
-* ``isError(1)``
-___
-
-<a name="isFloat" ></a>
-
-### <code>isFloat</code>
-<code><b>isFloat (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
-Checks if the string value is a float value given an optional format according to the rules of ``toFloat()``.
-* ``isFloat('123') -> true``
-* ``isFloat('$123.45', '$###.00') -> true``
-* ``isFloat('icecream') -> false``
-___
--
-<a name="isIgnore" ></a>
-
-### <code>isIgnore</code>
-<code><b>isIgnore([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
-Checks if the row is marked to be ignored. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
-* ``isIgnore()``
-* ``isIgnore(1)``
-___
--
-<a name="isInsert" ></a>
-
-### <code>isInsert</code>
-<code><b>isInsert([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
-Checks if the row is marked for insert. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
-* ``isInsert()``
-* ``isInsert(1)``
-___
--
-<a name="isInteger" ></a>
-
-### <code>isInteger</code>
-<code><b>isInteger (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
-Checks if the string value is an integer value given an optional format according to the rules of ``toInteger()``.
-* ``isInteger('123') -> true``
-* ``isInteger('$123', '$###') -> true``
-* ``isInteger('microsoft') -> false``
-___
--
-<a name="isLong" ></a>
-
-### <code>isLong</code>
-<code><b>isLong (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
-Checks if the string value is a long value given an optional format according to the rules of ``toLong()``.
-* ``isLong('123') -> true``
-* ``isLong('$123', '$###') -> true``
-* ``isLong('gunchus') -> false``
-___
--
-<a name="isMatch" ></a>
-
-### <code>isMatch</code>
-<code><b>isMatch([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
-Checks if the row is matched at lookup. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
-* ``isMatch()``
-* ``isMatch(1)``
-___
--
-<a name="isNan" ></a>
-
-### <code>isNan</code>
-<code><b>isNan (<i>\<value1\></i> : integral) => boolean</b></code><br/><br/>
-Checks if the value is not a number (NaN).
-* ``isNan(10.2) => false``
-___
--
-<a name="isNull" ></a>
-
-### <code>isNull</code>
-<code><b>isNull(<i>&lt;value1&gt;</i> : any) => boolean</b></code><br/><br/>
-Checks if the value is NULL.
-* ``isNull(NULL()) -> true``
-* ``isNull('') -> false``
-___
--
-<a name="isShort" ></a>
-
-### <code>isShort</code>
-<code><b>isShort (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
-Checks if the string value is a short value given an optional format according to the rules of ``toShort()``.
-* ``isShort('123') -> true``
-* ``isShort('$123', '$###') -> true``
-* ``isShort('microsoft') -> false``
-___
--
-<a name="isTimestamp" ></a>
-
-### <code>isTimestamp</code>
-<code><b>isTimestamp (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
-Checks if the input date string is a timestamp using an optional input timestamp format. Refer to Java's SimpleDateFormat for available formats. If the format is omitted, the default pattern ``yyyy-[M]M-[d]d hh:mm:ss[.f...]`` is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamp supports up to millisecond accuracy with a value of 999.
-* ``isTimestamp('2016-12-31 00:12:00') -> true``
-* ``isTimestamp('2016-12-31T00:12:00', 'yyyy-MM-dd\\'T\\'HH:mm:ss', 'PST') -> true``
-* ``isTimestamp('2012-8222.18') -> false``
-___
--
-<a name="isUpdate" ></a>
-
-### <code>isUpdate</code>
-<code><b>isUpdate([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
-Checks if the row is marked for update. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
-* ``isUpdate()``
-* ``isUpdate(1)``
-___
--
-<a name="isUpsert" ></a>
-
-### <code>isUpsert</code>
-<code><b>isUpsert([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
-Checks if the row is marked for upsert. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
-* ``isUpsert()``
-* ``isUpsert(1)``
-___
--
-<a name="jaroWinkler" ></a>
-
-### <code>jaroWinkler</code>
-<code><b>jaroWinkler(<i>&lt;value1&gt;</i> : string, <i>&lt;value2&gt;</i> : string) => double</b></code><br/><br/>
-Gets the Jaro-Winkler similarity between two strings, where 1.0 indicates an exact match.
-* ``jaroWinkler('frog', 'frog') => 1.0``
-___
--
-<a name="keyValues" ></a>
-
-### <code>keyValues</code>
-<code><b>keyValues(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : array) => map</b></code><br/><br/>
-Creates a map of key/values. The first parameter is an array of keys and second is the array of values. Both arrays should have equal length.
-* ``keyValues(['bojjus', 'appa'], ['gunchus', 'ammi']) => ['bojjus' -> 'gunchus', 'appa' -> 'ammi']``
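Conceptually, this is the same as building a dictionary from two parallel arrays. A minimal Python sketch (not the data flow engine itself) of that pairing:

```python
def key_values(keys, values):
    """Build a map from parallel key and value arrays.

    Mirrors the keyValues() rule above: both arrays must
    have equal length.
    """
    if len(keys) != len(values):
        raise ValueError("keys and values must have equal length")
    return dict(zip(keys, values))

print(key_values(['bojjus', 'appa'], ['gunchus', 'ammi']))
# {'bojjus': 'gunchus', 'appa': 'ammi'}
```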
-___
--
-<a name="kurtosis" ></a>
-
-### <code>kurtosis</code>
-<code><b>kurtosis(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Gets the kurtosis of a column.
-* ``kurtosis(sales)``
-___
--
-<a name="kurtosisIf" ></a>
-
-### <code>kurtosisIf</code>
-<code><b>kurtosisIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Based on a criteria, gets the kurtosis of a column.
-* ``kurtosisIf(region == 'West', sales)``
-___
--
-<a name="lag" ></a>
-
-### <code>lag</code>
-<code><b>lag(<i>&lt;value&gt;</i> : any, [<i>&lt;number of rows to look before&gt;</i> : number], [<i>&lt;default value&gt;</i> : any]) => any</b></code><br/><br/>
-Gets the value of the first parameter evaluated n rows before the current row. The second parameter is the number of rows to look back; its default value is 1. If there are fewer preceding rows than the offset, null is returned unless a default value is specified.
-* ``lag(amount, 2)``
-* ``lag(amount, 2000, 100)``
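Over an ordered partition, the offset-and-default behavior can be sketched in plain Python (an illustrative analogue, not the windowing engine):

```python
def lag(rows, offset=1, default=None):
    """For each row, return the value `offset` rows earlier.

    Sketch of the lag() semantics above: when there is no row
    that far back, the default (None, i.e. null) is returned.
    """
    return [rows[i - offset] if i - offset >= 0 else default
            for i in range(len(rows))]

amounts = [100, 200, 300, 400]
print(lag(amounts, 2))      # [None, None, 100, 200]
print(lag(amounts, 2, 0))   # [0, 0, 100, 200] with a default of 0
```

`lead()` below is the mirror image: it looks `offset` rows forward instead of back.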
-___
--
-<a name="last" ></a>
-
-### <code>last</code>
-<code><b>last(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : boolean]) => any</b></code><br/><br/>
-Gets the last value of a column group. If the second parameter ignoreNulls is omitted, it is assumed false.
-* ``last(sales)``
-* ``last(sales, false)``
-___
--
-<a name="lastDayOfMonth" ></a>
-
-### <code>lastDayOfMonth</code>
-<code><b>lastDayOfMonth(<i>&lt;value1&gt;</i> : datetime) => date</b></code><br/><br/>
-Gets the last date of the month given a date.
-* ``lastDayOfMonth(toDate('2009-01-12')) -> toDate('2009-01-31')``
-___
--
-<a name="lead" ></a>
-
-### <code>lead</code>
-<code><b>lead(<i>&lt;value&gt;</i> : any, [<i>&lt;number of rows to look after&gt;</i> : number], [<i>&lt;default value&gt;</i> : any]) => any</b></code><br/><br/>
-Gets the value of the first parameter evaluated n rows after the current row. The second parameter is the number of rows to look forward; its default value is 1. If there are fewer following rows than the offset, null is returned unless a default value is specified.
-* ``lead(amount, 2)``
-* ``lead(amount, 2000, 100)``
-___
--
-<a name="least" ></a>
-
-### <code>least</code>
-<code><b>least(<i>&lt;value1&gt;</i> : any, ...) => any</b></code><br/><br/>
-Returns the smallest value among the supplied values.
-* ``least(10, 30, 15, 20) -> 10``
-* ``least(toDate('2010-12-12'), toDate('2011-12-12'), toDate('2000-12-12')) -> toDate('2000-12-12')``
-___
--
-<a name="left" ></a>
-
-### <code>left</code>
-<code><b>left(<i>&lt;string to subset&gt;</i> : string, <i>&lt;number of characters&gt;</i> : integral) => string</b></code><br/><br/>
-Extracts a substring starting at index 1 with the given number of characters. Same as SUBSTRING(str, 1, n).
-* ``left('bojjus', 2) -> 'bo'``
-* ``left('bojjus', 20) -> 'bojjus'``
-___
--
-<a name="length" ></a>
-
-### <code>length</code>
-<code><b>length(<i>&lt;value1&gt;</i> : string) => integer</b></code><br/><br/>
-Returns the length of the string.
-* ``length('dumbo') -> 5``
-___
--
-<a name="lesser" ></a>
-
-### <code>lesser</code>
-<code><b>lesser(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
-Comparison less operator. Same as < operator.
-* ``lesser(12, 24) -> true``
-* ``('abcd' < 'abc') -> false``
-* ``(toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS') < toTimestamp('2019-02-05 08:21:34.890', 'yyyy-MM-dd HH:mm:ss.SSS')) -> true``
-___
--
-<a name="lesserOrEqual" ></a>
-
-### <code>lesserOrEqual</code>
-<code><b>lesserOrEqual(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
-Comparison lesser than or equal operator. Same as <= operator.
-* ``lesserOrEqual(12, 12) -> true``
-* ``('dumbo' <= 'dum') -> false``
-___
--
-<a name="levenshtein" ></a>
-
-### <code>levenshtein</code>
-<code><b>levenshtein(<i>&lt;from string&gt;</i> : string, <i>&lt;to string&gt;</i> : string) => integer</b></code><br/><br/>
-Gets the levenshtein distance between two strings.
-* ``levenshtein('boys', 'girls') -> 4``
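The Levenshtein distance counts the minimum number of single-character edits (insertions, deletions, substitutions) to turn one string into the other. A standard dynamic-programming sketch in Python:

```python
def levenshtein(a, b):
    """Minimum number of single-character edits (insert, delete,
    substitute) needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

print(levenshtein('boys', 'girls'))  # 4, as in the example above
```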
-___
--
-<a name="like" ></a>
-
-### <code>like</code>
-<code><b>like(<i>&lt;string&gt;</i> : string, <i>&lt;pattern match&gt;</i> : string) => boolean</b></code><br/><br/>
-Checks if the string matches the supplied pattern. The pattern is a string that is matched literally, with the following special symbols: _ matches any one character in the input (similar to . in ```posix``` regular expressions), and % matches zero or more characters in the input (similar to .* in ```posix``` regular expressions).
-The escape character is ''. If an escape character precedes a special symbol or another escape character, the following character is matched literally. It is invalid to escape any other character.
-* ``like('icecream', 'ice%') -> true``
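The wildcard rules can be illustrated by translating the pattern into a regular expression. A rough Python sketch (escape handling omitted for brevity):

```python
import re

def like(s, pattern):
    """Rough analogue of the like() wildcards above: '_' matches any
    single character, '%' matches any run of characters; everything
    else is matched literally."""
    regex = ''.join('.' if c == '_' else '.*' if c == '%' else re.escape(c)
                    for c in pattern)
    return re.fullmatch(regex, s, re.DOTALL) is not None

print(like('icecream', 'ice%'))  # True
print(like('icecream', 'ice_'))  # False: '_' matches exactly one char
```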
-___
--
-<a name="locate" ></a>
-
-### <code>locate</code>
-<code><b>locate(<i>&lt;substring to find&gt;</i> : string, <i>&lt;string&gt;</i> : string, [<i>&lt;from index - 1-based&gt;</i> : integral]) => integer</b></code><br/><br/>
-Finds the position (1-based) of the substring within a string, starting at a certain position. If the position is omitted, the search starts from the beginning of the string. Returns 0 if the substring is not found.
-* ``locate('mbo', 'dumbo') -> 3``
-* ``locate('o', 'microsoft', 6) -> 7``
-* ``locate('bad', 'good') -> 0``
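The 1-based convention maps directly onto Python's 0-based `str.find`; a small sketch of the equivalence:

```python
def locate(substring, string, from_index=1):
    """1-based position of substring within string, searching from
    from_index (also 1-based); 0 when not found — mirroring the
    locate() behavior described above."""
    pos = string.find(substring, from_index - 1)
    return pos + 1  # -1 (not found) becomes 0; a 0-based hit becomes 1-based

print(locate('mbo', 'dumbo'))       # 3
print(locate('o', 'microsoft', 6))  # 7
print(locate('bad', 'good'))        # 0
```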
-___
--
-<a name="log" ></a>
-
-### <code>log</code>
-<code><b>log(<i>&lt;value1&gt;</i> : number, [<i>&lt;value2&gt;</i> : number]) => double</b></code><br/><br/>
-Calculates the logarithm of a value. An optional base can be supplied; otherwise Euler's number is used.
-* ``log(100, 10) -> 2``
-___
--
-<a name="log10" ></a>
-
-### <code>log10</code>
-<code><b>log10(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates the base-10 logarithm of a value.
-* ``log10(100) -> 2``
-___
--
-<a name="lookup" ></a>
-
-### <code>lookup</code>
-<code><b>lookup(key, key2, ...) => complex[]</b></code><br/><br/>
-Looks up the first row from the cached sink using the specified keys that match the keys from the cached sink.
-* ``cacheSink#lookup(movieId)``
-___
--
-<a name="lower" ></a>
-
-### <code>lower</code>
-<code><b>lower(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
-Lowercases a string.
-* ``lower('GunChus') -> 'gunchus'``
-___
--
-<a name="lpad" ></a>
-
-### <code>lpad</code>
-<code><b>lpad(<i>&lt;string to pad&gt;</i> : string, <i>&lt;final padded length&gt;</i> : integral, <i>&lt;padding&gt;</i> : string) => string</b></code><br/><br/>
-Left pads the string by the supplied padding until it is of a certain length. If the string is equal to or greater than the length, then it is trimmed to the length.
-* ``lpad('dumbo', 10, '-') -> '-----dumbo'``
-* ``lpad('dumbo', 4, '-') -> 'dumb'``
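The pad-or-truncate rule can be sketched in a few lines of Python (an illustrative analogue of `lpad`; `rpad` below is symmetric):

```python
def lpad(s, length, padding):
    """Left-pad s with the padding string until it reaches length;
    if s is already at least that long, truncate it to length —
    the lpad() rules described above."""
    if len(s) >= length:
        return s[:length]
    pad = (padding * length)[:length - len(s)]  # repeat, then cut to fit
    return pad + s

print(lpad('dumbo', 10, '-'))  # '-----dumbo'
print(lpad('dumbo', 4, '-'))   # 'dumb'
```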
-
-___
--
-<a name="ltrim" ></a>
-
-### <code>ltrim</code>
-<code><b>ltrim(<i>&lt;string to trim&gt;</i> : string, [<i>&lt;trim characters&gt;</i> : string]) => string</b></code><br/><br/>
-Left trims a string of leading characters. If the second parameter is unspecified, it trims whitespace; otherwise it trims any character specified in the second parameter.
-* ``ltrim(' dumbo ') -> 'dumbo '``
-* ``ltrim('!--!du!mbo!', '-!') -> 'du!mbo!'``
-___
--
-<a name="map" ></a>
-
-### <code>map</code>
-<code><b>map(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => any</b></code><br/><br/>
-Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item.
-* ``map([1, 2, 3, 4], #item + 2) -> [3, 4, 5, 6]``
-* ``map(['a', 'b', 'c', 'd'], #item + '_processed') -> ['a_processed', 'b_processed', 'c_processed', 'd_processed']``
-___
--
-<a name="mapAssociation" ></a>
-
-### <code>mapAssociation</code>
-<code><b>mapAssociation(<i>&lt;value1&gt;</i> : map, <i>&lt;value2&gt;</i> : binaryFunction) => array</b></code><br/><br/>
-Transforms a map by associating the keys to new values. Returns an array. It takes a mapping function where you can address the item as #key and current value as #value.
-* ``mapAssociation(['bojjus' -> 'gunchus', 'appa' -> 'ammi'], @(key = #key, value = #value)) => [@(key = 'bojjus', value = 'gunchus'), @(key = 'appa', value = 'ammi')]``
-___
--
-<a name="mapIf" ></a>
-
-### <code>mapIf</code>
-<code><b>mapIf (<i>\<value1\></i> : array, <i>\<value2\></i> : binaryfunction, \<value3\>: binaryFunction) => any</b></code><br/><br/>
-Conditionally maps an array to another array of the same or smaller length. The values can be of any datatype, including structTypes. It takes a mapping function where you can address the item in the array as #item and the current index as #index. For deeply nested maps you can refer to the parent maps using the ``#item_n(#item_1, #item_2...)`` notation.
-* ``mapIf([10, 20, 30], #item > 10, #item + 5) -> [25, 35]``
-* ``mapIf(['icecream', 'cake', 'soda'], length(#item) > 4, upper(#item)) -> ['ICECREAM', 'CAKE']``
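The filter-then-transform behavior is analogous to a conditional list comprehension; a minimal Python sketch:

```python
def map_if(items, condition, transform):
    """Conditionally map: keep only the items satisfying the
    condition, transformed — a sketch of the mapIf() behavior
    above, where the result may be shorter than the input."""
    return [transform(item) for item in items if condition(item)]

print(map_if([10, 20, 30], lambda x: x > 10, lambda x: x + 5))
# [25, 35]
```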
-___
--
-<a name="mapIndex" ></a>
-
-### <code>mapIndex</code>
-<code><b>mapIndex(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : binaryfunction) => any</b></code><br/><br/>
-Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item and a reference to the element index as #index.
-* ``mapIndex([1, 2, 3, 4], #item + 2 + #index) -> [4, 6, 8, 10]``
-___
--
-<a name="mapLoop" ></a>
-
-### <code>mapLoop</code>
-<code><b>mapLoop(<i>\<value1\></i> : integer, <i>\<value2\></i> : unaryfunction) => any</b></code><br/><br/>
-Loops through from 1 to length to create an array of that length. It takes a mapping function where you can address the index in the array as #index. For deeply nested maps you can refer to the parent maps using the #index_n(#index_1, #index_2...) notation.
-* ``mapLoop(3, #index * 10) -> [10, 20, 30]``
-___
--
-<a name="max" ></a>
-
-### <code>max</code>
-<code><b>max(<i>&lt;value1&gt;</i> : any) => any</b></code><br/><br/>
-Gets the maximum value of a column.
-* ``max(sales)``
-___
--
-<a name="maxIf" ></a>
-
-### <code>maxIf</code>
-<code><b>maxIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
-Based on a criteria, gets the maximum value of a column.
-* ``maxIf(region == 'West', sales)``
-___
--
-<a name="md5" ></a>
-
-### <code>md5</code>
-<code><b>md5(<i>&lt;value1&gt;</i> : any, ...) => string</b></code><br/><br/>
-Calculates the MD5 digest of a set of columns of varying primitive datatypes and returns a 32-character hex string. It can be used to calculate a fingerprint for a row.
-* ``md5(5, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4')) -> '4ce8a880bd621a1ffad0bca905e1bc5a'``
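The fingerprinting idea is to serialize a row's column values and hash them. The sketch below shows the concept in Python; the separator-based serialization is an assumption for illustration, so the digests will not match the data flow engine's own byte layout:

```python
import hashlib

def row_fingerprint(*values):
    """Hash a row's column values into a 32-character hex digest,
    illustrating the md5() fingerprinting idea above. The '|'
    separator is a hypothetical serialization, not the engine's."""
    payload = '|'.join(str(v) for v in values).encode('utf-8')
    return hashlib.md5(payload).hexdigest()

fp = row_fingerprint(5, 'gunchus', 8.2, 'bojjus', True, '2010-04-04')
print(len(fp))  # always 32 hex characters
```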
-___
--
-<a name="mean" ></a>
-
-### <code>mean</code>
-<code><b>mean(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
-Gets the mean of values of a column. Same as AVG.
-* ``mean(sales)``
-___
--
-<a name="meanIf" ></a>
-
-### <code>meanIf</code>
-<code><b>meanIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => number</b></code><br/><br/>
-Based on a criteria gets the mean of values of a column. Same as avgIf.
-* ``meanIf(region == 'West', sales)``
-___
--
-<a name="millisecond" ></a>
-
-### <code>millisecond</code>
-<code><b>millisecond(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => integer</b></code><br/><br/>
-Gets the millisecond value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
-* ``millisecond(toTimestamp('2009-07-30 12:58:59.871', 'yyyy-MM-dd HH:mm:ss.SSS')) -> 871``
-___
--
-<a name="milliseconds" ></a>
-
-### <code>milliseconds</code>
-<code><b>milliseconds(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
-Duration in milliseconds for number of milliseconds.
-* ``milliseconds(2) -> 2L``
-___
--
-<a name="min" ></a>
-
-### <code>min</code>
-<code><b>min(<i>&lt;value1&gt;</i> : any) => any</b></code><br/><br/>
-Gets the minimum value of a column.
-* ``min(sales)``
-___
--
-<a name="minIf" ></a>
-
-### <code>minIf</code>
-<code><b>minIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
-Based on a criteria, gets the minimum value of a column.
-* ``minIf(region == 'West', sales)``
-___
--
-<a name="minus" ></a>
-
-### <code>minus</code>
-<code><b>minus(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
-Subtracts numbers. Subtract number of days from a date. Subtract duration from a timestamp. Subtract two timestamps to get difference in milliseconds. Same as the - operator.
-* ``minus(20, 10) -> 10``
-* ``20 - 10 -> 10``
-* ``minus(toDate('2012-12-15'), 3) -> toDate('2012-12-12')``
-* ``toDate('2012-12-15') - 3 -> toDate('2012-12-12')``
-* ``toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS') + (days(1) + hours(2) - seconds(10)) -> toTimestamp('2019-02-04 07:19:18.871', 'yyyy-MM-dd HH:mm:ss.SSS')``
-* ``toTimestamp('2019-02-03 05:21:34.851', 'yyyy-MM-dd HH:mm:ss.SSS') - toTimestamp('2019-02-03 05:21:36.923', 'yyyy-MM-dd HH:mm:ss.SSS') -> -2072``
-___
--
-<a name="minute" ></a>
-
-### <code>minute</code>
-<code><b>minute(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => integer</b></code><br/><br/>
-Gets the minute value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
-* ``minute(toTimestamp('2009-07-30 12:58:59')) -> 58``
-* ``minute(toTimestamp('2009-07-30 12:58:59'), 'PST') -> 58``
-___
--
-<a name="minutes" ></a>
-
-### <code>minutes</code>
-<code><b>minutes(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
-Duration in milliseconds for number of minutes.
-* ``minutes(2) -> 120000L``
-___
--
-<a name="mlookup" ></a>
-
-### <code>mlookup</code>
-<code><b>mlookup(key, key2, ...) => complex[]</b></code><br/><br/>
-Looks up all matching rows from the cached sink using the specified keys that match the keys from the cached sink.
-* ``cacheSink#mlookup(movieId)``
-___
--
-<a name="mod" ></a>
-
-### <code>mod</code>
-<code><b>mod(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
-Modulus of pair of numbers. Same as the % operator.
-* ``mod(20, 8) -> 4``
-* ``20 % 8 -> 4``
-___
--
-<a name="month" ></a>
-
-### <code>month</code>
-<code><b>month(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
-Gets the month value of a date or timestamp.
-* ``month(toDate('2012-8-8')) -> 8``
-___
--
-<a name="monthsBetween" ></a>
-
-### <code>monthsBetween</code>
-<code><b>monthsBetween(<i>&lt;from date/timestamp&gt;</i> : datetime, <i>&lt;to date/timestamp&gt;</i> : datetime, [<i>&lt;roundoff&gt;</i> : boolean], [<i>&lt;time zone&gt;</i> : string]) => double</b></code><br/><br/>
-Gets the number of months between two dates. You can round off the calculation. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
-* ``monthsBetween(toTimestamp('1997-02-28 10:30:00'), toDate('1996-10-30')) -> 3.94959677``
-___
--
-<a name="multiply" ></a>
-
-### <code>multiply</code>
-<code><b>multiply(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
-Multiplies pair of numbers. Same as the * operator.
-* ``multiply(20, 10) -> 200``
-* ``20 * 10 -> 200``
-___
--
-<a name="negate" ></a>
-
-### <code>negate</code>
-<code><b>negate(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
-Negates a number. Turns positive numbers to negative and vice versa.
-* ``negate(13) -> -13``
-___
--
-<a name="nextSequence" ></a>
-
-### <code>nextSequence</code>
-<code><b>nextSequence() => long</b></code><br/><br/>
-Returns the next unique sequence. The number is consecutive only within a partition and is prefixed by the partitionId.
-* ``nextSequence() == 12313112 -> false``
-___
--
-<a name="normalize" ></a>
-
-### <code>normalize</code>
-<code><b>normalize(<i>&lt;String to normalize&gt;</i> : string) => string</b></code><br/><br/>
-Normalizes the string value to separate accented unicode characters.
-* ``regexReplace(normalize('boýs'), `\p{M}`, '') -> 'boys'``
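The same decompose-then-strip-marks technique can be sketched with Python's `unicodedata` module (an analogue of pairing `normalize()` with a `regexReplace()` on `\p{M}`):

```python
import unicodedata

def strip_accents(s):
    """Decompose accented characters (NFD) and drop the combining
    marks — a sketch of normalizing and then removing \\p{M}."""
    decomposed = unicodedata.normalize('NFD', s)
    return ''.join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_accents('bo\u00FDs'))  # 'boys' (ý decomposes to y + combining acute)
```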
-___
--
-<a name="not" ></a>
-
-### <code>not</code>
-<code><b>not(<i>&lt;value1&gt;</i> : boolean) => boolean</b></code><br/><br/>
-Logical negation operator.
-* ``not(true) -> false``
-* ``not(10 == 20) -> true``
-___
--
-<a name="notEquals" ></a>
-
-### <code>notEquals</code>
-<code><b>notEquals(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
-Comparison not equals operator. Same as != operator.
-* ``12 != 24 -> true``
-* ``'bojjus' != 'bo' + 'jjus' -> false``
-___
--
-<a name="notNull" ></a>
-
-### <code>notNull</code>
-<code><b>notNull(<i>&lt;value1&gt;</i> : any) => boolean</b></code><br/><br/>
-Checks if the value is not NULL.
-* ``notNull(NULL()) -> false``
-* ``notNull('') -> true``
-___
--
-<a name="nTile" ></a>
-
-### <code>nTile</code>
-<code><b>nTile([<i>&lt;value1&gt;</i> : integer]) => integer</b></code><br/><br/>
-The ```NTile``` function divides the rows for each window partition into `n` buckets, ranging from 1 to at most `n`. Bucket values differ by at most 1. If the number of rows in the partition does not divide evenly into the number of buckets, the remainder values are distributed one per bucket, starting with the first bucket. The ```NTile``` function is useful for the calculation of tertiles, quartiles, deciles, and other common summary statistics. The function calculates two variables during initialization: the size of a regular bucket, and the number of buckets that receive one extra row. Both variables are based on the size of the current partition. During the calculation the function keeps track of the current row number, the current bucket number, and the row number at which the bucket will change (bucketThreshold). When the current row number reaches the bucket threshold, the bucket value is increased by one and the threshold is increased by the bucket size (plus one extra if the current bucket is padded).
-* ``nTile()``
-* ``nTile(numOfBuckets)``
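The bucket-sizing rules above can be sketched directly: the base bucket size is the partition size divided by `n`, and the remainder buckets each get one extra row, starting from the first.

```python
def ntile_buckets(row_count, n):
    """Bucket number (1..n) for each row in a partition, following
    the NTile sizing rules described above."""
    base, extra = divmod(row_count, n)  # base size, number of padded buckets
    buckets = []
    for b in range(1, n + 1):
        size = base + (1 if b <= extra else 0)
        buckets.extend([b] * size)
    return buckets

print(ntile_buckets(7, 3))  # [1, 1, 1, 2, 2, 3, 3]
```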
-___
--
-<a name="null" ></a>
-
-### <code>null</code>
-<code><b>null() => null</b></code><br/><br/>
-Returns a NULL value. Use the function syntax `null()` if there is a column named 'null'. Any operation that uses a NULL value will result in a NULL.
-* ``isNull('dumbo' + null) -> true``
-* ``isNull(10 * null) -> true``
-* ``isNull('') -> false``
-* ``isNull(10 + 20) -> false``
-* ``isNull(10/0) -> true``
-___
--
-<a name="or" ></a>
-
-### <code>or</code>
-<code><b>or(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : boolean) => boolean</b></code><br/><br/>
-Logical OR operator. Same as ||.
-* ``or(true, false) -> true``
-* ``true || false -> true``
-___
--
-<a name="originColumns" ></a>
-
-### <code>originColumns</code>
-<code><b>originColumns(<i>&lt;streamName&gt;</i> : string) => any</b></code><br/><br/>
-Gets all output columns for an origin stream where columns were created. Must be enclosed in another function.
-* ``array(toString(originColumns('source1')))``
-___
--
-<a name="output" ></a>
-
-### <code>output</code>
-<code><b>output() => any</b></code><br/><br/>
-Returns the first row of the results of the cache sink.
-* ``cacheSink#output()``
-___
--
-<a name="outputs" ></a>
-
-### <code>outputs</code>
-<code><b>outputs() => any</b></code><br/><br/>
-Returns the entire output row set of the results of the cache sink.
-* ``cacheSink#outputs()``
-___
--
-<a name="partitionId" ></a>
-
-### <code>partitionId</code>
-<code><b>partitionId() => integer</b></code><br/><br/>
-Returns the current partition ID the input row is in.
-* ``partitionId()``
-___
--
-<a name="pMod" ></a>
-
-### <code>pMod</code>
-<code><b>pMod(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
-Positive modulus of a pair of numbers.
-* ``pmod(-20, 8) -> 4``
-___
--
-<a name="power" ></a>
-
-### <code>power</code>
-<code><b>power(<i>&lt;value1&gt;</i> : number, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Raises one number to the power of another.
-* ``power(10, 2) -> 100``
-___
--
-<a name="radians" ></a>
-
-### <code>radians</code>
-<code><b>radians(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Converts degrees to radians
-* ``radians(180) => 3.141592653589793``
-___
--
-<a name="random" ></a>
-
-### <code>random</code>
-<code><b>random(<i>&lt;value1&gt;</i> : integral) => long</b></code><br/><br/>
-Returns a random number given an optional seed within a partition. The seed should be a fixed value and is used in conjunction with the partitionId to produce random values.
-* ``random(1) == 1 -> false``
-___
--
-<a name="rank" ></a>
-
-### <code>rank</code>
-<code><b>rank() => integer</b></code><br/><br/>
-Computes the rank of a value in a group of values specified in a window's order by clause. The result is one plus the number of rows preceding or equal to the current row in the ordering of the partition. The values will produce gaps in the sequence. Rank works even when data is not sorted and looks for change in values.
-* ``rank()``
-___
--
-<a name="reassociate" ></a>
-
-### <code>reassociate</code>
-<code><b>reassociate(<i>&lt;value1&gt;</i> : map, <i>&lt;value2&gt;</i> : binaryFunction) => map</b></code><br/><br/>
-Transforms a map by associating the keys to new values. It takes a mapping function where you can address the item as #key and current value as #value.
-* ``reassociate(['fruit' -> 'apple', 'vegetable' -> 'tomato'], substring(#key, 1, 1) + substring(#value, 1, 1)) => ['fruit' -> 'fa', 'vegetable' -> 'vt']``
-___
-
--
-<a name="reduce" ></a>
-
-### <code>reduce</code>
-<code><b>reduce(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : any, <i>&lt;value3&gt;</i> : binaryfunction, <i>&lt;value4&gt;</i> : unaryfunction) => any</b></code><br/><br/>
-Accumulates elements in an array. Reduce expects a reference to an accumulator and to one element in the first expression function as #acc and #item, and it expects the resulting value as #result to be used in the second expression function.
-* ``toString(reduce(['1', '2', '3', '4'], '0', #acc + #item, #result)) -> '01234'``
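The accumulator pattern maps onto Python's `functools.reduce`; a sketch of the example above, where `'0'` is the starting accumulator (#acc) and each element (#item) is appended to it:

```python
from functools import reduce as py_reduce

# Fold the array into one string, starting from '0'.
result = py_reduce(lambda acc, item: acc + item, ['1', '2', '3', '4'], '0')
print(result)  # '01234'
```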
-___
--
-<a name="regexExtract" ></a>
-
-### <code>regexExtract</code>
-<code><b>regexExtract(<i>&lt;string&gt;</i> : string, <i>&lt;regex to find&gt;</i> : string, [<i>&lt;match group 1-based index&gt;</i> : integral]) => string</b></code><br/><br/>
-Extracts a matching substring for a given regex pattern. The last parameter identifies the match group and defaults to 1 if omitted. Use back quotes (`<regex>`) to match a string without escaping.
-* ``regexExtract('Cost is between 600 and 800 dollars', '(\\d+) and (\\d+)', 2) -> '800'``
-* ``regexExtract('Cost is between 600 and 800 dollars', `(\d+) and (\d+)`, 2) -> '800'``
-___
--
-<a name="regexMatch" ></a>
-
-### <code>regexMatch</code>
-<code><b>regexMatch(<i>&lt;string&gt;</i> : string, <i>&lt;regex to match&gt;</i> : string) => boolean</b></code><br/><br/>
-Checks if the string matches the given regex pattern. Use back quotes (`<regex>`) to match a string without escaping.
-* ``regexMatch('200.50', '(\\d+).(\\d+)') -> true``
-* ``regexMatch('200.50', `(\d+).(\d+)`) -> true``
-___
--
-<a name="regexReplace" ></a>
-
-### <code>regexReplace</code>
-<code><b>regexReplace(<i>&lt;string&gt;</i> : string, <i>&lt;regex to find&gt;</i> : string, <i>&lt;substring to replace&gt;</i> : string) => string</b></code><br/><br/>
-Replaces all occurrences of a regex pattern with another substring in the given string. Use back quotes (`<regex>`) to match a string without escaping.
-* ``regexReplace('100 and 200', '(\\d+)', 'bojjus') -> 'bojjus and bojjus'``
-* ``regexReplace('100 and 200', `(\d+)`, 'gunchus') -> 'gunchus and gunchus'``
-___
--
-<a name="regexSplit" ></a>
-
-### <code>regexSplit</code>
-<code><b>regexSplit(<i>&lt;string to split&gt;</i> : string, <i>&lt;regex expression&gt;</i> : string) => array</b></code><br/><br/>
-Splits a string based on a delimiter based on regex and returns an array of strings.
-* ``regexSplit('bojjusAgunchusBdumbo', `[CAB]`) -> ['bojjus', 'gunchus', 'dumbo']``
-* ``regexSplit('bojjusAgunchusBdumboC', `[CAB]`) -> ['bojjus', 'gunchus', 'dumbo', '']``
-* ``(regexSplit('bojjusAgunchusBdumboC', `[CAB]`)[1]) -> 'bojjus'``
-* ``isNull(regexSplit('bojjusAgunchusBdumboC', `[CAB]`)[20]) -> true``
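Python's `re.split` shows the same splitting behavior, including the trailing empty string when the input ends with a delimiter (note that Python arrays are 0-based, while data flow arrays are 1-based):

```python
import re

parts1 = re.split('[CAB]', 'bojjusAgunchusBdumbo')
parts2 = re.split('[CAB]', 'bojjusAgunchusBdumboC')
print(parts1)  # ['bojjus', 'gunchus', 'dumbo']
print(parts2)  # ['bojjus', 'gunchus', 'dumbo', ''] — trailing delimiter yields ''
```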
-___
--
-<a name="replace" ></a>
-
-### <code>replace</code>
-<code><b>replace(<i>&lt;string&gt;</i> : string, <i>&lt;substring to find&gt;</i> : string, [<i>&lt;substring to replace&gt;</i> : string]) => string</b></code><br/><br/>
-Replaces all occurrences of a substring with another substring in the given string. If the last parameter is omitted, it defaults to an empty string.
-* ``replace('doggie dog', 'dog', 'cat') -> 'catgie cat'``
-* ``replace('doggie dog', 'dog', '') -> 'gie '``
-* ``replace('doggie dog', 'dog') -> 'gie '``
-___
--
-<a name="reverse" ></a>
-
-### <code>reverse</code>
-<code><b>reverse(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
-Reverses a string.
-* ``reverse('gunchus') -> 'suhcnug'``
-___
--
-<a name="right" ></a>
-
-### <code>right</code>
-<code><b>right(<i>&lt;string to subset&gt;</i> : string, <i>&lt;number of characters&gt;</i> : integral) => string</b></code><br/><br/>
-Extracts a substring with the given number of characters from the right. Same as SUBSTRING(str, LENGTH(str) - n + 1, n).
-* ``right('bojjus', 2) -> 'us'``
-* ``right('bojjus', 20) -> 'bojjus'``
-___
--
-<a name="rlike" ></a>
-
-### <code>rlike</code>
-<code><b>rlike(<i>&lt;string&gt;</i> : string, <i>&lt;pattern match&gt;</i> : string) => boolean</b></code><br/><br/>
-Checks if the string matches the given regex pattern.
-* ``rlike('200.50', `(\d+).(\d+)`) -> true``
-* ``rlike('bogus', `M[0-9]+.*`) -> false``
-___
--
-<a name="round" ></a>
-
-### <code>round</code>
-<code><b>round(<i>&lt;number&gt;</i> : number, [<i>&lt;scale to round&gt;</i> : number], [<i>&lt;rounding option&gt;</i> : integral]) => double</b></code><br/><br/>
-Rounds a number given an optional scale and an optional rounding mode. If the scale is omitted, it defaults to 0. If the mode is omitted, it defaults to ROUND_HALF_UP(5). The values for the rounding mode are:
-1 - ROUND_UP
-2 - ROUND_DOWN
-3 - ROUND_CEILING
-4 - ROUND_FLOOR
-5 - ROUND_HALF_UP
-6 - ROUND_HALF_DOWN
-7 - ROUND_HALF_EVEN
-8 - ROUND_UNNECESSARY.
-* ``round(100.123) -> 100.0``
-* ``round(2.5, 0) -> 3.0``
-* ``round(5.3999999999999995, 2, 7) -> 5.40``
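Python's `decimal` module exposes the same rounding-mode family, which makes the difference between the half-up and half-even modes easy to see on a .5 tie:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# Mode 5 (ROUND_HALF_UP) vs mode 7 (ROUND_HALF_EVEN) on a tie.
half_up = Decimal('2.5').quantize(Decimal('1'), rounding=ROUND_HALF_UP)
half_even = Decimal('2.5').quantize(Decimal('1'), rounding=ROUND_HALF_EVEN)
print(half_up)    # 3
print(half_even)  # 2 (ties go to the even neighbor)
```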
-___
--
-<a name="rowNumber" ></a>
-
-### <code>rowNumber</code>
-<code><b>rowNumber() => integer</b></code><br/><br/>
-Assigns a sequential row numbering for rows in a window starting with 1.
-* ``rowNumber()``
-___
--
-<a name="rpad" ></a>
-
-### <code>rpad</code>
-<code><b>rpad(<i>&lt;string to pad&gt;</i> : string, <i>&lt;final padded length&gt;</i> : integral, <i>&lt;padding&gt;</i> : string) => string</b></code><br/><br/>
-Right pads the string by the supplied padding until it is of a certain length. If the string is equal to or greater than the length, then it is trimmed to the length.
-* ``rpad('dumbo', 10, '-') -> 'dumbo-----'``
-* ``rpad('dumbo', 4, '-') -> 'dumb'``
-* ``rpad('dumbo', 8, '<>') -> 'dumbo<><'``
-___
--
-<a name="rtrim" ></a>
-
-### <code>rtrim</code>
-<code><b>rtrim(<i>&lt;string to trim&gt;</i> : string, [<i>&lt;trim characters&gt;</i> : string]) => string</b></code><br/><br/>
-Right trims a string of trailing characters. If the second parameter is unspecified, it trims whitespace; otherwise it trims any character specified in the second parameter.
-* ``rtrim(' dumbo ') -> ' dumbo'``
-* ``rtrim('!--!du!mbo!', '-!') -> '!--!du!mbo'``
-___
--
-<a name="second" ></a>
-
-### <code>second</code>
-<code><b>second(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => integer</b></code><br/><br/>
-Gets the second value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
-* ``second(toTimestamp('2009-07-30 12:58:59')) -> 59``
-___
--
-<a name="seconds" ></a>
-
-### <code>seconds</code>
-<code><b>seconds(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
-Duration in milliseconds for number of seconds.
-* ``seconds(2) -> 2000L``
-___
--
-<a name="setBitSet" ></a>
-
-### <code>setBitSet</code>
-<code><b>setBitSet (<i>\<value1\></i>: array, <i>\<value2\></i>:array) => array</b></code><br/><br/>
-Sets bit positions in this bitset.
-* ``setBitSet(toBitSet([10, 32]), [98]) => [4294968320L, 17179869184L]``
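The long-array representation in the example can be reproduced by packing bit positions into 64-bit words, where bit p lives in word p // 64 at offset p % 64. A Python sketch:

```python
def set_bits(words, positions, word_bits=64):
    """Set the given bit positions in a list of 64-bit words — a
    sketch of the bitset long-array layout shown above."""
    words = list(words)
    for p in positions:
        word, offset = divmod(p, word_bits)
        while len(words) <= word:   # grow the word array as needed
            words.append(0)
        words[word] |= 1 << offset
    return words

bitset = set_bits([], [10, 32])  # analogous to toBitSet([10, 32])
print(set_bits(bitset, [98]))    # [4294968320, 17179869184]
```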
-___
--
-<a name="sha1" ></a>
-
-### <code>sha1</code>
-<code><b>sha1(<i>&lt;value1&gt;</i> : any, ...) => string</b></code><br/><br/>
-Calculates the SHA-1 digest of a set of columns of varying primitive datatypes and returns a 40-character hex string. It can be used to calculate a fingerprint for a row.
-* ``sha1(5, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4')) -> '46d3b478e8ec4e1f3b453ac3d8e59d5854e282bb'``
-___
--
-<a name="sha2" ></a>
-
-### <code>sha2</code>
-<code><b>sha2(<i>&lt;value1&gt;</i> : integer, <i>&lt;value2&gt;</i> : any, ...) => string</b></code><br/><br/>
-Calculates the SHA-2 digest of a set of columns of varying primitive datatypes, given a bit length that can only be 0 (defaults to 256), 224, 256, 384, or 512. It can be used to calculate a fingerprint for a row.
-* ``sha2(256, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4')) -> 'afe8a553b1761c67d76f8c31ceef7f71b66a1ee6f4e6d3b5478bf68b47d06bd3'``
-___
--
-<a name="sin" ></a>
-
-### <code>sin</code>
-<code><b>sin(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates a sine value.
-* ``sin(2) -> 0.9092974268256817``
-___
--
-<a name="sinh" ></a>
-
-### <code>sinh</code>
-<code><b>sinh(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates a hyperbolic sine value.
-* ``sinh(0) -> 0.0``
-___
--
-<a name="size" ></a>
-
-### <code>size</code>
-<code><b>size(<i>&lt;value1&gt;</i> : any) => integer</b></code><br/><br/>
-Finds the size of an array or map type.
-* ``size(['element1', 'element2']) -> 2``
-* ``size([1,2,3]) -> 3``
-___
--
-<a name="skewness" ></a>
-
-### <code>skewness</code>
-<code><b>skewness(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Gets the skewness of a column.
-* ``skewness(sales)``
-___
--
-<a name="skewnessIf" ></a>
-
-### <code>skewnessIf</code>
-<code><b>skewnessIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Based on a criteria, gets the skewness of a column.
-* ``skewnessIf(region == 'West', sales)``
-___
--
-<a name="slice" ></a>
-
-### <code>slice</code>
-<code><b>slice(<i>&lt;array to slice&gt;</i> : array, <i>&lt;from 1-based index&gt;</i> : integral, [<i>&lt;number of items&gt;</i> : integral]) => array</b></code><br/><br/>
-Extracts a subset of an array from a position. Position is 1 based. If the length is omitted, it defaults to the end of the array.
-* ``slice([10, 20, 30, 40], 1, 2) -> [10, 20]``
-* ``slice([10, 20, 30, 40], 2) -> [20, 30, 40]``
-* ``slice([10, 20, 30, 40], 2)[1] -> 20``
-* ``isNull(slice([10, 20, 30, 40], 2)[0]) -> true``
-* ``isNull(slice([10, 20, 30, 40], 2)[20]) -> true``
-* ``slice(['a', 'b', 'c', 'd'], 8) -> []``
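The 1-based slicing behavior above can be sketched in Python (a hypothetical analogue; the NULL result for out-of-bounds element access is not modeled here):

```python
def slice_array(arr, start, length=None):
    # start is 1-based; an omitted length means "to the end of the array"
    i = start - 1
    if length is None:
        return arr[i:]
    return arr[i:i + length]

print(slice_array([10, 20, 30, 40], 1, 2))   # [10, 20]
print(slice_array([10, 20, 30, 40], 2))      # [20, 30, 40]
print(slice_array(['a', 'b', 'c', 'd'], 8))  # []
```

A start past the end of the array yields an empty array rather than an error, matching the last example above.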
-___
--
-<a name="sort" ></a>
-
-### <code>sort</code>
-<code><b>sort(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : binaryfunction) => array</b></code><br/><br/>
-Sorts the array using the provided predicate function. Sort expects a reference to two consecutive elements in the expression function as #item1 and #item2.
-* ``sort([4, 8, 2, 3], compare(#item1, #item2)) -> [2, 3, 4, 8]``
-* ``sort(['a3', 'b2', 'c1'], iif(right(#item1, 1) >= right(#item2, 1), 1, -1)) -> ['c1', 'b2', 'a3']``
-___
--
-<a name="soundex" ></a>
-
-### <code>soundex</code>
-<code><b>soundex(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
-Gets the `soundex` code for the string.
-* ``soundex('genius') -> 'G520'``
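For reference, here is a sketch of the classic American Soundex encoding in Python. This assumes the function follows the standard algorithm; edge-case handling (for example around 'h' and 'w') may differ in the actual implementation.

```python
SOUNDEX_CODES = {
    **dict.fromkeys('bfpv', '1'), **dict.fromkeys('cgjkqsxz', '2'),
    **dict.fromkeys('dt', '3'), 'l': '4',
    **dict.fromkeys('mn', '5'), 'r': '6',
}

def soundex(word):
    w = word.lower()
    first = w[0].upper()
    prev = SOUNDEX_CODES.get(w[0], '')
    digits = []
    for ch in w[1:]:
        code = SOUNDEX_CODES.get(ch, '')
        if code and code != prev:
            digits.append(code)
        if ch not in 'hw':  # h and w do not reset the previous code
            prev = code
    # Keep the first letter, pad with zeros, truncate to 4 characters.
    return (first + ''.join(digits) + '000')[:4]

print(soundex('genius'))  # G520
```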
-___
--
-<a name="split" ></a>
-
-### <code>split</code>
-<code><b>split(<i>&lt;string to split&gt;</i> : string, <i>&lt;split characters&gt;</i> : string) => array</b></code><br/><br/>
-Splits a string based on a delimiter and returns an array of strings.
-* ``split('bojjus,guchus,dumbo', ',') -> ['bojjus', 'guchus', 'dumbo']``
-* ``split('bojjus,guchus,dumbo', '|') -> ['bojjus,guchus,dumbo']``
-* ``split('bojjus, guchus, dumbo', ', ') -> ['bojjus', 'guchus', 'dumbo']``
-* ``split('bojjus, guchus, dumbo', ', ')[1] -> 'bojjus'``
-* ``isNull(split('bojjus, guchus, dumbo', ', ')[0]) -> true``
-* ``isNull(split('bojjus, guchus, dumbo', ', ')[20]) -> true``
-* ``split('bojjusguchusdumbo', ',') -> ['bojjusguchusdumbo']``
-___
--
-<a name="sqrt" ></a>
-
-### <code>sqrt</code>
-<code><b>sqrt(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates the square root of a number.
-* ``sqrt(9) -> 3``
-___
--
-<a name="startsWith" ></a>
-
-### <code>startsWith</code>
-<code><b>startsWith(<i>&lt;string&gt;</i> : string, <i>&lt;substring to check&gt;</i> : string) => boolean</b></code><br/><br/>
-Checks if the string starts with the supplied string.
-* ``startsWith('dumbo', 'du') -> true``
-___
--
-<a name="stddev" ></a>
-
-### <code>stddev</code>
-<code><b>stddev(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Gets the standard deviation of a column.
-* ``stdDev(sales)``
-___
--
-<a name="stddevIf" ></a>
-
-### <code>stddevIf</code>
-<code><b>stddevIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Based on a criteria, gets the standard deviation of a column.
-* ``stddevIf(region == 'West', sales)``
-___
--
-<a name="stddevPopulation" ></a>
-
-### <code>stddevPopulation</code>
-<code><b>stddevPopulation(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Gets the population standard deviation of a column.
-* ``stddevPopulation(sales)``
-___
--
-<a name="stddevPopulationIf" ></a>
-
-### <code>stddevPopulationIf</code>
-<code><b>stddevPopulationIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Based on a criteria, gets the population standard deviation of a column.
-* ``stddevPopulationIf(region == 'West', sales)``
-___
--
-<a name="stddevSample" ></a>
-
-### <code>stddevSample</code>
-<code><b>stddevSample(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Gets the sample standard deviation of a column.
-* ``stddevSample(sales)``
-___
--
-<a name="stddevSampleIf" ></a>
-
-### <code>stddevSampleIf</code>
-<code><b>stddevSampleIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Based on a criteria, gets the sample standard deviation of a column.
-* ``stddevSampleIf(region == 'West', sales)``
-___
--
-<a name="subDays" ></a>
-
-### <code>subDays</code>
-<code><b>subDays(<i>&lt;date/timestamp&gt;</i> : datetime, <i>&lt;days to subtract&gt;</i> : integral) => datetime</b></code><br/><br/>
-Subtract days from a date or timestamp. Same as the - operator for date.
-* ``subDays(toDate('2016-08-08'), 1) -> toDate('2016-08-07')``
-___
--
-<a name="subMonths" ></a>
-
-### <code>subMonths</code>
-<code><b>subMonths(<i>&lt;date/timestamp&gt;</i> : datetime, <i>&lt;months to subtract&gt;</i> : integral) => datetime</b></code><br/><br/>
-Subtract months from a date or timestamp.
-* ``subMonths(toDate('2016-09-30'), 1) -> toDate('2016-08-31')``
-___
--
-<a name="substring" ></a>
-
-### <code>substring</code>
-<code><b>substring(<i>&lt;string to subset&gt;</i> : string, <i>&lt;from 1-based index&gt;</i> : integral, [<i>&lt;number of characters&gt;</i> : integral]) => string</b></code><br/><br/>
-Extracts a substring of a certain length from a position. Position is 1 based. If the length is omitted, it defaults to the end of the string.
-* ``substring('Cat in the hat', 5, 2) -> 'in'``
-* ``substring('Cat in the hat', 5, 100) -> 'in the hat'``
-* ``substring('Cat in the hat', 5) -> 'in the hat'``
-* ``substring('Cat in the hat', 100, 100) -> ''``
-___
--
-<a name="sum" ></a>
-
-### <code>sum</code>
-<code><b>sum(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
-Gets the aggregate sum of a numeric column.
-* ``sum(col)``
-___
--
-<a name="sumDistinct" ></a>
-
-### <code>sumDistinct</code>
-<code><b>sumDistinct(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
-Gets the aggregate sum of distinct values of a numeric column.
-* ``sumDistinct(col)``
-___
--
-<a name="sumDistinctIf" ></a>
-
-### <code>sumDistinctIf</code>
-<code><b>sumDistinctIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => number</b></code><br/><br/>
-Based on criteria, gets the aggregate sum of distinct values of a numeric column. The condition can be based on any column.
-* ``sumDistinctIf(state == 'CA' && commission < 10000, sales)``
-* ``sumDistinctIf(true, sales)``
-___
--
-<a name="sumIf" ></a>
-
-### <code>sumIf</code>
-<code><b>sumIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => number</b></code><br/><br/>
-Based on criteria, gets the aggregate sum of a numeric column. The condition can be based on any column.
-* ``sumIf(state == 'CA' && commission < 10000, sales)``
-* ``sumIf(true, sales)``
-___
--
-<a name="tan" ></a>
-
-### <code>tan</code>
-<code><b>tan(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates a tangent value.
-* ``tan(0) -> 0.0``
-___
--
-<a name="tanh" ></a>
-
-### <code>tanh</code>
-<code><b>tanh(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Calculates a hyperbolic tangent value.
-* ``tanh(0) -> 0.0``
-___
--
-<a name="toBase64" ></a>
-
-### <code>toBase64</code>
-<code><b>toBase64(<i>&lt;value1&gt;</i> : string, [<i>&lt;encoding type&gt;</i> : string]) => string</b></code><br/><br/>
-Encodes the given string in base64. You can optionally pass the encoding type.
-* ``toBase64('bojjus') -> 'Ym9qanVz'``
-* ``toBase64('± 25000, € 5.000,- |', 'Windows-1252') -> 'sSAyNTAwMCwggCA1LjAwMCwtIHw='``
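Base64 encoding of a string is the encoding of its bytes in a given character set. A minimal Python sketch of the default case (assuming UTF-8 when no encoding type is passed):

```python
import base64

def to_base64(s, encoding='utf-8'):
    # Encode the string to bytes in the given character set,
    # then base64-encode those bytes.
    return base64.b64encode(s.encode(encoding)).decode('ascii')

print(to_base64('bojjus'))  # Ym9qanVz
```

Passing a different encoding (such as 'Windows-1252') changes the bytes, and therefore the base64 output, for non-ASCII characters.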
-
-___
-
-<a name="toBinary" ></a>
-
-### <code>toBinary</code>
-<code><b>toBinary(<i>&lt;value1&gt;</i> : any) => binary</b></code><br/><br/>
-Converts any numeric/date/timestamp/string to binary representation.
-* ``toBinary(3) -> [0x11]``
-___
--
-<a name="toBoolean" ></a>
-
-### <code>toBoolean</code>
-<code><b>toBoolean(<i>&lt;value1&gt;</i> : string) => boolean</b></code><br/><br/>
-Converts a value of ('t', 'true', 'y', 'yes', '1') to true, a value of ('f', 'false', 'n', 'no', '0') to false, and any other value to NULL.
-* ``toBoolean('true') -> true``
-* ``toBoolean('n') -> false``
-* ``isNull(toBoolean('truthy')) -> true``
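A hedged Python sketch of this three-way mapping (the case-insensitive comparison is an assumption; the documentation lists only the lowercase forms):

```python
TRUE_VALUES = {'t', 'true', 'y', 'yes', '1'}
FALSE_VALUES = {'f', 'false', 'n', 'no', '0'}

def to_boolean(s):
    v = s.lower()  # assumption: matching is case-insensitive
    if v in TRUE_VALUES:
        return True
    if v in FALSE_VALUES:
        return False
    return None  # NULL for any unrecognized value

print(to_boolean('true'))    # True
print(to_boolean('n'))       # False
print(to_boolean('truthy'))  # None
```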
-___
--
-<a name="toByte" ></a>
-
-### <code>toByte</code>
-<code><b>toByte(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => byte</b></code><br/><br/>
-Converts any numeric or string to a byte value. An optional Java decimal format can be used for the conversion.
-* ``toByte(123)``
-* ``123``
-* ``toByte(0xFF)``
-* ``-1``
-* ``toByte('123')``
-* ``123``
-___
--
-<a name="toDate" ></a>
-
-### <code>toDate</code>
-<code><b>toDate(<i>&lt;string&gt;</i> : any, [<i>&lt;date format&gt;</i> : string]) => date</b></code><br/><br/>
-Converts an input date string to a date using an optional input date format. Refer to Java's `SimpleDateFormat` class for available formats. If the input date format is omitted, the default format is yyyy-[M]M-[d]d. Accepted formats are: [ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ].
-* ``toDate('2012-8-18') -> toDate('2012-08-18')``
-* ``toDate('12/18/2012', 'MM/dd/yyyy') -> toDate('2012-12-18')``
-___
--
-<a name="toDecimal" ></a>
-
-### <code>toDecimal</code>
-<code><b>toDecimal(<i>&lt;value&gt;</i> : any, [<i>&lt;precision&gt;</i> : integral], [<i>&lt;scale&gt;</i> : integral], [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => decimal(10,0)</b></code><br/><br/>
-Converts any numeric or string to a decimal value. If precision and scale are not specified, it is defaulted to (10,2). An optional Java decimal format can be used for the conversion. An optional locale can be passed in the form of a BCP47 language tag like en-US, de, or zh-CN.
-* ``toDecimal(123.45) -> 123.45``
-* ``toDecimal('123.45', 8, 4) -> 123.4500``
-* ``toDecimal('$123.45', 8, 4,'$###.00') -> 123.4500``
-* ``toDecimal('€123,45', 10, 2, '€###,##', 'de') -> 123.45``
-___
--
-<a name="toDouble" ></a>
-
-### <code>toDouble</code>
-<code><b>toDouble(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => double</b></code><br/><br/>
-Converts any numeric or string to a double value. An optional Java decimal format can be used for the conversion. An optional locale can be passed in the form of a BCP47 language tag like en-US, de, or zh-CN.
-* ``toDouble(123.45) -> 123.45``
-* ``toDouble('123.45') -> 123.45``
-* ``toDouble('$123.45', '$###.00') -> 123.45``
-* ``toDouble('€123,45', '€###,##', 'de') -> 123.45``
-___
--
-<a name="toFloat" ></a>
-
-### <code>toFloat</code>
-<code><b>toFloat(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => float</b></code><br/><br/>
-Converts any numeric or string to a float value. An optional Java decimal format can be used for the conversion. Truncates any double.
-* ``toFloat(123.45) -> 123.45f``
-* ``toFloat('123.45') -> 123.45f``
-* ``toFloat('$123.45', '$###.00') -> 123.45f``
-___
--
-<a name="toInteger" ></a>
-
-### <code>toInteger</code>
-<code><b>toInteger(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => integer</b></code><br/><br/>
-Converts any numeric or string to an integer value. An optional Java decimal format can be used for the conversion. Truncates any long, float, double.
-* ``toInteger(123) -> 123``
-* ``toInteger('123') -> 123``
-* ``toInteger('$123', '$###') -> 123``
-___
--
-<a name="toLong" ></a>
-
-### <code>toLong</code>
-<code><b>toLong(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => long</b></code><br/><br/>
-Converts any numeric or string to a long value. An optional Java decimal format can be used for the conversion. Truncates any float, double.
-* ``toLong(123) -> 123``
-* ``toLong('123') -> 123``
-* ``toLong('$123', '$###') -> 123``
-___
--
-<a name="toShort" ></a>
-
-### <code>toShort</code>
-<code><b>toShort(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => short</b></code><br/><br/>
-Converts any numeric or string to a short value. An optional Java decimal format can be used for the conversion. Truncates any integer, long, float, double.
-* ``toShort(123) -> 123``
-* ``toShort('123') -> 123``
-* ``toShort('$123', '$###') -> 123``
-___
--
-<a name="toString" ></a>
-
-### <code>toString</code>
-<code><b>toString(<i>&lt;value&gt;</i> : any, [<i>&lt;number format/date format&gt;</i> : string], [<i>&lt;date locale&gt;</i> : string]) => string</b></code><br/><br/>
-Converts a primitive datatype to a string. For numbers and dates, a format can be specified. If unspecified, the system default is picked. Java decimal format is used for numbers. Refer to Java SimpleDateFormat for all possible date formats; the default format is yyyy-MM-dd. For a date or timestamp, a locale can be optionally specified.
-* ``toString(10) -> '10'``
-* ``toString('engineer') -> 'engineer'``
-* ``toString(123456.789, '##,###.##') -> '123,456.79'``
-* ``toString(123.78, '000000.000') -> '000123.780'``
-* ``toString(12345, '##0.#####E0') -> '12.345E3'``
-* ``toString(toDate('2018-12-31')) -> '2018-12-31'``
-* ``isNull(toString(toDate('2018-12-31', 'MM/dd/yy'))) -> true``
-* ``toString(4 == 20) -> 'false'``
-* ``toString(toDate('12/31/18', 'MM/dd/yy', 'es-ES'), 'MM/dd/yy', 'de-DE')``
- ___
-
-<a name="toTimestamp" ></a>
-
-### <code>toTimestamp</code>
-<code><b>toTimestamp(<i>&lt;string&gt;</i> : any, [<i>&lt;timestamp format&gt;</i> : string], [<i>&lt;time zone&gt;</i> : string]) => timestamp</b></code><br/><br/>
-Converts a string to a timestamp given an optional timestamp format. If the timestamp format is omitted, the default pattern yyyy-[M]M-[d]d hh:mm:ss[.f...] is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamp supports up to millisecond accuracy with a value of 999. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
-* ``toTimestamp('2016-12-31 00:12:00') -> toTimestamp('2016-12-31 00:12:00')``
-* ``toTimestamp('2016-12-31T00:12:00', 'yyyy-MM-dd\'T\'HH:mm:ss', 'PST') -> toTimestamp('2016-12-31 00:12:00')``
-* ``toTimestamp('12/31/2016T00:12:00', 'MM/dd/yyyy\'T\'HH:mm:ss') -> toTimestamp('2016-12-31 00:12:00')``
-* ``millisecond(toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS')) -> 871``
-___
--
-<a name="toUTC" ></a>
-
-### <code>toUTC</code>
-<code><b>toUTC(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => timestamp</b></code><br/><br/>
-Converts the timestamp to UTC. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It is defaulted to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats: https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
-* ``toUTC(currentTimestamp()) == toTimestamp('2050-12-12 19:18:12') -> false``
-* ``toUTC(currentTimestamp(), 'Asia/Seoul') != toTimestamp('2050-12-12 19:18:12') -> true``
---
-<a name="translate" ></a>
-
-### <code>translate</code>
-<code><b>translate(<i>&lt;string to translate&gt;</i> : string, <i>&lt;lookup characters&gt;</i> : string, <i>&lt;replace characters&gt;</i> : string) => string</b></code><br/><br/>
-Replaces one set of characters with another set of characters in the string. Characters have a one-to-one replacement.
-* ``translate('(bojjus)', '()', '[]') -> '[bojjus]'``
-* ``translate('(gunchus)', '()', '[') -> '[gunchus'``
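The one-to-one replacement above, including the second example where a lookup character without a counterpart is dropped, can be sketched with Python's `str.translate` (a hypothetical analogue):

```python
def translate(s, lookup, replace):
    # Build a 1-to-1 character mapping; lookup characters with no
    # corresponding replacement character are deleted (mapped to None).
    table = {}
    for i, ch in enumerate(lookup):
        table[ord(ch)] = replace[i] if i < len(replace) else None
    return s.translate(table)

print(translate('(bojjus)', '()', '[]'))  # [bojjus]
print(translate('(gunchus)', '()', '['))  # [gunchus
```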
-___
--
-<a name="trim" ></a>
-
-### <code>trim</code>
-<code><b>trim(<i>&lt;string to trim&gt;</i> : string, [<i>&lt;trim characters&gt;</i> : string]) => string</b></code><br/><br/>
-Trims a string of leading and trailing characters. If the second parameter is unspecified, it trims whitespace. Otherwise, it trims any of the characters specified in the second parameter.
-* ``trim(' dumbo ') -> 'dumbo'``
-* ``trim('!--!du!mbo!', '-!') -> 'du!mbo'``
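This behavior maps directly onto Python's `str.strip`, which also treats the second argument as a set of characters rather than a literal prefix/suffix (a hypothetical analogue for illustration):

```python
def trim(s, chars=None):
    # chars=None trims whitespace; otherwise strips any of the
    # given characters from both ends (a character set, not a substring).
    return s.strip(chars)

print(trim(' dumbo '))            # dumbo
print(trim('!--!du!mbo!', '-!'))  # du!mbo
```

Note that the interior '!' in 'du!mbo' survives: only leading and trailing characters are removed.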
-___
--
-<a name="true" ></a>
-
-### <code>true</code>
-<code><b>true() => boolean</b></code><br/><br/>
-Always returns a true value. Use the function syntax `true()` if there is a column named 'true'.
-* ``(10 + 20 == 30) -> true``
-* ``(10 + 20 == 30) -> true()``
-___
--
-<a name="typeMatch" ></a>
-
-### <code>typeMatch</code>
-<code><b>typeMatch(<i>&lt;type&gt;</i> : string, <i>&lt;base type&gt;</i> : string) => boolean</b></code><br/><br/>
-Matches the type of the column. Can only be used in pattern expressions. `number` matches short, integer, long, double, float, or decimal; `integral` matches short, integer, or long; `fractional` matches double, float, or decimal; and `datetime` matches date or timestamp types.
-* ``typeMatch(type, 'number')``
-* ``typeMatch('date', 'datetime')``
-___
--
-<a name="unescape" ></a>
-
-### <code>unescape</code>
-<code><b>unescape(<i>&lt;string_to_escape&gt;</i> : string, <i>&lt;format&gt;</i> : string) => string</b></code><br/><br/>
-Unescapes a string according to a format. Literal values for acceptable format are 'json', 'xml', 'ecmascript', 'html', 'java'.
-* ```unescape('{\\\\\"value\\\\\": 10}', 'json')```
-* ```'{\\\"value\\\": 10}'```
-___
--
-<a name="unfold" ></a>
-
-### <code>unfold</code>
-<code><b>unfold (<i>&lt;value1&gt;</i>: array) => any</b></code><br/><br/>
-Unfolds an array into a set of rows and repeats the values for the remaining columns in every row.
-* ``unfold(addresses) => any``
-* ``unfold( @(name = salesPerson, sales = salesAmount) ) => any``
-___
--
-<a name="unhex" ></a>
-
-### <code>unhex</code>
-<code><b>unhex(<i>\<value1\></i>: string) => binary</b></code><br/><br/>
-Unhexes a binary value from its string representation. This can be used in conjunction with sha2 or md5 to convert from a string to a binary representation.
-* ``unhex('1fadbe') -> toBinary([toByte(0x1f), toByte(0xad), toByte(0xbe)])``
-* ``unhex(md5(5, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4'))) -> toBinary([toByte(0x4c),toByte(0xe8),toByte(0xa8),toByte(0x80),toByte(0xbd),toByte(0x62),toByte(0x1a),toByte(0x1f),toByte(0xfa),toByte(0xd0),toByte(0xbc),toByte(0xa9),toByte(0x05),toByte(0xe1),toByte(0xbc),toByte(0x5a)])``
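The first example above is plain hex decoding, which Python's `bytes.fromhex` illustrates (a hypothetical analogue; the md5 example is not reproduced here because it depends on ADF's row serialization):

```python
def unhex(s):
    # Convert a hex string to its binary (bytes) representation.
    return bytes.fromhex(s)

print(unhex('1fadbe'))  # b'\x1f\xad\xbe'
```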
---
-<a name="union" ></a>
-
-### <code>union</code>
-<code><b>union(<i>&lt;value1&gt;</i>: array, <i>&lt;value2&gt;</i> : array) => array</b></code><br/><br/>
-Returns a union set of distinct items from 2 arrays.
-* ``union([10, 20, 30], [20, 40]) => [10, 20, 30, 40]``
-___
-
--
-<a name="upper" ></a>
-
-### <code>upper</code>
-<code><b>upper(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
-Uppercases a string.
-* ``upper('bojjus') -> 'BOJJUS'``
-___
--
-<a name="uuid" ></a>
-
-### <code>uuid</code>
-<code><b>uuid() => string</b></code><br/><br/>
-Returns the generated UUID.
-* ``uuid()``
-___
--
-<a name="variance" ></a>
-
-### <code>variance</code>
-<code><b>variance(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Gets the variance of a column.
-* ``variance(sales)``
-___
--
-<a name="varianceIf" ></a>
-
-### <code>varianceIf</code>
-<code><b>varianceIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Based on a criteria, gets the variance of a column.
-* ``varianceIf(region == 'West', sales)``
-___
--
-<a name="variancePopulation" ></a>
-
-### <code>variancePopulation</code>
-<code><b>variancePopulation(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Gets the population variance of a column.
-* ``variancePopulation(sales)``
-___
--
-<a name="variancePopulationIf" ></a>
-
-### <code>variancePopulationIf</code>
-<code><b>variancePopulationIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Based on a criteria, gets the population variance of a column.
-* ``variancePopulationIf(region == 'West', sales)``
-___
--
-<a name="varianceSample" ></a>
-
-### <code>varianceSample</code>
-<code><b>varianceSample(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
-Gets the unbiased variance of a column.
-* ``varianceSample(sales)``
-___
--
-<a name="varianceSampleIf" ></a>
-
-### <code>varianceSampleIf</code>
-<code><b>varianceSampleIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
-Based on a criteria, gets the unbiased variance of a column.
-* ``varianceSampleIf(region == 'West', sales)``
---
-<a name="weekOfYear" ></a>
-
-### <code>weekOfYear</code>
-<code><b>weekOfYear(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
-Gets the week of the year given a date.
-* ``weekOfYear(toDate('2008-02-20')) -> 8``
-___
--
-<a name="weeks" ></a>
-
-### <code>weeks</code>
-<code><b>weeks(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
-Duration in milliseconds for number of weeks.
-* ``weeks(2) -> 1209600000L``
-___
--
-<a name="xor" ></a>
-
-### <code>xor</code>
-<code><b>xor(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : boolean) => boolean</b></code><br/><br/>
-Logical XOR operator. Same as ^ operator.
-* ``xor(true, false) -> true``
-* ``xor(true, true) -> false``
-* ``true ^ false -> true``
-___
--
-<a name="year" ></a>
-
-### <code>year</code>
-<code><b>year(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
-Gets the year value of a date.
-* ``year(toDate('2012-8-8')) -> 2012``
- ## Next steps
-[Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
+- List of all [aggregate functions](data-flow-aggregate-functions.md).
+- List of all [array functions](data-flow-array-functions.md).
+- List of all [cached lookup functions](data-flow-cached-lookup-functions.md).
+- List of all [conversion functions](data-flow-conversion-functions.md).
+- List of all [date and time functions](data-flow-date-time-functions.md).
+- List of all [map functions](data-flow-map-functions.md).
+- List of all [metafunctions](data-flow-metafunctions.md).
+- List of all [window functions](data-flow-window-functions.md).
+- [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Expressions Usage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-expressions-usage.md
+
+ Title: Details and usage for all mapping data flow functions
+
+description: Learn about details of usage and functionality for all expression functions in mapping data flow.
+ Last updated: 02/02/2022
+# Data transformation expression usage in mapping data flow
+++
+The following articles provide details about usage of all expressions and functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows. For summaries of each type of function supported, reference the following articles:
+
+- [Aggregate functions](data-flow-aggregate-functions.md)
+- [Array functions](data-flow-array-functions.md)
+- [Cached lookup functions](data-flow-cached-lookup-functions.md)
+- [Conversion functions](data-flow-conversion-functions.md)
+- [Date and time functions](data-flow-date-time-functions.md)
+- [Expression functions](data-flow-expression-functions.md)
+- [Map functions](data-flow-map-functions.md)
+- [Metafunctions](data-flow-metafunctions.md)
+- [Window functions](data-flow-window-functions.md)
+
+## Alphabetical listing of all functions
+
+Following is an alphabetical listing of all functions available in mapping data flows.
+
+## A
+
+<a name="abs" ></a>
+
+### <code>abs</code>
+<code><b>abs(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
+Absolute value of a number.
+* ``abs(-20) -> 20``
+* ``abs(10) -> 10``
+___
++
+<a name="acos" ></a>
+
+### <code>acos</code>
+<code><b>acos(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates a cosine inverse value.
+* ``acos(1) -> 0.0``
+___
++
+<a name="add" ></a>
+
+### <code>add</code>
+<code><b>add(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
+Adds a pair of strings or numbers. Adds a date to a number of days. Adds a duration to a timestamp. Appends one array of similar type to another. Same as the + operator.
+* ``add(10, 20) -> 30``
+* ``10 + 20 -> 30``
+* ``add('ice', 'cream') -> 'icecream'``
+* ``'ice' + 'cream' + ' cone' -> 'icecream cone'``
+* ``add(toDate('2012-12-12'), 3) -> toDate('2012-12-15')``
+* ``toDate('2012-12-12') + 3 -> toDate('2012-12-15')``
+* ``[10, 20] + [30, 40] -> [10, 20, 30, 40]``
+* ``toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS') + (days(1) + hours(2) - seconds(10)) -> toTimestamp('2019-02-04 07:19:18.871', 'yyyy-MM-dd HH:mm:ss.SSS')``
+___
++
+<a name="addDays" ></a>
+
+### <code>addDays</code>
+<code><b>addDays(<i>&lt;date/timestamp&gt;</i> : datetime, <i>&lt;days to add&gt;</i> : integral) => datetime</b></code><br/><br/>
+Add days to a date or timestamp. Same as the + operator for date.
+* ``addDays(toDate('2016-08-08'), 1) -> toDate('2016-08-09')``
+___
++
+<a name="addMonths" ></a>
+
+### <code>addMonths</code>
+<code><b>addMonths(<i>&lt;date/timestamp&gt;</i> : datetime, <i>&lt;months to add&gt;</i> : integral, [<i>&lt;value3&gt;</i> : string]) => datetime</b></code><br/><br/>
+Add months to a date or timestamp. You can optionally pass a timezone.
+* ``addMonths(toDate('2016-08-31'), 1) -> toDate('2016-09-30')``
+* ``addMonths(toTimestamp('2016-09-30 10:10:10'), -1) -> toTimestamp('2016-08-31 10:10:10')``
+___
++
+<a name="and" ></a>
+
+### <code>and</code>
+<code><b>and(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : boolean) => boolean</b></code><br/><br/>
+Logical AND operator. Same as &&.
+* ``and(true, false) -> false``
+* ``true && false -> false``
+___
++
+<a name="approxDistinctCount" ></a>
+
+### <code>approxDistinctCount</code>
+<code><b>approxDistinctCount(<i>&lt;value1&gt;</i> : any, [ <i>&lt;value2&gt;</i> : double ]) => long</b></code><br/><br/>
+Gets the approximate aggregate count of distinct values for a column. The optional second parameter is to control the estimation error.
+* ``approxDistinctCount(ProductID, .05) => long``
+___
++
+<a name="array" ></a>
+
+### <code>array</code>
+<code><b>array([<i>&lt;value1&gt;</i> : any], ...) => array</b></code><br/><br/>
+Creates an array of items. All items should be of the same type. If no items are specified, an empty string array is the default. Same as a [] creation operator.
+* ``array('Seattle', 'Washington')``
+* ``['Seattle', 'Washington']``
+* ``['Seattle', 'Washington'][1]``
+* ``'Washington'``
+___
+
+<a name="assertErrorMessages" ></a>
+
+### <code>assertErrorMessages</code>
+<code><b>assertErrorMessages() => map</b></code><br/><br/>
+Returns a map of all error messages for the row with assert ID as the key.
+
+Examples
+* ``assertErrorMessages() => ['assert1': 'This row failed on assert1.', 'assert2': 'This row failed on assert2.']``. In this example, ``at(assertErrorMessages(), 'assert1')`` would return 'This row failed on assert1.'
+
+___
++
+<a name="asin" ></a>
+
+### <code>asin</code>
+<code><b>asin(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates an inverse sine value.
+* ``asin(0) -> 0.0``
+___
++
+<a name="associate" ></a>
+
+### <code>associate</code>
+<code><b>associate(<i>&lt;value1&gt;</i> : map, <i>&lt;value2&gt;</i> : binaryFunction) => map</b></code><br/><br/>
+Creates a map of key/values. All the keys & values should be of the same type. If no items are specified, it's defaulted to a map of string to string type. Same as a ```[ -> ]``` creation operator. Keys and values should alternate with each other.
+* ``associate('fruit', 'apple', 'vegetable', 'carrot' )=> ['fruit' -> 'apple', 'vegetable' -> 'carrot']``
+___
++
+<a name="at" ></a>
+
+### <code>at</code>
+<code><b>at(<i>&lt;value1&gt;</i> : array/map, <i>&lt;value2&gt;</i> : integer/key type) => array</b></code><br/><br/>
+Finds the element at an array index. The index is 1-based. Out of bounds index results in a null value. Finds a value in a map given a key. If the key is not found, it returns null.
+* ``at(['apples', 'pears'], 1) => 'apples'``
+* ``at(['fruit' -> 'apples', 'vegetable' -> 'carrot'], 'fruit') => 'apples'``
+___
++
+<a name="atan" ></a>
+
+### <code>atan</code>
+<code><b>atan(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates an inverse tangent value.
+* ``atan(0) -> 0.0``
+___
++
+<a name="atan2" ></a>
+
+### <code>atan2</code>
+<code><b>atan2(<i>&lt;value1&gt;</i> : number, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Returns the angle in radians between the positive x-axis of a plane and the point given by the coordinates.
+* ``atan2(0, 0) -> 0.0``
+___
++
+<a name="avg" ></a>
+
+### <code>avg</code>
+<code><b>avg(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
+Gets the average of values of a column.
+* ``avg(sales)``
+___
++
+<a name="avgIf" ></a>
+
+### <code>avgIf</code>
+<code><b>avgIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => number</b></code><br/><br/>
+Based on a criteria gets the average of values of a column.
+* ``avgIf(region == 'West', sales)``
+___
+
+## B
+
+<a name="between" ></a>
+
+### <code>between</code>
+<code><b>between(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any, <i>&lt;value3&gt;</i> : any) => boolean</b></code><br/><br/>
+Checks if the first value is in between two other values inclusively. Numeric, string, and datetime values can be compared.
+* ``between(10, 5, 24)``
+* ``true``
+* ``between(currentDate(), currentDate() + 10, currentDate() + 20)``
+* ``false``
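The inclusive comparison above is simply a chained range check, sketched here in Python (a hypothetical analogue):

```python
def between(value, low, high):
    # Inclusive on both ends, as the documentation states.
    return low <= value <= high

print(between(10, 5, 24))  # True
print(between(30, 5, 24))  # False
```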
+___
++
+<a name="bitwiseAnd" ></a>
+
+### <code>bitwiseAnd</code>
+<code><b>bitwiseAnd(<i>&lt;value1&gt;</i> : integral, <i>&lt;value2&gt;</i> : integral) => integral</b></code><br/><br/>
+Bitwise And operator across integral types. Same as & operator
+* ``bitwiseAnd(0xf4, 0xef)``
+* ``0xe4``
+* ``(0xf4 & 0xef)``
+* ``0xe4``
+___
++
+<a name="bitwiseOr" ></a>
+
+### <code>bitwiseOr</code>
+<code><b>bitwiseOr(<i>&lt;value1&gt;</i> : integral, <i>&lt;value2&gt;</i> : integral) => integral</b></code><br/><br/>
+Bitwise Or operator across integral types. Same as | operator
+* ``bitwiseOr(0xf4, 0xef)``
+* ``0xff``
+* ``(0xf4 | 0xef)``
+* ``0xff``
+___
++
+<a name="bitwiseXor" ></a>
+
+### <code>bitwiseXor</code>
+<code><b>bitwiseXor(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
+Bitwise Xor operator across integral types. Same as ^ operator
+* ``bitwiseXor(0xf4, 0xef)``
+* ``0x1b``
+* ``(0xf4 ^ 0xef)``
+* ``0x1b``
+* ``(true ^ false)``
+* ``true``
+* ``(true ^ true)``
+* ``false``
+___
++
+<a name="blake2b" ></a>
+
+### <code>blake2b</code>
+<code><b>blake2b(<i>&lt;value1&gt;</i> : integer, <i>&lt;value2&gt;</i> : any, ...) => string</b></code><br/><br/>
+Calculates the Blake2 digest of a set of columns of varying primitive datatypes given a bit length, which can only be a multiple of 8 between 8 and 512. It can be used to calculate a fingerprint for a row
+* ``blake2b(256, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4'))``
+* ``'c9521a5080d8da30dffb430c50ce253c345cc4c4effc315dab2162dac974711d'``
+___
++
+<a name="blake2bBinary" ></a>
+
+### <code>blake2bBinary</code>
+<code><b>blake2bBinary(<i>&lt;value1&gt;</i> : integer, <i>&lt;value2&gt;</i> : any, ...) => binary</b></code><br/><br/>
+Calculates the Blake2 digest of a set of columns of varying primitive datatypes given a bit length, which can only be a multiple of 8 between 8 and 512. It can be used to calculate a fingerprint for a row
+* ``blake2bBinary(256, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4'))``
+* ``unHex('c9521a5080d8da30dffb430c50ce253c345cc4c4effc315dab2162dac974711d')``
+___
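The bit length maps to a digest size in bytes (bits / 8). A rough Python illustration using `hashlib.blake2b` follows; the `|`-joined row serialization here is an assumption for demonstration only, so the digest will not match the service's example output:

```python
import hashlib

def row_blake2b(bits, *values):
    # Illustrative sketch: join stringified column values and hash them.
    # The service's actual row serialization is unspecified here, so this
    # digest will not match blake2b() output from the expression language.
    assert bits % 8 == 0 and 8 <= bits <= 512
    data = "|".join(str(v) for v in values).encode("utf-8")
    return hashlib.blake2b(data, digest_size=bits // 8).hexdigest()

digest = row_blake2b(256, "gunchus", 8.2, "bojjus", True, "2010-04-04")
print(len(digest))  # 64 hex characters for a 256-bit digest
```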
++
+<a name="byItem" ></a>
+
+### <code>byItem</code>
+<code><b>byItem(<i>&lt;parent column&gt;</i> : any, <i>&lt;column name&gt;</i> : string) => any</b></code><br/><br/>
+Finds a subitem within a structure or array of structures. If there are multiple matches, the first match is returned. If there is no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion actions (? date, ? string, ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported, but you can use parameter substitutions.
+* ``byItem( byName('customer'), 'orderItems') ? (itemName as string, itemQty as integer)``
+* ``byItem( byItem( byName('customer'), 'orderItems'), 'itemName') ? string``
+___
++
+<a name="byName" ></a>
+
+### <code>byName</code>
+<code><b>byName(<i>&lt;column name&gt;</i> : string, [<i>&lt;stream name&gt;</i> : string]) => any</b></code><br/><br/>
+Selects a column value by name in the stream. You can pass an optional stream name as the second argument. If there are multiple matches, the first match is returned. If there is no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING, ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported, but you can use parameter substitutions.
+* ``toString(byName('parent'))``
+* ``toLong(byName('income'))``
+* ``toBoolean(byName('foster'))``
+* ``toLong(byName($debtCol))``
+* ``toString(byName('Bogus Column'))``
+* ``toString(byName('Bogus Column', 'DeriveStream'))``
+___
++
+<a name="byNames" ></a>
+
+### <code>byNames</code>
+<code><b>byNames(<i>&lt;column names&gt;</i> : array, [<i>&lt;stream name&gt;</i> : string]) => any</b></code><br/><br/>
+Select an array of columns by name in the stream. You can pass an optional stream name as the second argument. If there are multiple matches, the first match is returned. If there are no matches for a column, the entire output is a NULL value. The returned value requires a type conversion function (toDate, toString, ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported but you can use parameter substitutions.
+* ``toString(byNames(['parent', 'child']))``
+* ``byNames(['parent']) ? string``
+* ``toLong(byNames(['income']))``
+* ``byNames(['income']) ? long``
+* ``toBoolean(byNames(['foster']))``
+* ``toLong(byNames($debtCols))``
+* ``toString(byNames(['a Column']))``
+* ``toString(byNames(['a Column'], 'DeriveStream'))``
+* ``byNames(['orderItem']) ? (itemName as string, itemQty as integer)``
+___
++
+<a name="byOrigin" ></a>
+
+### <code>byOrigin</code>
+<code><b>byOrigin(<i>&lt;column name&gt;</i> : string, [<i>&lt;origin stream name&gt;</i> : string]) => any</b></code><br/><br/>
+Selects a column value by name in the origin stream. The second argument is the origin stream name. If there are multiple matches, the first match is returned. If there is no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING, ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported, but you can use parameter substitutions.
+* ``toString(byOrigin('ancestor', 'ancestorStream'))``
+___
++
+<a name="byOrigins" ></a>
+
+### <code>byOrigins</code>
+<code><b>byOrigins(<i>&lt;column names&gt;</i> : array, [<i>&lt;origin stream name&gt;</i> : string]) => any</b></code><br/><br/>
+Selects an array of columns by name in the stream. The second argument is the stream where it originated from. If there are multiple matches, the first match is returned. If there is no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING, ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported, but you can use parameter substitutions.
+* ``toString(byOrigins(['ancestor1', 'ancestor2'], 'ancestorStream'))``
+___
++
+<a name="byPath" ></a>
+
+### <code>byPath</code>
+<code><b>byPath(<i>&lt;value1&gt;</i> : string, [<i>&lt;streamName&gt;</i> : string]) => any</b></code><br/><br/>
+Finds a hierarchical path by name in the stream. You can pass an optional stream name as the second argument. If no such path is found, it returns null. Column names/paths known at design time should be addressed just by their name or dot notation path. Computed inputs aren't supported but you can use parameter substitutions.
+* ``byPath('grandpa.parent.child') => column``
+___
++
+<a name="byPosition" ></a>
+
+### <code>byPosition</code>
+<code><b>byPosition(<i>&lt;position&gt;</i> : integer) => any</b></code><br/><br/>
+Selects a column value by its relative position (1-based) in the stream. If the position is out of bounds, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING, ...). Computed inputs aren't supported, but you can use parameter substitutions.
+* ``toString(byPosition(1))``
+* ``toDecimal(byPosition(2), 10, 2)``
+* ``toBoolean(byPosition(4))``
+* ``toString(byName($colName))``
+* ``toString(byPosition(1234))``
+___
+
+## C
+
+<a name="case" ></a>
+
+### <code>case</code>
+<code><b>case(<i>&lt;condition&gt;</i> : boolean, <i>&lt;true_expression&gt;</i> : any, <i>&lt;false_expression&gt;</i> : any, ...) => any</b></code><br/><br/>
+Based on alternating conditions, applies one value or the other. If the number of inputs is even, there is no default expression, so an unmatched condition returns NULL.
+* ``case(10 + 20 == 30, 'dumbo', 'gumbo') -> 'dumbo'``
+* ``case(10 + 20 == 25, 'bojjus', 'do' < 'go', 'gunchus') -> 'gunchus'``
+* ``isNull(case(10 + 20 == 25, 'bojjus', 'do' > 'go', 'gunchus')) -> true``
+* ``case(10 + 20 == 25, 'bojjus', 'do' > 'go', 'gunchus', 'dumbo') -> 'dumbo'``
+___
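The alternating condition/value semantics can be modeled as a scan over (condition, value) pairs with an optional trailing default. The following Python sketch (a hypothetical helper, not part of any SDK) mirrors the behavior described above, including the NULL result for an even argument count:

```python
def case(*args):
    # Arguments are alternating (condition, value) pairs, optionally
    # followed by a single default. With an even argument count there is
    # no default, so an unmatched condition returns None (NULL).
    pairs, default = args, None
    if len(args) % 2 == 1:
        pairs, default = args[:-1], args[-1]
    for cond, value in zip(pairs[::2], pairs[1::2]):
        if cond:
            return value
    return default

print(case(10 + 20 == 30, 'dumbo', 'gumbo'))                  # 'dumbo'
print(case(10 + 20 == 25, 'bojjus', 'do' < 'go', 'gunchus'))  # 'gunchus'
print(case(False, 'bojjus', False, 'gunchus'))                # None (NULL)
print(case(False, 'bojjus', False, 'gunchus', 'dumbo'))       # 'dumbo'
```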
++
+<a name="cbrt" ></a>
+
+### <code>cbrt</code>
+<code><b>cbrt(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates the cube root of a number.
+* ``cbrt(8) -> 2.0``
+___
++
+<a name="ceil" ></a>
+
+### <code>ceil</code>
+<code><b>ceil(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
+Returns the smallest integer not smaller than the number.
+* ``ceil(-0.1) -> 0``
+___
++
+<a name="coalesce" ></a>
+
+### <code>coalesce</code>
+<code><b>coalesce(<i>&lt;value1&gt;</i> : any, ...) => any</b></code><br/><br/>
+Returns the first not null value from a set of inputs. All inputs should be of the same type.
+* ``coalesce(10, 20) -> 10``
+* ``coalesce(toString(null), toString(null), 'dumbo', 'bo', 'go') -> 'dumbo'``
+___
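The first-not-null behavior has a one-line Python analogue (using `None` to stand in for NULL; a sketch, not the expression language itself):

```python
def coalesce(*values):
    # Return the first non-null (non-None) input; None if all are null.
    return next((v for v in values if v is not None), None)

print(coalesce(10, 20))                           # 10
print(coalesce(None, None, 'dumbo', 'bo', 'go'))  # 'dumbo'
```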
++
+<a name="collect" ></a>
+
+### <code>collect</code>
+<code><b>collect(<i>&lt;value1&gt;</i> : any) => array</b></code><br/><br/>
+Collects all values of the expression in the aggregated group into an array. Structures can be collected and transformed to alternate structures during this process. The number of items will be equal to the number of rows in that group and can contain null values. The number of collected items should be small.
+* ``collect(salesPerson)``
+* ``collect(firstName + lastName)``
+* ``collect(@(name = salesPerson, sales = salesAmount) )``
+___
++
+<a name="columnNames" ></a>
+
+### <code>columnNames</code>
+<code><b>columnNames([<i>&lt;stream name&gt;</i> : string]) => array</b></code><br/><br/>
+Gets the names of all output columns for a stream. You can pass an optional stream name as the argument.
+* ``columnNames()``
+* ``columnNames('DeriveStream')``
+___
++
+<a name="columns" ></a>
+
+### <code>columns</code>
+<code><b>columns([<i>&lt;stream name&gt;</i> : string]) => any</b></code><br/><br/>
+Gets the values of all output columns for a stream. You can pass an optional stream name as the second argument.
+* ``columns()``
+* ``columns('DeriveStream')``
+___
++
+<a name="compare" ></a>
+
+### <code>compare</code>
+<code><b>compare(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => integer</b></code><br/><br/>
+Compares two values of the same type. Returns negative integer if value1 < value2, 0 if value1 == value2, positive value if value1 > value2.
+* ``(compare(12, 24) < 1) -> true``
+* ``(compare('dumbo', 'dum') > 0) -> true``
+___
++
+<a name="concat" ></a>
+
+### <code>concat</code>
+<code><b>concat(<i>&lt;this&gt;</i> : string, <i>&lt;that&gt;</i> : string, ...) => string</b></code><br/><br/>
+Concatenates a variable number of strings together. Same as the + operator with strings.
+* ``concat('dataflow', 'is', 'awesome') -> 'dataflowisawesome'``
+* ``'dataflow' + 'is' + 'awesome' -> 'dataflowisawesome'``
+* ``isNull('sql' + null) -> true``
+___
++
+<a name="concatWS" ></a>
+
+### <code>concatWS</code>
+<code><b>concatWS(<i>&lt;separator&gt;</i> : string, <i>&lt;this&gt;</i> : string, <i>&lt;that&gt;</i> : string, ...) => string</b></code><br/><br/>
+Concatenates a variable number of strings together with a separator. The first parameter is the separator.
+* ``concatWS(' ', 'dataflow', 'is', 'awesome') -> 'dataflow is awesome'``
+* ``isNull(concatWS(null, 'dataflow', 'is', 'awesome')) -> true``
+* ``concatWS(' is ', 'dataflow', 'awesome') -> 'dataflow is awesome'``
+___
++
+<a name="contains" ></a>
+
+### <code>contains</code>
+<code><b>contains(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => boolean</b></code><br/><br/>
+Returns true if any element in the provided array evaluates as true in the provided predicate. Contains expects a reference to one element in the predicate function as #item.
+* ``contains([1, 2, 3, 4], #item == 3) -> true``
+* ``contains([1, 2, 3, 4], #item > 5) -> false``
+___
++
+<a name="cos" ></a>
+
+### <code>cos</code>
+<code><b>cos(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates a cosine value.
+* ``cos(10) -> -0.8390715290764524``
+___
++
+<a name="cosh" ></a>
+
+### <code>cosh</code>
+<code><b>cosh(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates a hyperbolic cosine of a value.
+* ``cosh(0) -> 1.0``
+___
++
+<a name="count" ></a>
+
+### <code>count</code>
+<code><b>count([<i>&lt;value1&gt;</i> : any]) => long</b></code><br/><br/>
+Gets the aggregate count of values. If the optional column(s) is specified, it ignores NULL values in the count.
+* ``count(custId)``
+* ``count(custId, custName)``
+* ``count()``
+* ``count(iif(isNull(custId), 1, NULL))``
+___
+
+<a name="countAll" ></a>
+
+### <code>countAll</code>
+<code><b>countAll([<i>&lt;value1&gt;</i> : any]) => long</b></code><br/><br/>
+Gets the aggregate count of values including nulls.
+* ``countAll(custId)``
+* ``countAll()``
+
+___
++
+<a name="countDistinct" ></a>
+
+### <code>countDistinct</code>
+<code><b>countDistinct(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : any], ...) => long</b></code><br/><br/>
+Gets the aggregate count of distinct values of a set of columns.
+* ``countDistinct(custId, custName)``
+___
++
+<a name="countAllDistinct" ></a>
+
+### <code>countAllDistinct</code>
+<code><b>countAllDistinct(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : any], ...) => long</b></code><br/><br/>
+Gets the aggregate count of distinct values of a set of columns including nulls.
+* ``countAllDistinct(custId, custName)``
+___
++
+<a name="countIf" ></a>
+
+### <code>countIf</code>
+<code><b>countIf(<i>&lt;value1&gt;</i> : boolean, [<i>&lt;value2&gt;</i> : any]) => long</b></code><br/><br/>
+Based on a criterion, gets the aggregate count of values. If the optional column is specified, it ignores NULL values in the count.
+* ``countIf(state == 'CA' && commission < 10000, name)``
+___
++
+<a name="covariancePopulation" ></a>
+
+### <code>covariancePopulation</code>
+<code><b>covariancePopulation(<i>&lt;value1&gt;</i> : number, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Gets the population covariance between two columns.
+* ``covariancePopulation(sales, profit)``
+___
++
+<a name="covariancePopulationIf" ></a>
+
+### <code>covariancePopulationIf</code>
+<code><b>covariancePopulationIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number, <i>&lt;value3&gt;</i> : number) => double</b></code><br/><br/>
+Based on a criterion, gets the population covariance of two columns.
+* ``covariancePopulationIf(region == 'West', sales, profit)``
+___
++
+<a name="covarianceSample" ></a>
+
+### <code>covarianceSample</code>
+<code><b>covarianceSample(<i>&lt;value1&gt;</i> : number, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Gets the sample covariance of two columns.
+* ``covarianceSample(sales, profit)``
+___
++
+<a name="covarianceSampleIf" ></a>
+
+### <code>covarianceSampleIf</code>
+<code><b>covarianceSampleIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number, <i>&lt;value3&gt;</i> : number) => double</b></code><br/><br/>
+Based on a criterion, gets the sample covariance of two columns.
+* ``covarianceSampleIf(region == 'West', sales, profit)``
+___
++
+<a name="crc32" ></a>
+
+### <code>crc32</code>
+<code><b>crc32(<i>&lt;value1&gt;</i> : any, ...) => long</b></code><br/><br/>
+Calculates the CRC32 hash of a set of columns of varying primitive datatypes given a bit length, which can only be of values 0(256), 224, 256, 384, 512. It can be used to calculate a fingerprint for a row.
+* ``crc32(256, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4')) -> 3630253689L``
+___
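Python's standard library exposes the same CRC-32 checksum via `zlib`. As with the Blake2 sketch earlier, the `|`-joined row serialization below is an assumption for illustration, so the result won't match the service's example value:

```python
import zlib

def row_crc32(*values):
    # Illustrative: checksum a '|'-joined stringification of the row
    # values. The service's own row serialization is unspecified here.
    data = "|".join(str(v) for v in values).encode("utf-8")
    return zlib.crc32(data)

print(row_crc32('gunchus', 8.2, 'bojjus', True, '2010-04-04'))
```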
++
+<a name="cumeDist" ></a>
+
+### <code>cumeDist</code>
+<code><b>cumeDist() => integer</b></code><br/><br/>
+The CumeDist function computes the position of a value relative to all values in the partition. The result is the number of rows preceding or equal to the current row in the ordering of the partition divided by the total number of rows in the window partition. Any tie values in the ordering will evaluate to the same position.
+* ``cumeDist()``
+___
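The "rows preceding or equal, divided by total rows" definition can be checked with a few lines of Python over one sorted partition (a sketch of the formula, not the window function itself):

```python
def cume_dist(values):
    # Fraction of rows ordered before-or-equal to each row's value.
    # Tie values share the same position, as described above.
    n = len(values)
    ordered = sorted(values)
    return [sum(1 for w in ordered if w <= v) / n for v in ordered]

print(cume_dist([30, 10, 20, 20]))  # [0.25, 0.75, 0.75, 1.0]
```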
++
+<a name="currentDate" ></a>
+
+### <code>currentDate</code>
+<code><b>currentDate([<i>&lt;value1&gt;</i> : string]) => date</b></code><br/><br/>
+Gets the current date when this job starts to run. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. [https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html).
+* ``currentDate() == toDate('2250-12-31') -> false``
+* ``currentDate('PST') == toDate('2250-12-31') -> false``
+* ``currentDate('America/New_York') == toDate('2250-12-31') -> false``
+___
++
+<a name="currentTimestamp" ></a>
+
+### <code>currentTimestamp</code>
+<code><b>currentTimestamp() => timestamp</b></code><br/><br/>
+Gets the current timestamp when the job starts to run with local time zone.
+* ``currentTimestamp() == toTimestamp('2250-12-31 12:12:12') -> false``
+___
++
+<a name="currentUTC" ></a>
+
+### <code>currentUTC</code>
+<code><b>currentUTC([<i>&lt;value1&gt;</i> : string]) => timestamp</b></code><br/><br/>
+Gets the current timestamp as UTC. If you want your current time to be interpreted in a different timezone than your cluster time zone, you can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It's defaulted to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats. [https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html](https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html). To convert the UTC time to a different timezone use `fromUTC()`.
+* ``currentUTC() == toTimestamp('2050-12-12 19:18:12') -> false``
+* ``currentUTC() != toTimestamp('2050-12-12 19:18:12') -> true``
+* ``fromUTC(currentUTC(), 'Asia/Seoul') != toTimestamp('2050-12-12 19:18:12') -> true``
+___
+
+## D
+
+<a name="dayOfMonth" ></a>
+
+### <code>dayOfMonth</code>
+<code><b>dayOfMonth(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
+Gets the day of the month given a date.
+* ``dayOfMonth(toDate('2018-06-08')) -> 8``
+___
++
+<a name="dayOfWeek" ></a>
+
+### <code>dayOfWeek</code>
+<code><b>dayOfWeek(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
+Gets the day of the week given a date. 1 - Sunday, 2 - Monday ..., 7 - Saturday.
+* ``dayOfWeek(toDate('2018-06-08')) -> 6``
+___
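Note that the 1 = Sunday convention differs from Python's `datetime`, where Monday is 0. A small adapter (a sketch, not service code) shows the mapping:

```python
from datetime import date

def day_of_week(d):
    # Map Python's isoweekday (Mon=1..Sun=7) to the 1=Sunday..7=Saturday
    # convention used by dayOfWeek() above.
    return d.isoweekday() % 7 + 1

print(day_of_week(date(2018, 6, 8)))   # Friday -> 6
print(day_of_week(date(2018, 6, 10)))  # Sunday -> 1
```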
++
+<a name="dayOfYear" ></a>
+
+### <code>dayOfYear</code>
+<code><b>dayOfYear(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
+Gets the day of the year given a date.
+* ``dayOfYear(toDate('2016-04-09')) -> 100``
+___
++
+<a name="days" ></a>
+
+### <code>days</code>
+<code><b>days(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
+Duration in milliseconds for number of days.
+* ``days(2) -> 172800000L``
+___
++
+<a name="degrees" ></a>
+
+### <code>degrees</code>
+<code><b>degrees(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Converts radians to degrees.
+* ``degrees(3.141592653589793) -> 180``
+___
++
+<a name="denseRank" ></a>
+
+### <code>denseRank</code>
+<code><b>denseRank() => integer</b></code><br/><br/>
+Computes the rank of a value in a group of values specified in a window's order by clause. The result is one plus the number of rows preceding or equal to the current row in the ordering of the partition. The values won't produce gaps in the sequence. Dense Rank works even when data isn't sorted and looks for change in values.
+* ``denseRank()``
+___
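Dense rank assigns consecutive ranks with no gaps after ties. The definition can be sketched in Python over a single ordered partition:

```python
def dense_rank(values):
    # Rank each value in the partition's ordering; ties share a rank and
    # the next distinct value gets the next consecutive rank (no gaps).
    distinct = sorted(set(values))
    rank = {v: i + 1 for i, v in enumerate(distinct)}
    return [rank[v] for v in sorted(values)]

print(dense_rank([100, 200, 200, 300]))  # [1, 2, 2, 3]
```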
++
+<a name="distinct" ></a>
+
+### <code>distinct</code>
+<code><b>distinct(<i>&lt;value1&gt;</i> : array) => array</b></code><br/><br/>
+Returns a distinct set of items from an array.
+* ``distinct([10, 20, 30, 10]) => [10, 20, 30]``
+___
++
+<a name="divide" ></a>
+
+### <code>divide</code>
+<code><b>divide(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
+Divides pair of numbers. Same as the `/` operator.
+* ``divide(20, 10) -> 2``
+* ``20 / 10 -> 2``
+___
++
+<a name="dropLeft" ></a>
+
+### <code>dropLeft</code>
+<code><b>dropLeft(<i>&lt;value1&gt;</i> : string, <i>&lt;value2&gt;</i> : integer) => string</b></code><br/><br/>
+Removes the given number of characters from the left of the string. If the requested drop exceeds the length of the string, an empty string is returned.
+* ``dropLeft('bojjus', 2) => 'jjus'``
+* ``dropLeft('cake', 10) => ''``
+___
++
+<a name="dropRight" ></a>
+
+### <code>dropRight</code>
+<code><b>dropRight(<i>&lt;value1&gt;</i> : string, <i>&lt;value2&gt;</i> : integer) => string</b></code><br/><br/>
+Removes the given number of characters from the right of the string. If the requested drop exceeds the length of the string, an empty string is returned.
+* ``dropRight('bojjus', 2) => 'bojj'``
+* ``dropRight('cake', 10) => ''``
+___
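Both drop functions are plain substring operations. Python slicing gives the same results, including the empty-string case:

```python
def drop_left(s, n):
    # Remove n characters from the left; slicing past the end yields ''.
    return s[n:]

def drop_right(s, n):
    # Remove n characters from the right.
    return s[:-n] if n > 0 else s

print(drop_left('bojjus', 2))   # 'jjus'
print(drop_right('bojjus', 2))  # 'bojj'
print(drop_left('cake', 10))    # ''
print(drop_right('cake', 10))   # ''
```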
+
+## E
+
+<a name="endsWith" ></a>
+
+### <code>endsWith</code>
+<code><b>endsWith(<i>&lt;string&gt;</i> : string, <i>&lt;substring to check&gt;</i> : string) => boolean</b></code><br/><br/>
+Checks if the string ends with the supplied string.
+* ``endsWith('dumbo', 'mbo') -> true``
+___
++
+<a name="equals" ></a>
+
+### <code>equals</code>
+<code><b>equals(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
+Comparison equals operator. Same as == operator.
+* ``equals(12, 24) -> false``
+* ``12 == 24 -> false``
+* ``'bad' == 'bad' -> true``
+* ``isNull('good' == toString(null)) -> true``
+* ``isNull(null == null) -> true``
+___
++
+<a name="equalsIgnoreCase" ></a>
+
+### <code>equalsIgnoreCase</code>
+<code><b>equalsIgnoreCase(<i>&lt;value1&gt;</i> : string, <i>&lt;value2&gt;</i> : string) => boolean</b></code><br/><br/>
+Comparison equals operator ignoring case. Same as <=> operator.
+* ``'abc'<=>'Abc' -> true``
+* ``equalsIgnoreCase('abc', 'Abc') -> true``
+___
++
+<a name="escape" ></a>
+
+### <code>escape</code>
+<code><b>escape(<i>&lt;string_to_escape&gt;</i> : string, <i>&lt;format&gt;</i> : string) => string</b></code><br/><br/>
+Escapes a string according to a format. Literal values for acceptable format are 'json', 'xml', 'ecmascript', 'html', 'java'.
+___
++
+<a name="except" ></a>
+
+### <code>except</code>
+<code><b>except(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : array) => array</b></code><br/><br/>
+Returns a difference set of one array from another dropping duplicates.
+* ``except([10, 20, 30], [20, 40]) => [10, 30]``
+___
++
+<a name="expr" ></a>
+
+### <code>expr</code>
+<code><b>expr(<i>&lt;expr&gt;</i> : string) => any</b></code><br/><br/>
+Results in an expression from a string. This is the same as writing this expression in a non-literal form. This can be used to pass parameters as string representations.
+* ``expr('price * discount') => any``
+___
+
+## F
+
+<a name="factorial" ></a>
+
+### <code>factorial</code>
+<code><b>factorial(<i>&lt;value1&gt;</i> : number) => long</b></code><br/><br/>
+Calculates the factorial of a number.
+* ``factorial(5) -> 120``
+___
++
+<a name="false" ></a>
+
+### <code>false</code>
+<code><b>false() => boolean</b></code><br/><br/>
+Always returns a false value. Use the function syntax `false()` if there's a column named 'false'.
+* ``(10 + 20 > 30) -> false``
+* ``(10 + 20 > 30) -> false()``
+___
++
+<a name="filter" ></a>
+
+### <code>filter</code>
+<code><b>filter(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => array</b></code><br/><br/>
+Filters elements out of the array that don't meet the provided predicate. Filter expects a reference to one element in the predicate function as #item.
+* ``filter([1, 2, 3, 4], #item > 2) -> [3, 4]``
+* ``filter(['a', 'b', 'c', 'd'], #item == 'a' || #item == 'b') -> ['a', 'b']``
+___
++
+<a name="find" ></a>
+
+### <code>find</code>
+<code><b>find(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => any</b></code><br/><br/>
+Find the first item from an array that matches the condition. It takes a filter function where you can address the item in the array as #item. For deeply nested maps you can refer to the parent maps using the #item_n(#item_1, #item_2...) notation.
+* ``find([10, 20, 30], #item > 10) -> 20``
+* ``find(['azure', 'data', 'factory'], length(#item) > 4) -> 'azure'``
+* ``find([
+ @(
+ name = 'Daniel',
+ types = [
+ @(mood = 'jovial', behavior = 'terrific'),
+ @(mood = 'grumpy', behavior = 'bad')
+ ]
+ ),
+ @(
+ name = 'Mark',
+ types = [
+ @(mood = 'happy', behavior = 'awesome'),
+ @(mood = 'calm', behavior = 'reclusive')
+ ]
+ )
+ ],
+ contains(#item.types, #item.mood=='happy') /*Filter out the happy kid*/
+ )``
+* ``
+ @(
+ name = 'Mark',
+ types = [
+ @(mood = 'happy', behavior = 'awesome'),
+ @(mood = 'calm', behavior = 'reclusive')
+ ]
+ )
+ ``
+___
++
+<a name="first" ></a>
+
+### <code>first</code>
+<code><b>first(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : boolean]) => any</b></code><br/><br/>
+Gets the first value of a column group. If the second parameter ignoreNulls is omitted, it's assumed false.
+* ``first(sales)``
+* ``first(sales, false)``
+___
++
+<a name="flatten" ></a>
+
+### <code>flatten</code>
+<code><b>flatten(<i>&lt;array&gt;</i> : array, <i>&lt;value2&gt;</i> : array ..., <i>&lt;value2&gt;</i> : boolean) => array</b></code><br/><br/>
+Flattens an array or arrays into a single array. Arrays of atomic items are returned unaltered. The last argument is optional; it defaults to false, and when set to true the function flattens recursively more than one level deep.
+* ``flatten([['bojjus', 'girl'], ['gunchus', 'boy']]) => ['bojjus', 'girl', 'gunchus', 'boy']``
+* ``flatten([[['bojjus', 'gunchus']]] , true) => ['bojjus', 'gunchus']``
+___
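The one-level versus recursive behavior can be sketched in Python (a demonstration of the semantics, not the service implementation):

```python
def flatten(arr, deep=False):
    # Flatten one level by default; with deep=True, recurse through
    # nested lists, leaving atomic items unaltered.
    out = []
    for item in arr:
        if isinstance(item, list):
            out.extend(flatten(item, deep) if deep else item)
        else:
            out.append(item)
    return out

print(flatten([['bojjus', 'girl'], ['gunchus', 'boy']]))
# ['bojjus', 'girl', 'gunchus', 'boy']
print(flatten([[['bojjus', 'gunchus']]], True))
# ['bojjus', 'gunchus']
```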
++
+<a name="floor" ></a>
+
+### <code>floor</code>
+<code><b>floor(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
+Returns the largest integer not greater than the number.
+* ``floor(-0.1) -> -1``
+___
++
+<a name="fromBase64" ></a>
+
+### <code>fromBase64</code>
+<code><b>fromBase64(<i>&lt;value1&gt;</i> : string, <i>&lt;encoding type&gt;</i> : string) => string</b></code><br/><br/>
+Decodes the given base64-encoded string. You can optionally pass the encoding type.
+* ``fromBase64('Z3VuY2h1cw==') -> 'gunchus'``
+* ``fromBase64('SGVsbG8gV29ybGQ=', 'Windows-1252') -> 'Hello World'``
+___
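The decode-then-reinterpret behavior matches Python's `base64` module combined with a text encoding, as this sketch shows:

```python
import base64

def from_base64(s, encoding='utf-8'):
    # Decode base64, then interpret the bytes with the given text encoding.
    return base64.b64decode(s).decode(encoding)

print(from_base64('Z3VuY2h1cw=='))                      # 'gunchus'
print(from_base64('SGVsbG8gV29ybGQ=', 'windows-1252'))  # 'Hello World'
```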
++
+<a name="fromUTC" ></a>
+
+### <code>fromUTC</code>
+<code><b>fromUTC(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => timestamp</b></code><br/><br/>
+Converts to the timestamp from UTC. You can optionally pass the timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It's defaulted to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
+* ``fromUTC(currentTimestamp()) == toTimestamp('2050-12-12 19:18:12') -> false``
+* ``fromUTC(currentTimestamp(), 'Asia/Seoul') != toTimestamp('2050-12-12 19:18:12') -> true``
+___
+
+## G
+
+<a name="greater" ></a>
+
+### <code>greater</code>
+<code><b>greater(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
+Comparison greater operator. Same as > operator.
+* ``greater(12, 24) -> false``
+* ``('dumbo' > 'dum') -> true``
+* ``(toTimestamp('2019-02-05 08:21:34.890', 'yyyy-MM-dd HH:mm:ss.SSS') > toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS')) -> true``
+___
++
+<a name="greaterOrEqual" ></a>
+
+### <code>greaterOrEqual</code>
+<code><b>greaterOrEqual(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
+Comparison greater than or equal operator. Same as >= operator.
+* ``greaterOrEqual(12, 12) -> true``
+* ``('dumbo' >= 'dum') -> true``
+___
++
+<a name="greatest" ></a>
+
+### <code>greatest</code>
+<code><b>greatest(<i>&lt;value1&gt;</i> : any, ...) => any</b></code><br/><br/>
+Returns the greatest value among the list of values as input skipping null values. Returns null if all inputs are null.
+* ``greatest(10, 30, 15, 20) -> 30``
+* ``greatest(10, toInteger(null), 20) -> 20``
+* ``greatest(toDate('2010-12-12'), toDate('2011-12-12'), toDate('2000-12-12')) -> toDate('2011-12-12')``
+* ``greatest(toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS'), toTimestamp('2019-02-05 08:21:34.890', 'yyyy-MM-dd HH:mm:ss.SSS')) -> toTimestamp('2019-02-05 08:21:34.890', 'yyyy-MM-dd HH:mm:ss.SSS')``
+___
+
+## H
+
+<a name="hasColumn" ></a>
+
+### <code>hasColumn</code>
+<code><b>hasColumn(<i>&lt;column name&gt;</i> : string, [<i>&lt;stream name&gt;</i> : string]) => boolean</b></code><br/><br/>
+Checks for a column value by name in the stream. You can pass an optional stream name as the second argument. Column names known at design time should be addressed just by their name. Computed inputs aren't supported but you can use parameter substitutions.
+* ``hasColumn('parent')``
+___
++
+<a name="hasError" ></a>
+
+### <code>hasError</code>
+<code><b>hasError([<i>&lt;value1&gt;</i> : string]) => boolean</b></code><br/><br/>
+Checks if the assert with provided ID is marked as error.
+
+Examples
+* ``hasError('assert1')``
+* ``hasError('assert2')``
+
+___
+
+<a name="hasPath" ></a>
+
+### <code>hasPath</code>
+<code><b>hasPath(<i>&lt;value1&gt;</i> : string, [<i>&lt;streamName&gt;</i> : string]) => boolean</b></code><br/><br/>
+Checks if a certain hierarchical path exists by name in the stream. You can pass an optional stream name as the second argument. Column names/paths known at design time should be addressed just by their name or dot notation path. Computed inputs aren't supported but you can use parameter substitutions.
+* ``hasPath('grandpa.parent.child') => boolean``
+___
++
+<a name="hex" ></a>
+
+### <code>hex</code>
+<code><b>hex(<i>\<value1\></i>: binary) => string</b></code><br/><br/>
+Returns a hex string representation of a binary value
+* ``hex(toBinary([toByte(0x1f), toByte(0xad), toByte(0xbe)])) -> '1fadbe'``
+___
++
+<a name="hour" ></a>
+
+### <code>hour</code>
+<code><b>hour(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => integer</b></code><br/><br/>
+Gets the hour value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
+* ``hour(toTimestamp('2009-07-30 12:58:59')) -> 12``
+* ``hour(toTimestamp('2009-07-30 12:58:59'), 'PST') -> 12``
+___
++
+<a name="hours" ></a>
+
+### <code>hours</code>
+<code><b>hours(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
+Duration in milliseconds for number of hours.
+* ``hours(2) -> 7200000L``
+___
+
+## I
+
+<a name="iif" ></a>
+
+### <code>iif</code>
+<code><b>iif(<i>&lt;condition&gt;</i> : boolean, <i>&lt;true_expression&gt;</i> : any, [<i>&lt;false_expression&gt;</i> : any]) => any</b></code><br/><br/>
+Based on a condition, applies one value or the other. If the other is unspecified, it's considered NULL. Both values must be compatible (numeric, string, ...).
+* ``iif(10 + 20 == 30, 'dumbo', 'gumbo') -> 'dumbo'``
+* ``iif(10 > 30, 'dumbo', 'gumbo') -> 'gumbo'``
+* ``iif(month(toDate('2018-12-01')) == 12, 345.12, 102.67) -> 345.12``
+___
++
+<a name="iifNull" ></a>
+
+### <code>iifNull</code>
+<code><b>iifNull(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : any], ...) => any</b></code><br/><br/>
+Checks if the first parameter is null. If not null, the first parameter is returned. If null, the second parameter is returned. If three parameters are specified, the behavior is the same as iif(isNull(value1), value2, value3) and the third parameter is returned if the first value isn't null.
+* ``iifNull(10, 20) -> 10``
+* ``iifNull(null, 20, 40) -> 20``
+* ``iifNull('azure', 'data', 'factory') -> 'factory'``
+* ``iifNull(null, 'data', 'factory') -> 'data'``
+___
++
+<a name="in" ></a>
+
+### <code>in</code>
+<code><b>in(<i>&lt;array of items&gt;</i> : array, <i>&lt;item to find&gt;</i> : any) => boolean</b></code><br/><br/>
+Checks if an item is in the array.
+* ``in([10, 20, 30], 10) -> true``
+* ``in(['good', 'kid'], 'bad') -> false``
+___
++
+<a name="initCap" ></a>
+
+### <code>initCap</code>
+<code><b>initCap(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
+Converts the first letter of every word to uppercase. Words are identified as separated by whitespace.
+* ``initCap('cool iceCREAM') -> 'Cool Icecream'``
+___
++
+<a name="instr" ></a>
+
+### <code>instr</code>
+<code><b>instr(<i>&lt;string&gt;</i> : string, <i>&lt;substring to find&gt;</i> : string) => integer</b></code><br/><br/>
+Finds the position (1-based) of the substring within a string. 0 is returned if not found.
+* ``instr('dumbo', 'mbo') -> 3``
+* ``instr('microsoft', 'o') -> 5``
+* ``instr('good', 'bad') -> 0``
+___
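The 1-based convention with 0 for "not found" lines up neatly with Python's `str.find`, which returns -1 when the substring is absent (a sketch, not service code):

```python
def instr(s, sub):
    # str.find is 0-based and returns -1 when absent, so adding 1 yields
    # the documented 1-based position, and 0 for "not found".
    return s.find(sub) + 1

print(instr('dumbo', 'mbo'))    # 3
print(instr('microsoft', 'o'))  # 5
print(instr('good', 'bad'))     # 0
```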
++
+<a name="intersect" ></a>
+
+### <code>intersect</code>
+<code><b>intersect(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : array) => array</b></code><br/><br/>
+Returns an intersection set of distinct items from 2 arrays.
+* ``intersect([10, 20, 30], [20, 40]) => [20]``
+___
++
+<a name="isBitSet" ></a>
+
+### <code>isBitSet</code>
+<code><b>isBitSet(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : integer) => boolean</b></code><br/><br/>
+Checks if a bit position is set in this bitset.
+* ``isBitSet(toBitSet([10, 32, 98]), 10) => true``
+___
++
+<a name="isBoolean" ></a>
+
+### <code>isBoolean</code>
+<code><b>isBoolean(<i>\<value1\></i>: string) => boolean</b></code><br/><br/>
+Checks if the string value is a boolean value according to the rules of ``toBoolean()``
+* ``isBoolean('true') -> true``
+* ``isBoolean('no') -> true``
+* ``isBoolean('microsoft') -> false``
+___
++
+<a name="isByte" ></a>
+
+### <code>isByte</code>
+<code><b>isByte(<i>\<value1\></i> : string) => boolean</b></code><br/><br/>
+Checks if the string value is a byte value given an optional format according to the rules of ``toByte()``
+* ``isByte('123') -> true``
+* ``isByte('chocolate') -> false``
+___
++
+<a name="isDate" ></a>
+
+### <code>isDate</code>
+<code><b>isDate (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
+Checks if the input date string is a date using an optional input date format. Refer to Java's SimpleDateFormat for available formats. If the input date format is omitted, default format is ``yyyy-[M]M-[d]d``. Accepted formats are ``[ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ]``
+* ``isDate('2012-8-18') -> true``
+* ``isDate('12/18--234234', 'MM/dd/yyyy') -> false``
+___
++
+<a name="isDecimal" ></a>
+
+### <code>isDecimal</code>
+<code><b>isDecimal (<i>\<value1\></i> : string) => boolean</b></code><br/><br/>
+Checks if the string value is a decimal value given an optional format according to the rules of ``toDecimal()``
+* ``isDecimal('123.45') -> true``
+* ``isDecimal('12/12/2000') -> false``
+___
++
+<a name="isDelete" ></a>
+
+### <code>isDelete</code>
+<code><b>isDelete([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
+Checks if the row is marked for delete. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
+* ``isDelete()``
+* ``isDelete(1)``
+___
++
+<a name="isDistinct" ></a>
+
+### <code>isDistinct</code>
+<code><b>isDistinct(<i>&lt;value1&gt;</i> : any , <i>&lt;value1&gt;</i> : any) => boolean</b></code><br/><br/>
+Finds if a column or set of columns is distinct. It doesn't count null as a distinct value
+* ``isDistinct(custId, custName) => boolean``
+___
++
+<a name="isDouble" ></a>
+
+### <code>isDouble</code>
+<code><b>isDouble (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
+Checks if the string value is a double value given an optional format according to the rules of ``toDouble()``
+* ``isDouble('123') -> true``
+* ``isDouble('$123.45', '$###.00') -> true``
+* ``isDouble('icecream') -> false``
+___
+
+<a name="isError" ></a>
+
+### <code>isError</code>
+<code><b>isError([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
+Checks if the row is marked as error. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
+* ``isError()``
+* ``isError(1)``
+___
+
+<a name="isFloat" ></a>
+
+### <code>isFloat</code>
+<code><b>isFloat (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
+Checks if the string value is a float value given an optional format according to the rules of ``toFloat()``
+* ``isFloat('123') -> true``
+* ``isFloat('$123.45', '$###.00') -> true``
+* ``isFloat('icecream') -> false``
+___
++
+<a name="isIgnore" ></a>
+
+### <code>isIgnore</code>
+<code><b>isIgnore([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
+Checks if the row is marked to be ignored. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
+* ``isIgnore()``
+* ``isIgnore(1)``
+___
++
+<a name="isInsert" ></a>
+
+### <code>isInsert</code>
+<code><b>isInsert([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
+Checks if the row is marked for insert. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
+* ``isInsert()``
+* ``isInsert(1)``
+___
++
+<a name="isInteger" ></a>
+
+### <code>isInteger</code>
+<code><b>isInteger (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
+Checks if the string value is an integer value given an optional format according to the rules of ``toInteger()``
+* ``isInteger('123') -> true``
+* ``isInteger('$123', '$###') -> true``
+* ``isInteger('microsoft') -> false``
+___
++
+<a name="isLong" ></a>
+
+### <code>isLong</code>
+<code><b>isLong (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
+Checks if the string value is a long value given an optional format according to the rules of ``toLong()``
+* ``isLong('123') -> true``
+* ``isLong('$123', '$###') -> true``
+* ``isLong('gunchus') -> false``
+___
++
+<a name="isMatch" ></a>
+
+### <code>isMatch</code>
+<code><b>isMatch([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
+Checks if the row is matched at lookup. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
+* ``isMatch()``
+* ``isMatch(1)``
+___
++
+<a name="isNan" ></a>
+
+### <code>isNan</code>
+<code><b>isNan (<i>\<value1\></i> : integral) => boolean</b></code><br/><br/>
+Checks if the value isn't a number (NaN).
+* ``isNan(10.2) => false``
+___
++
+<a name="isNull" ></a>
+
+### <code>isNull</code>
+<code><b>isNull(<i>&lt;value1&gt;</i> : any) => boolean</b></code><br/><br/>
+Checks if the value is NULL.
+* ``isNull(NULL()) -> true``
+* ``isNull('') -> false``
+___
++
+<a name="isShort" ></a>
+
+### <code>isShort</code>
+<code><b>isShort (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
+Checks if the string value is a short value given an optional format according to the rules of ``toShort()``
+* ``isShort('123') -> true``
+* ``isShort('$123', '$###') -> true``
+* ``isShort('microsoft') -> false``
+___
++
+<a name="isTimestamp" ></a>
+
+### <code>isTimestamp</code>
+<code><b>isTimestamp (<i>\<value1\></i> : string, [&lt;format&gt;: string]) => boolean</b></code><br/><br/>
+Checks if the input date string is a timestamp using an optional input timestamp format. Refer to Java's SimpleDateFormat for available formats. If the format is omitted, the default pattern ``yyyy-[M]M-[d]d hh:mm:ss[.f...]`` is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamps support up to millisecond accuracy with a maximum value of 999.
+* ``isTimestamp('2016-12-31 00:12:00') -> true``
+* ``isTimestamp('2016-12-31T00:12:00', 'yyyy-MM-dd\\'T\\'HH:mm:ss', 'PST') -> true``
+* ``isTimestamp('2012-8222.18') -> false``
+___
++
+<a name="isUpdate" ></a>
+
+### <code>isUpdate</code>
+<code><b>isUpdate([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
+Checks if the row is marked for update. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
+* ``isUpdate()``
+* ``isUpdate(1)``
+___
++
+<a name="isUpsert" ></a>
+
+### <code>isUpsert</code>
+<code><b>isUpsert([<i>&lt;value1&gt;</i> : integer]) => boolean</b></code><br/><br/>
+Checks if the row is marked for upsert. For transformations taking more than one input stream you can pass the (1-based) index of the stream. The stream index should be either 1 or 2 and the default value is 1.
+* ``isUpsert()``
+* ``isUpsert(1)``
+___
+
+## J
+
+<a name="jaroWinkler" ></a>
+
+### <code>jaroWinkler</code>
+<code><b>jaroWinkler(<i>&lt;value1&gt;</i> : string, <i>&lt;value2&gt;</i> : string) => double</b></code><br/><br/>
+Gets the Jaro-Winkler similarity between two strings (1.0 for identical strings).
+* ``jaroWinkler('frog', 'frog') => 1.0``
+___
+
+## K
+
+<a name="keyValues" ></a>
+
+### <code>keyValues</code>
+<code><b>keyValues(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : array) => map</b></code><br/><br/>
+Creates a map of key/values. The first parameter is an array of keys and second is the array of values. Both arrays should have equal length.
+* ``keyValues(['bojjus', 'appa'], ['gunchus', 'ammi']) => ['bojjus' -> 'gunchus', 'appa' -> 'ammi']``
+___
++
+<a name="kurtosis" ></a>
+
+### <code>kurtosis</code>
+<code><b>kurtosis(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Gets the kurtosis of a column.
+* ``kurtosis(sales)``
+___
++
+<a name="kurtosisIf" ></a>
+
+### <code>kurtosisIf</code>
+<code><b>kurtosisIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Based on a criteria, gets the kurtosis of a column.
+* ``kurtosisIf(region == 'West', sales)``
+___
+
+## L
+
+<a name="lag" ></a>
+
+### <code>lag</code>
+<code><b>lag(<i>&lt;value&gt;</i> : any, [<i>&lt;number of rows to look before&gt;</i> : number], [<i>&lt;default value&gt;</i> : any]) => any</b></code><br/><br/>
+Gets the value of the first parameter evaluated n rows before the current row. The second parameter is the number of rows to look back and the default value is 1. If there aren't as many rows, a value of null is returned unless a default value is specified.
+* ``lag(amount, 2)``
+* ``lag(amount, 2000, 100)``
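As a rough mental model (not the data flow runtime), the look-back behavior can be sketched in Python over a plain list standing in for an ordered window partition; `lag_value` is a hypothetical helper name:

```python
def lag_value(values, row, offset=1, default=None):
    # Value evaluated `offset` rows before the current row;
    # the default is returned when the window runs past the start.
    i = row - offset
    return values[i] if i >= 0 else default

amounts = [100, 200, 300, 400]
print(lag_value(amounts, 3, 2))      # two rows before row 3 -> 200
print(lag_value(amounts, 0, 2, -1))  # not enough rows -> default -1
```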
+___
++
+<a name="last" ></a>
+
+### <code>last</code>
+<code><b>last(<i>&lt;value1&gt;</i> : any, [<i>&lt;value2&gt;</i> : boolean]) => any</b></code><br/><br/>
+Gets the last value of a column group. If the second parameter ignoreNulls is omitted, it's assumed false.
+* ``last(sales)``
+* ``last(sales, false)``
+___
++
+<a name="lastDayOfMonth" ></a>
+
+### <code>lastDayOfMonth</code>
+<code><b>lastDayOfMonth(<i>&lt;value1&gt;</i> : datetime) => date</b></code><br/><br/>
+Gets the last date of the month given a date.
+* ``lastDayOfMonth(toDate('2009-01-12')) -> toDate('2009-01-31')``
+___
++
+<a name="lead" ></a>
+
+### <code>lead</code>
+<code><b>lead(<i>&lt;value&gt;</i> : any, [<i>&lt;number of rows to look after&gt;</i> : number], [<i>&lt;default value&gt;</i> : any]) => any</b></code><br/><br/>
+Gets the value of the first parameter evaluated n rows after the current row. The second parameter is the number of rows to look forward and the default value is 1. If there aren't as many rows, a value of null is returned unless a default value is specified.
+* ``lead(amount, 2)``
+* ``lead(amount, 2000, 100)``
+___
++
+<a name="least" ></a>
+
+### <code>least</code>
+<code><b>least(<i>&lt;value1&gt;</i> : any, ...) => any</b></code><br/><br/>
+Returns the smallest value from the list of values.
+* ``least(10, 30, 15, 20) -> 10``
+* ``least(toDate('2010-12-12'), toDate('2011-12-12'), toDate('2000-12-12')) -> toDate('2000-12-12')``
+___
++
+<a name="left" ></a>
+
+### <code>left</code>
+<code><b>left(<i>&lt;string to subset&gt;</i> : string, <i>&lt;number of characters&gt;</i> : integral) => string</b></code><br/><br/>
+Extracts a substring starting at index 1 with the specified number of characters. Same as SUBSTRING(str, 1, n).
+* ``left('bojjus', 2) -> 'bo'``
+* ``left('bojjus', 20) -> 'bojjus'``
+___
++
+<a name="length" ></a>
+
+### <code>length</code>
+<code><b>length(<i>&lt;value1&gt;</i> : string) => integer</b></code><br/><br/>
+Returns the length of the string.
+* ``length('dumbo') -> 5``
+___
++
+<a name="lesser" ></a>
+
+### <code>lesser</code>
+<code><b>lesser(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
+Comparison less operator. Same as < operator.
+* ``lesser(12, 24) -> true``
+* ``('abcd' < 'abc') -> false``
+* ``(toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS') < toTimestamp('2019-02-05 08:21:34.890', 'yyyy-MM-dd HH:mm:ss.SSS')) -> true``
+___
++
+<a name="lesserOrEqual" ></a>
+
+### <code>lesserOrEqual</code>
+<code><b>lesserOrEqual(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
+Comparison lesser than or equal operator. Same as <= operator.
+* ``lesserOrEqual(12, 12) -> true``
+* ``('dumbo' <= 'dum') -> false``
+___
++
+<a name="levenshtein" ></a>
+
+### <code>levenshtein</code>
+<code><b>levenshtein(<i>&lt;from string&gt;</i> : string, <i>&lt;to string&gt;</i> : string) => integer</b></code><br/><br/>
+Gets the levenshtein distance between two strings.
+* ``levenshtein('boys', 'girls') -> 4``
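For intuition, the Levenshtein distance is the minimum number of single-character inserts, deletes, and substitutions needed to turn one string into the other; a minimal dynamic-programming sketch in Python (illustrative, not the engine's implementation):

```python
def levenshtein(a: str, b: str) -> int:
    # Row-by-row DP over the edit-distance matrix.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[-1]

print(levenshtein('boys', 'girls'))  # -> 4
```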
+___
++
+<a name="like" ></a>
+
+### <code>like</code>
+<code><b>like(<i>&lt;string&gt;</i> : string, <i>&lt;pattern match&gt;</i> : string) => boolean</b></code><br/><br/>
+The pattern is a string that is matched literally, with two exceptions: ``_`` matches any one character in the input (similar to ``.`` in ```posix``` regular expressions) and ``%`` matches zero or more characters in the input (similar to ``.*`` in ```posix``` regular expressions). The escape character is ``\``. If an escape character precedes a special symbol or another escape character, the following character is matched literally. It's invalid to escape any other character.
+* ``like('icecream', 'ice%') -> true``
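A rough Python model of the two wildcard symbols (escape handling omitted for brevity; `like_match` is a hypothetical helper, not part of any API):

```python
import re

def like_match(value: str, pattern: str) -> bool:
    # '_' matches any single character, '%' matches zero or more;
    # everything else is matched literally.
    regex = ''.join('.' if c == '_' else '.*' if c == '%' else re.escape(c)
                    for c in pattern)
    return re.fullmatch(regex, value) is not None

print(like_match('icecream', 'ice%'))  # True
print(like_match('icecream', 'ice_'))  # False: '_' matches exactly one char
```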
+___
++
+<a name="locate" ></a>
+
+### <code>locate</code>
+<code><b>locate(<i>&lt;substring to find&gt;</i> : string, <i>&lt;string&gt;</i> : string, [<i>&lt;from index - 1-based&gt;</i> : integral]) => integer</b></code><br/><br/>
+Finds the position (1-based) of the substring within a string, starting at a certain position. If the position is omitted, the search starts from the beginning of the string. 0 is returned if not found.
+* ``locate('mbo', 'dumbo') -> 3``
+* ``locate('o', 'microsoft', 6) -> 7``
+* ``locate('bad', 'good') -> 0``
+___
++
+<a name="log" ></a>
+
+### <code>log</code>
+<code><b>log(<i>&lt;value1&gt;</i> : number, [<i>&lt;value2&gt;</i> : number]) => double</b></code><br/><br/>
+Calculates log value. An optional base can be supplied; otherwise Euler's number is used.
+* ``log(100, 10) -> 2``
+___
++
+<a name="log10" ></a>
+
+### <code>log10</code>
+<code><b>log10(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates log value based on 10 base.
+* ``log10(100) -> 2``
+___
++
+<a name="lookup" ></a>
+
+### <code>lookup</code>
+<code><b>lookup(key, key2, ...) => complex[]</b></code><br/><br/>
+Looks up the first row from the cached sink using the specified keys that match the keys from the cached sink.
+* ``cacheSink#lookup(movieId)``
+___
++
+<a name="lower" ></a>
+
+### <code>lower</code>
+<code><b>lower(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
+Lowercases a string.
+* ``lower('GunChus') -> 'gunchus'``
+___
++
+<a name="lpad" ></a>
+
+### <code>lpad</code>
+<code><b>lpad(<i>&lt;string to pad&gt;</i> : string, <i>&lt;final padded length&gt;</i> : integral, <i>&lt;padding&gt;</i> : string) => string</b></code><br/><br/>
+Left pads the string by the supplied padding until it is of a certain length. If the string is equal to or greater than the length, then it's trimmed to the length.
+* ``lpad('dumbo', 10, '-') -> '-----dumbo'``
+* ``lpad('dumbo', 4, '-') -> 'dumb'``
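The pad-or-trim behavior can be modeled in Python (a sketch under the description above, not the engine's implementation):

```python
def lpad(s: str, length: int, padding: str) -> str:
    # Pad on the left until `length`; trim to `length` if already longer.
    if len(s) >= length:
        return s[:length]
    fill = (padding * length)[:length - len(s)]  # repeat, then cut to fit
    return fill + s

print(lpad('dumbo', 10, '-'))  # '-----dumbo'
print(lpad('dumbo', 4, '-'))   # 'dumb'
```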
+
+___
++
+<a name="ltrim" ></a>
+
+### <code>ltrim</code>
+<code><b>ltrim(<i>&lt;string to trim&gt;</i> : string, [<i>&lt;trim characters&gt;</i> : string]) => string</b></code><br/><br/>
+Left trims a string of leading characters. If second parameter is unspecified, it trims whitespace. Else it trims any character specified in the second parameter.
+* ``ltrim(' dumbo ') -> 'dumbo '``
+* ``ltrim('!--!du!mbo!', '-!') -> 'du!mbo!'``
+___
+
+## M
+
+<a name="map" ></a>
+
+### <code>map</code>
+<code><b>map(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : unaryfunction) => any</b></code><br/><br/>
+Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item.
+* ``map([1, 2, 3, 4], #item + 2) -> [3, 4, 5, 6]``
+* ``map(['a', 'b', 'c', 'd'], #item + '_processed') -> ['a_processed', 'b_processed', 'c_processed', 'd_processed']``
+___
++
+<a name="mapAssociation" ></a>
+
+### <code>mapAssociation</code>
+<code><b>mapAssociation(<i>&lt;value1&gt;</i> : map, <i>&lt;value2&gt;</i> : binaryFunction) => array</b></code><br/><br/>
+Transforms a map by associating the keys to new values. Returns an array. It takes a mapping function where you can address the item as #key and current value as #value.
+* ``mapAssociation(['bojjus' -> 'gunchus', 'appa' -> 'ammi'], @(key = #key, value = #value)) => [@(key = 'bojjus', value = 'gunchus'), @(key = 'appa', value = 'ammi')]``
+___
++
+<a name="mapIf" ></a>
+
+### <code>mapIf</code>
+<code><b>mapIf (<i>\<value1\></i> : array, <i>\<value2\></i> : binaryfunction, \<value3\>: binaryFunction) => any</b></code><br/><br/>
+Conditionally maps an array to another array of same or smaller length. The values can be of any datatype including structTypes. It takes a mapping function where you can address the item in the array as #item and current index as #index. For deeply nested maps you can refer to the parent maps using the ``#item_[n](#item_1, #index_1...)`` notation.
+* ``mapIf([10, 20, 30], #item > 10, #item + 5) -> [25, 35]``
+* ``mapIf(['icecream', 'cake', 'soda'], length(#item) > 4, upper(#item)) -> ['ICECREAM', 'CAKE']``
+___
++
+<a name="mapIndex" ></a>
+
+### <code>mapIndex</code>
+<code><b>mapIndex(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : binaryfunction) => any</b></code><br/><br/>
+Maps each element of the array to a new element using the provided expression. Map expects a reference to one element in the expression function as #item and a reference to the element index as #index.
+* ``mapIndex([1, 2, 3, 4], #item + 2 + #index) -> [4, 6, 8, 10]``
+___
++
+<a name="mapLoop" ></a>
+
+### <code>mapLoop</code>
+<code><b>mapLoop(<i>\<value1\></i> : integer, <i>\<value2\></i> : unaryfunction) => any</b></code><br/><br/>
+Loops through from 1 to length to create an array of that length. It takes a mapping function where you can address the index in the array as #index. For deeply nested maps you can refer to the parent maps using the #index_n(#index_1, #index_2...) notation.
+* ``mapLoop(3, #index * 10) -> [10, 20, 30]``
+___
++
+<a name="max" ></a>
+
+### <code>max</code>
+<code><b>max(<i>&lt;value1&gt;</i> : any) => any</b></code><br/><br/>
+Gets the maximum value of a column.
+* ``max(sales)``
+___
++
+<a name="maxIf" ></a>
+
+### <code>maxIf</code>
+<code><b>maxIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
+Based on a criteria, gets the maximum value of a column.
+* ``maxIf(region == 'West', sales)``
+___
++
+<a name="md5" ></a>
+
+### <code>md5</code>
+<code><b>md5(<i>&lt;value1&gt;</i> : any, ...) => string</b></code><br/><br/>
+Calculates the MD5 digest of a set of columns of varying primitive datatypes and returns a 32-character hex string. It can be used to calculate a fingerprint for a row.
+* ``md5(5, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4')) -> '4ce8a880bd621a1ffad0bca905e1bc5a'``
+___
++
+<a name="mean" ></a>
+
+### <code>mean</code>
+<code><b>mean(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
+Gets the mean of values of a column. Same as AVG.
+* ``mean(sales)``
+___
++
+<a name="meanIf" ></a>
+
+### <code>meanIf</code>
+<code><b>meanIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => number</b></code><br/><br/>
+Based on a criteria, gets the mean of values of a column. Same as avgIf.
+* ``meanIf(region == 'West', sales)``
+___
++
+<a name="millisecond" ></a>
+
+### <code>millisecond</code>
+<code><b>millisecond(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => integer</b></code><br/><br/>
+Gets the millisecond value of a date. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
+* ``millisecond(toTimestamp('2009-07-30 12:58:59.871', 'yyyy-MM-dd HH:mm:ss.SSS')) -> 871``
+___
++
+<a name="milliseconds" ></a>
+
+### <code>milliseconds</code>
+<code><b>milliseconds(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
+Duration in milliseconds for number of milliseconds.
+* ``milliseconds(2) -> 2L``
+___
++
+<a name="min" ></a>
+
+### <code>min</code>
+<code><b>min(<i>&lt;value1&gt;</i> : any) => any</b></code><br/><br/>
+Gets the minimum value of a column.
+* ``min(sales)``
+___
++
+<a name="minIf" ></a>
+
+### <code>minIf</code>
+<code><b>minIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
+Based on a criteria, gets the minimum value of a column.
+* ``minIf(region == 'West', sales)``
+___
++
+<a name="minus" ></a>
+
+### <code>minus</code>
+<code><b>minus(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
+Subtracts numbers. Subtract number of days from a date. Subtract duration from a timestamp. Subtract two timestamps to get difference in milliseconds. Same as the - operator.
+* ``minus(20, 10) -> 10``
+* ``20 - 10 -> 10``
+* ``minus(toDate('2012-12-15'), 3) -> toDate('2012-12-12')``
+* ``toDate('2012-12-15') - 3 -> toDate('2012-12-12')``
+* ``toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS') + (days(1) + hours(2) - seconds(10)) -> toTimestamp('2019-02-04 07:19:18.871', 'yyyy-MM-dd HH:mm:ss.SSS')``
+* ``toTimestamp('2019-02-03 05:21:34.851', 'yyyy-MM-dd HH:mm:ss.SSS') - toTimestamp('2019-02-03 05:21:36.923', 'yyyy-MM-dd HH:mm:ss.SSS') -> -2072``
+___
++
+<a name="minute" ></a>
+
+### <code>minute</code>
+<code><b>minute(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => integer</b></code><br/><br/>
+Gets the minute value of a timestamp. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
+* ``minute(toTimestamp('2009-07-30 12:58:59')) -> 58``
+* ``minute(toTimestamp('2009-07-30 12:58:59'), 'PST') -> 58``
+___
++
+<a name="minutes" ></a>
+
+### <code>minutes</code>
+<code><b>minutes(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
+Duration in milliseconds for number of minutes.
+* ``minutes(2) -> 120000L``
+___
++
+<a name="mlookup" ></a>
+
+### <code>mlookup</code>
+<code><b>mlookup(key, key2, ...) => complex[]</b></code><br/><br/>
+Looks up all matching rows from the cached sink using the specified keys that match the keys from the cached sink.
+* ``cacheSink#mlookup(movieId)``
+___
++
+<a name="mod" ></a>
+
+### <code>mod</code>
+<code><b>mod(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
+Modulus of pair of numbers. Same as the % operator.
+* ``mod(20, 8) -> 4``
+* ``20 % 8 -> 4``
+___
++
+<a name="month" ></a>
+
+### <code>month</code>
+<code><b>month(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
+Gets the month value of a date or timestamp.
+* ``month(toDate('2012-8-8')) -> 8``
+___
++
+<a name="monthsBetween" ></a>
+
+### <code>monthsBetween</code>
+<code><b>monthsBetween(<i>&lt;from date/timestamp&gt;</i> : datetime, <i>&lt;to date/timestamp&gt;</i> : datetime, [<i>&lt;roundoff&gt;</i> : boolean], [<i>&lt;time zone&gt;</i> : string]) => double</b></code><br/><br/>
+Gets the number of months between two dates. You can round off the calculation. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
+* ``monthsBetween(toTimestamp('1997-02-28 10:30:00'), toDate('1996-10-30')) -> 3.94959677``
+___
++
+<a name="multiply" ></a>
+
+### <code>multiply</code>
+<code><b>multiply(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
+Multiplies pair of numbers. Same as the * operator.
+* ``multiply(20, 10) -> 200``
+* ``20 * 10 -> 200``
+___
+
+## N
+
+<a name="negate" ></a>
+
+### <code>negate</code>
+<code><b>negate(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
+Negates a number. Turns positive numbers to negative and vice versa.
+* ``negate(13) -> -13``
+___
++
+<a name="nextSequence" ></a>
+
+### <code>nextSequence</code>
+<code><b>nextSequence() => long</b></code><br/><br/>
+Returns the next unique sequence. The number is consecutive only within a partition and is prefixed by the partitionId.
+* ``nextSequence() == 12313112 -> false``
+___
++
+<a name="normalize" ></a>
+
+### <code>normalize</code>
+<code><b>normalize(<i>&lt;String to normalize&gt;</i> : string) => string</b></code><br/><br/>
+Normalizes the string value to separate accented unicode characters.
+* ``regexReplace(normalize('boýs'), `\p{M}`, '') -> 'boys'``
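The same separate-then-strip idea can be reproduced in Python with the standard `unicodedata` module (an analogy, not the data flow runtime):

```python
import unicodedata

def strip_accents(value: str) -> str:
    # NFD decomposition separates base letters from combining accent
    # marks (the \p{M} category), which are then dropped.
    decomposed = unicodedata.normalize('NFD', value)
    return ''.join(c for c in decomposed if not unicodedata.combining(c))

print(strip_accents('boýs'))  # 'boys'
```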
+___
++
+<a name="not" ></a>
+
+### <code>not</code>
+<code><b>not(<i>&lt;value1&gt;</i> : boolean) => boolean</b></code><br/><br/>
+Logical negation operator.
+* ``not(true) -> false``
+* ``not(10 == 20) -> true``
+___
++
+<a name="notEquals" ></a>
+
+### <code>notEquals</code>
+<code><b>notEquals(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => boolean</b></code><br/><br/>
+Comparison not equals operator. Same as != operator.
+* ``12 != 24 -> true``
+* ``'bojjus' != 'bo' + 'jjus' -> false``
+___
++
+<a name="notNull" ></a>
+
+### <code>notNull</code>
+<code><b>notNull(<i>&lt;value1&gt;</i> : any) => boolean</b></code><br/><br/>
+Checks if the value isn't NULL.
+* ``notNull(NULL()) -> false``
+* ``notNull('') -> true``
+___
++
+<a name="nTile" ></a>
+
+### <code>nTile</code>
+<code><b>nTile([<i>&lt;value1&gt;</i> : integer]) => integer</b></code><br/><br/>
+The ```NTile``` function divides the rows for each window partition into `n` buckets ranging from 1 to at most `n`. Bucket values differ by at most 1. If the number of rows in the partition doesn't divide evenly into the number of buckets, the remainder values are distributed one per bucket, starting with the first bucket. The ```NTile``` function is useful for the calculation of tertiles, quartiles, deciles, and other common summary statistics. The function calculates two variables during initialization: the size of a regular bucket, and the number of buckets that will have one extra row added to them. Both variables are based on the size of the current partition. During the calculation process the function keeps track of the current row number, the current bucket number, and the row number at which the bucket will change (bucketThreshold). When the current row number reaches the bucket threshold, the bucket value is increased by one and the threshold is increased by the bucket size (plus one extra if the current bucket is padded).
+* ``nTile()``
+* ``nTile(numOfBuckets)``
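The bucket-size arithmetic described above can be sketched in Python; `ntile_buckets` is a hypothetical helper that returns the size of each bucket for a partition:

```python
def ntile_buckets(row_count: int, n: int):
    # Regular bucket size plus one extra row for the first
    # `row_count % n` (padded) buckets.
    base, extra = divmod(row_count, n)
    return [base + 1 if b < extra else base for b in range(n)]

print(ntile_buckets(10, 4))  # [3, 3, 2, 2]: sizes differ by at most 1
```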
+___
++
+<a name="null" ></a>
+
+### <code>null</code>
+<code><b>null() => null</b></code><br/><br/>
+Returns a NULL value. Use the function syntax (``null()``) if there's a column named 'null'. Any operation that uses a NULL value will result in a NULL.
+* ``isNull('dumbo' + null) -> true``
+* ``isNull(10 * null) -> true``
+* ``isNull('') -> false``
+* ``isNull(10 + 20) -> false``
+* ``isNull(10/0) -> true``
+___
+
+## O
+
+<a name="or" ></a>
+
+### <code>or</code>
+<code><b>or(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : boolean) => boolean</b></code><br/><br/>
+Logical OR operator. Same as ||.
+* ``or(true, false) -> true``
+* ``true || false -> true``
+___
++
+<a name="originColumns" ></a>
+
+### <code>originColumns</code>
+<code><b>originColumns(<i>&lt;streamName&gt;</i> : string) => any</b></code><br/><br/>
+Gets all output columns for an origin stream where columns were created. Must be enclosed in another function.
+* ``array(toString(originColumns('source1')))``
+___
++
+<a name="output" ></a>
+
+### <code>output</code>
+<code><b>output() => any</b></code><br/><br/>
+Returns the first row of the results of the cache sink
+* ``cacheSink#output()``
+___
++
+<a name="outputs" ></a>
+
+### <code>outputs</code>
+<code><b>outputs() => any</b></code><br/><br/>
+Returns the entire output row set of the results of the cache sink.
+* ``cacheSink#outputs()``
+___
+
+## P
+
+<a name="partitionId" ></a>
+
+### <code>partitionId</code>
+<code><b>partitionId() => integer</b></code><br/><br/>
+Returns the current partition ID the input row is in.
+* ``partitionId()``
+___
++
+<a name="pMod" ></a>
+
+### <code>pMod</code>
+<code><b>pMod(<i>&lt;value1&gt;</i> : any, <i>&lt;value2&gt;</i> : any) => any</b></code><br/><br/>
+Positive Modulus of pair of numbers.
+* ``pmod(-20, 8) -> 4``
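Positive modulus differs from a C-style remainder only for negative inputs; a one-line Python sketch of the idea:

```python
def pmod(a: int, b: int) -> int:
    # Always returns a result in [0, b) for positive b,
    # unlike C-style remainder, which keeps the sign of `a`.
    return ((a % b) + b) % b

print(pmod(-20, 8))  # 4
```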
+___
++
+<a name="power" ></a>
+
+### <code>power</code>
+<code><b>power(<i>&lt;value1&gt;</i> : number, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Raises one number to the power of another.
+* ``power(10, 2) -> 100``
+___
+
+## R
+
+<a name="radians" ></a>
+
+### <code>radians</code>
+<code><b>radians(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Converts degrees to radians
+* ``radians(180) => 3.141592653589793``
+___
++
+<a name="random" ></a>
+
+### <code>random</code>
+<code><b>random(<i>&lt;value1&gt;</i> : integral) => long</b></code><br/><br/>
+Returns a random number given an optional seed within a partition. The seed should be a fixed value and is used with the partitionId to produce random values
+* ``random(1) == 1 -> false``
+___
++
+<a name="rank" ></a>
+
+### <code>rank</code>
+<code><b>rank() => integer</b></code><br/><br/>
+Computes the rank of a value in a group of values specified in a window's order by clause. The result is one plus the number of rows preceding or equal to the current row in the ordering of the partition. The values will produce gaps in the sequence. Rank works even when data isn't sorted and looks for change in values.
+* ``rank()``
+___
++
+<a name="reassociate" ></a>
+
+### <code>reassociate</code>
+<code><b>reassociate(<i>&lt;value1&gt;</i> : map, <i>&lt;value2&gt;</i> : binaryFunction) => map</b></code><br/><br/>
+Transforms a map by associating the keys to new values. It takes a mapping function where you can address the item as #key and current value as #value.
+* ``reassociate(['fruit' -> 'apple', 'vegetable' -> 'tomato'], substring(#key, 1, 1) + substring(#value, 1, 1)) => ['fruit' -> 'fa', 'vegetable' -> 'vt']``
+___
++
+<a name="reduce" ></a>
+
+### <code>reduce</code>
+<code><b>reduce(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : any, <i>&lt;value3&gt;</i> : binaryfunction, <i>&lt;value4&gt;</i> : unaryfunction) => any</b></code><br/><br/>
+Accumulates elements in an array. Reduce expects a reference to an accumulator and one element in the first expression function as #acc and #item and it expects the resulting value as #result to be used in the second expression function.
+* ``toString(reduce(['1', '2', '3', '4'], '0', #acc + #item, #result)) -> '01234'``
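The accumulator semantics map directly onto a conventional fold; for example, with Python's `functools.reduce` (the final `#result` step is the identity here):

```python
from functools import reduce as py_reduce

# '#acc + #item' folds each element into the accumulator, starting at '0'.
items = ['1', '2', '3', '4']
result = py_reduce(lambda acc, item: acc + item, items, '0')
print(result)  # '01234'
```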
+___
++
+<a name="regexExtract" ></a>
+
+### <code>regexExtract</code>
+<code><b>regexExtract(<i>&lt;string&gt;</i> : string, <i>&lt;regex to find&gt;</i> : string, [<i>&lt;match group 1-based index&gt;</i> : integral]) => string</b></code><br/><br/>
+Extracts a matching substring for a given regex pattern. The last parameter identifies the match group and defaults to 1 if omitted. Surround the regex with backquotes (`` ` ``) to match a string without escaping.
+* ``regexExtract('Cost is between 600 and 800 dollars', '(\\d+) and (\\d+)', 2) -> '800'``
+* ``regexExtract('Cost is between 600 and 800 dollars', `(\d+) and (\d+)`, 2) -> '800'``
+___
++
+<a name="regexMatch" ></a>
+
+### <code>regexMatch</code>
+<code><b>regexMatch(<i>&lt;string&gt;</i> : string, <i>&lt;regex to match&gt;</i> : string) => boolean</b></code><br/><br/>
+Checks if the string matches the given regex pattern. Surround the regex with backquotes (`` ` ``) to match a string without escaping.
+* ``regexMatch('200.50', '(\\d+).(\\d+)') -> true``
+* ``regexMatch('200.50', `(\d+).(\d+)`) -> true``
+___
++
+<a name="regexReplace" ></a>
+
+### <code>regexReplace</code>
+<code><b>regexReplace(<i>&lt;string&gt;</i> : string, <i>&lt;regex to find&gt;</i> : string, <i>&lt;substring to replace&gt;</i> : string) => string</b></code><br/><br/>
+Replaces all occurrences of a regex pattern with another substring in the given string. Enclose the regex in back quotes (`` `<regex>` ``) to match a string without escaping.
+* ``regexReplace('100 and 200', '(\\d+)', 'bojjus') -> 'bojjus and bojjus'``
+* ``regexReplace('100 and 200', `(\d+)`, 'gunchus') -> 'gunchus and gunchus'``
+___
++
+<a name="regexSplit" ></a>
+
+### <code>regexSplit</code>
+<code><b>regexSplit(<i>&lt;string to split&gt;</i> : string, <i>&lt;regex expression&gt;</i> : string) => array</b></code><br/><br/>
+Splits a string based on a regex delimiter and returns an array of strings.
+* ``regexSplit('bojjusAgunchusBdumbo', `[CAB]`) -> ['bojjus', 'gunchus', 'dumbo']``
+* ``regexSplit('bojjusAgunchusBdumboC', `[CAB]`) -> ['bojjus', 'gunchus', 'dumbo', '']``
+* ``(regexSplit('bojjusAgunchusBdumboC', `[CAB]`)[1]) -> 'bojjus'``
+* ``isNull(regexSplit('bojjusAgunchusBdumboC', `[CAB]`)[20]) -> true``
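Python's `re.split` behaves the same way on a trailing delimiter, which makes it a handy sketch of these semantics (note that data flow array indexing is 1-based, while Python's is 0-based):

```python
import re

# Rough analogue of regexSplit: a trailing delimiter yields a
# trailing empty string in the result.
parts = re.split('[CAB]', 'bojjusAgunchusBdumboC')
print(parts)      # ['bojjus', 'gunchus', 'dumbo', '']
print(parts[0])   # 'bojjus' (data flow would address this as [1])
```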
+___
++
+<a name="replace" ></a>
+
+### <code>replace</code>
+<code><b>replace(<i>&lt;string&gt;</i> : string, <i>&lt;substring to find&gt;</i> : string, [<i>&lt;substring to replace&gt;</i> : string]) => string</b></code><br/><br/>
+Replaces all occurrences of a substring with another substring in the given string. If the last parameter is omitted, it's defaulted to an empty string.
+* ``replace('doggie dog', 'dog', 'cat') -> 'catgie cat'``
+* ``replace('doggie dog', 'dog', '') -> 'gie '``
+* ``replace('doggie dog', 'dog') -> 'gie '``
+___
++
+<a name="reverse" ></a>
+
+### <code>reverse</code>
+<code><b>reverse(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
+Reverses a string.
+* ``reverse('gunchus') -> 'suhcnug'``
+___
++
+<a name="right" ></a>
+
+### <code>right</code>
+<code><b>right(<i>&lt;string to subset&gt;</i> : string, <i>&lt;number of characters&gt;</i> : integral) => string</b></code><br/><br/>
+Extracts a substring with number of characters from the right. Same as SUBSTRING(str, LENGTH(str) - n, n).
+* ``right('bojjus', 2) -> 'us'``
+* ``right('bojjus', 20) -> 'bojjus'``
+___
++
+<a name="rlike" ></a>
+
+### <code>rlike</code>
+<code><b>rlike(<i>&lt;string&gt;</i> : string, <i>&lt;pattern match&gt;</i> : string) => boolean</b></code><br/><br/>
+Checks if the string matches the given regex pattern.
+* ``rlike('200.50', `(\d+).(\d+)`) -> true``
+* ``rlike('bogus', `M[0-9]+.*`) -> false``
+___
++
+<a name="round" ></a>
+
+### <code>round</code>
+<code><b>round(<i>&lt;number&gt;</i> : number, [<i>&lt;scale to round&gt;</i> : number], [<i>&lt;rounding option&gt;</i> : integral]) => double</b></code><br/><br/>
+Rounds a number given an optional scale and an optional rounding mode. If the scale is omitted, it's defaulted to 0. If the mode is omitted, it's defaulted to ROUND_HALF_UP(5). The values for rounding include
+1 - ROUND_UP
+2 - ROUND_DOWN
+3 - ROUND_CEILING
+4 - ROUND_FLOOR
+5 - ROUND_HALF_UP
+6 - ROUND_HALF_DOWN
+7 - ROUND_HALF_EVEN
+8 - ROUND_UNNECESSARY.
+* ``round(100.123) -> 100.0``
+* ``round(2.5, 0) -> 3.0``
+* ``round(5.3999999999999995, 2, 7) -> 5.40``
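Python's `decimal` module exposes the same named rounding modes, which can sketch the behavior (an illustration only; the data flow runtime may differ in edge cases):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# Mode 5 (ROUND_HALF_UP) and mode 7 (ROUND_HALF_EVEN) as Decimal roundings.
half_up = Decimal('2.5').quantize(Decimal('1'), rounding=ROUND_HALF_UP)
half_even = Decimal('5.3999999999999995').quantize(
    Decimal('0.01'), rounding=ROUND_HALF_EVEN)
print(half_up)    # 3
print(half_even)  # 5.40
```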
+___
++
+<a name="rowNumber" ></a>
+
+### <code>rowNumber</code>
+<code><b>rowNumber() => integer</b></code><br/><br/>
+Assigns a sequential row numbering for rows in a window starting with 1.
+* ``rowNumber()``
+++
+<a name="rpad" ></a>
+
+### <code>rpad</code>
+<code><b>rpad(<i>&lt;string to pad&gt;</i> : string, <i>&lt;final padded length&gt;</i> : integral, <i>&lt;padding&gt;</i> : string) => string</b></code><br/><br/>
+Right pads the string by the supplied padding until it is of a certain length. If the string is equal to or greater than the length, then it's trimmed to the length.
+* ``rpad('dumbo', 10, '-') -> 'dumbo-----'``
+* ``rpad('dumbo', 4, '-') -> 'dumb'``
+* ``rpad('dumbo', 8, '<>') -> 'dumbo<><'``
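One way to picture the pad-then-trim behavior is this rough Python sketch (an illustration, not the data flow runtime):

```python
# Rough analogue of rpad: append enough repeats of the padding,
# then cut the result down to the requested length.
def rpad(s, length, padding):
    return (s + padding * length)[:length]

print(rpad('dumbo', 10, '-'))   # 'dumbo-----'
print(rpad('dumbo', 4, '-'))    # 'dumb'
print(rpad('dumbo', 8, '<>'))   # 'dumbo<><'
```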
+___
++
+<a name="rtrim" ></a>
+
+### <code>rtrim</code>
+<code><b>rtrim(<i>&lt;string to trim&gt;</i> : string, [<i>&lt;trim characters&gt;</i> : string]) => string</b></code><br/><br/>
+Right trims a string of trailing characters. If the second parameter is unspecified, it trims whitespace. Otherwise, it trims any character specified in the second parameter.
+* ``rtrim(' dumbo ') -> ' dumbo'``
+* ``rtrim('!--!du!mbo!', '-!') -> '!--!du!mbo'``
+___
+
+## S
+
+<a name="second" ></a>
+
+### <code>second</code>
+<code><b>second(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => integer</b></code><br/><br/>
+Gets the second value of a date. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. The local timezone is used as the default. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
+* ``second(toTimestamp('2009-07-30 12:58:59')) -> 59``
+___
++
+<a name="seconds" ></a>
+
+### <code>seconds</code>
+<code><b>seconds(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
+Duration in milliseconds for number of seconds.
+* ``seconds(2) -> 2000L``
+___
++
+<a name="setBitSet" ></a>
+
+### <code>setBitSet</code>
+<code><b>setBitSet (<i>\<value1\></i>: array, <i>\<value2\></i>:array) => array</b></code><br/><br/>
+Sets bit positions in this bitset.
+* ``setBitSet(toBitSet([10, 32]), [98]) => [4294968320L, 17179869184L]``
+___
++
+<a name="sha1" ></a>
+
+### <code>sha1</code>
+<code><b>sha1(<i>&lt;value1&gt;</i> : any, ...) => string</b></code><br/><br/>
+Calculates the SHA-1 digest of a set of columns of varying primitive datatypes and returns a 40-character hex string. It can be used to calculate a fingerprint for a row.
+* ``sha1(5, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4')) -> '46d3b478e8ec4e1f3b453ac3d8e59d5854e282bb'``
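The row-fingerprint idea can be sketched with `hashlib` (a hypothetical serialization; the service's exact encoding of mixed types may differ, so only the 40-hex-character shape of the result is illustrated):

```python
import hashlib

# Hypothetical row fingerprint: join string forms of the column
# values and take the SHA-1 hex digest.
row = [5, 'gunchus', 8.2, 'bojjus', True]
digest = hashlib.sha1(''.join(map(str, row)).encode('utf-8')).hexdigest()
print(len(digest))  # 40
```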
+___
++
+<a name="sha2" ></a>
+
+### <code>sha2</code>
+<code><b>sha2(<i>&lt;value1&gt;</i> : integer, <i>&lt;value2&gt;</i> : any, ...) => string</b></code><br/><br/>
+Calculates the SHA-2 digest of a set of columns of varying primitive datatypes given a bit length, which can only be of values 0(256), 224, 256, 384, or 512. It can be used to calculate a fingerprint for a row.
+* ``sha2(256, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4')) -> 'afe8a553b1761c67d76f8c31ceef7f71b66a1ee6f4e6d3b5478bf68b47d06bd3'``
+___
++
+<a name="sin" ></a>
+
+### <code>sin</code>
+<code><b>sin(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates a sine value.
+* ``sin(2) -> 0.9092974268256817``
+___
++
+<a name="sinh" ></a>
+
+### <code>sinh</code>
+<code><b>sinh(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates a hyperbolic sine value.
+* ``sinh(0) -> 0.0``
+___
++
+<a name="size" ></a>
+
+### <code>size</code>
+<code><b>size(<i>&lt;value1&gt;</i> : any) => integer</b></code><br/><br/>
+Finds the size of an array or map type.
+* ``size(['element1', 'element2']) -> 2``
+* ``size([1,2,3]) -> 3``
+___
++
+<a name="skewness" ></a>
+
+### <code>skewness</code>
+<code><b>skewness(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Gets the skewness of a column.
+* ``skewness(sales)``
+___
++
+<a name="skewnessIf" ></a>
+
+### <code>skewnessIf</code>
+<code><b>skewnessIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Based on a criteria, gets the skewness of a column.
+* ``skewnessIf(region == 'West', sales)``
+___
++
+<a name="slice" ></a>
+
+### <code>slice</code>
+<code><b>slice(<i>&lt;array to slice&gt;</i> : array, <i>&lt;from 1-based index&gt;</i> : integral, [<i>&lt;number of items&gt;</i> : integral]) => array</b></code><br/><br/>
+Extracts a subset of an array from a position. Position is 1 based. If the length is omitted, it's defaulted to the end of the array.
+* ``slice([10, 20, 30, 40], 1, 2) -> [10, 20]``
+* ``slice([10, 20, 30, 40], 2) -> [20, 30, 40]``
+* ``slice([10, 20, 30, 40], 2)[1] -> 20``
+* ``isNull(slice([10, 20, 30, 40], 2)[0]) -> true``
+* ``isNull(slice([10, 20, 30, 40], 2)[20]) -> true``
+* ``slice(['a', 'b', 'c', 'd'], 8) -> []``
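The 1-based indexing and the omitted-length default can be sketched in Python (an illustration of the semantics, not the data flow runtime):

```python
# Rough analogue of slice: positions are 1-based, and omitting the
# item count takes the rest of the array.
def df_slice(arr, start, n=None):
    end = None if n is None else start - 1 + n
    return arr[start - 1:end]

print(df_slice([10, 20, 30, 40], 1, 2))   # [10, 20]
print(df_slice([10, 20, 30, 40], 2))      # [20, 30, 40]
print(df_slice(['a', 'b', 'c', 'd'], 8))  # []
```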
+___
++
+<a name="sort" ></a>
+
+### <code>sort</code>
+<code><b>sort(<i>&lt;value1&gt;</i> : array, <i>&lt;value2&gt;</i> : binaryfunction) => array</b></code><br/><br/>
+Sorts the array using the provided predicate function. Sort expects a reference to two consecutive elements in the expression function as #item1 and #item2.
+* ``sort([4, 8, 2, 3], compare(#item1, #item2)) -> [2, 3, 4, 8]``
+* ``sort(['a3', 'b2', 'c1'], iif(right(#item1, 1) >= right(#item2, 1), 1, -1)) -> ['c1', 'b2', 'a3']``
+___
++
+<a name="soundex" ></a>
+
+### <code>soundex</code>
+<code><b>soundex(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
+Gets the ```soundex``` code for the string.
+* ``soundex('genius') -> 'G520'``
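For intuition, here is a rough Python sketch of the classic Soundex algorithm (keep the first letter, encode the rest by consonant group, drop repeats and vowels, pad with zeros to four characters); the service's implementation may differ in edge cases:

```python
def soundex(word):
    groups = {'1': 'BFPV', '2': 'CGJKQSXZ', '3': 'DT',
              '4': 'L', '5': 'MN', '6': 'R'}
    codes = {c: d for d, letters in groups.items() for c in letters}
    word = word.upper()
    result, last = word[0], codes.get(word[0], '')
    for c in word[1:]:
        code = codes.get(c, '')
        if code and code != last:
            result += code
        if c not in 'HW':  # H and W don't break a run of equal codes
            last = code
    return (result + '000')[:4]

print(soundex('genius'))  # 'G520'
```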
+___
++
+<a name="split" ></a>
+
+### <code>split</code>
+<code><b>split(<i>&lt;string to split&gt;</i> : string, <i>&lt;split characters&gt;</i> : string) => array</b></code><br/><br/>
+Splits a string based on a delimiter and returns an array of strings.
+* ``split('bojjus,guchus,dumbo', ',') -> ['bojjus', 'guchus', 'dumbo']``
+* ``split('bojjus,guchus,dumbo', '|') -> ['bojjus,guchus,dumbo']``
+* ``split('bojjus, guchus, dumbo', ', ') -> ['bojjus', 'guchus', 'dumbo']``
+* ``split('bojjus, guchus, dumbo', ', ')[1] -> 'bojjus'``
+* ``isNull(split('bojjus, guchus, dumbo', ', ')[0]) -> true``
+* ``isNull(split('bojjus, guchus, dumbo', ', ')[20]) -> true``
+* ``split('bojjusguchusdumbo', ',') -> ['bojjusguchusdumbo']``
+___
++
+<a name="sqrt" ></a>
+
+### <code>sqrt</code>
+<code><b>sqrt(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates the square root of a number.
+* ``sqrt(9) -> 3``
+___
++
+<a name="startsWith" ></a>
+
+### <code>startsWith</code>
+<code><b>startsWith(<i>&lt;string&gt;</i> : string, <i>&lt;substring to check&gt;</i> : string) => boolean</b></code><br/><br/>
+Checks if the string starts with the supplied string.
+* ``startsWith('dumbo', 'du') -> true``
+___
++
+<a name="stddev" ></a>
+
+### <code>stddev</code>
+<code><b>stddev(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Gets the standard deviation of a column.
+* ``stdDev(sales)``
+___
++
+<a name="stddevIf" ></a>
+
+### <code>stddevIf</code>
+<code><b>stddevIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Based on a criteria, gets the standard deviation of a column.
+* ``stddevIf(region == 'West', sales)``
+___
++
+<a name="stddevPopulation" ></a>
+
+### <code>stddevPopulation</code>
+<code><b>stddevPopulation(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Gets the population standard deviation of a column.
+* ``stddevPopulation(sales)``
+___
++
+<a name="stddevPopulationIf" ></a>
+
+### <code>stddevPopulationIf</code>
+<code><b>stddevPopulationIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Based on a criteria, gets the population standard deviation of a column.
+* ``stddevPopulationIf(region == 'West', sales)``
+___
++
+<a name="stddevSample" ></a>
+
+### <code>stddevSample</code>
+<code><b>stddevSample(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Gets the sample standard deviation of a column.
+* ``stddevSample(sales)``
+___
++
+<a name="stddevSampleIf" ></a>
+
+### <code>stddevSampleIf</code>
+<code><b>stddevSampleIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Based on a criteria, gets the sample standard deviation of a column.
+* ``stddevSampleIf(region == 'West', sales)``
+___
++
+<a name="subDays" ></a>
+
+### <code>subDays</code>
+<code><b>subDays(<i>&lt;date/timestamp&gt;</i> : datetime, <i>&lt;days to subtract&gt;</i> : integral) => datetime</b></code><br/><br/>
+Subtract days from a date or timestamp. Same as the - operator for date.
+* ``subDays(toDate('2016-08-08'), 1) -> toDate('2016-08-07')``
+___
++
+<a name="subMonths" ></a>
+
+### <code>subMonths</code>
+<code><b>subMonths(<i>&lt;date/timestamp&gt;</i> : datetime, <i>&lt;months to subtract&gt;</i> : integral) => datetime</b></code><br/><br/>
+Subtract months from a date or timestamp.
+* ``subMonths(toDate('2016-09-30'), 1) -> toDate('2016-08-31')``
+___
++
+<a name="substring" ></a>
+
+### <code>substring</code>
+<code><b>substring(<i>&lt;string to subset&gt;</i> : string, <i>&lt;from 1-based index&gt;</i> : integral, [<i>&lt;number of characters&gt;</i> : integral]) => string</b></code><br/><br/>
+Extracts a substring of a certain length from a position. Position is 1 based. If the length is omitted, it's defaulted to end of the string.
+* ``substring('Cat in the hat', 5, 2) -> 'in'``
+* ``substring('Cat in the hat', 5, 100) -> 'in the hat'``
+* ``substring('Cat in the hat', 5) -> 'in the hat'``
+* ``substring('Cat in the hat', 100, 100) -> ''``
+___
++
+<a name="sum" ></a>
+
+### <code>sum</code>
+<code><b>sum(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
+Gets the aggregate sum of a numeric column.
+* ``sum(col)``
+___
++
+<a name="sumDistinct" ></a>
+
+### <code>sumDistinct</code>
+<code><b>sumDistinct(<i>&lt;value1&gt;</i> : number) => number</b></code><br/><br/>
+Gets the aggregate sum of distinct values of a numeric column.
+* ``sumDistinct(col)``
+___
++
+<a name="sumDistinctIf" ></a>
+
+### <code>sumDistinctIf</code>
+<code><b>sumDistinctIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => number</b></code><br/><br/>
+Based on criteria gets the aggregate sum of a numeric column. The condition can be based on any column.
+* ``sumDistinctIf(state == 'CA' && commission < 10000, sales)``
+* ``sumDistinctIf(true, sales)``
+___
++
+<a name="sumIf" ></a>
+
+### <code>sumIf</code>
+<code><b>sumIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => number</b></code><br/><br/>
+Based on criteria gets the aggregate sum of a numeric column. The condition can be based on any column.
+* ``sumIf(state == 'CA' && commission < 10000, sales)``
+* ``sumIf(true, sales)``
+___
+
+## T
+
+<a name="tan" ></a>
+
+### <code>tan</code>
+<code><b>tan(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates a tangent value.
+* ``tan(0) -> 0.0``
+___
++
+<a name="tanh" ></a>
+
+### <code>tanh</code>
+<code><b>tanh(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Calculates a hyperbolic tangent value.
+* ``tanh(0) -> 0.0``
+___
++
+<a name="toBase64" ></a>
+
+### <code>toBase64</code>
+<code><b>toBase64(<i>&lt;value1&gt;</i> : string, [<i>&lt;encoding type&gt;</i> : string]) => string</b></code><br/><br/>
+Encodes the given string in base64. You can optionally pass the encoding type.
+* ``toBase64('bojjus') -> 'Ym9qanVz'``
+* ``toBase64('± 25000, € 5.000,- |', 'Windows-1252') -> 'sSAyNTAwMCwggCA1LjAwMCwtIHw='``
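The encode-then-base64 behavior can be sketched with Python's `base64` module (an illustration of the semantics, not the data flow runtime):

```python
import base64

# Rough analogue of toBase64: encode the string's bytes in the
# requested character encoding (UTF-8 by default), then base64 them.
def to_base64(s, encoding='utf-8'):
    return base64.b64encode(s.encode(encoding)).decode('ascii')

print(to_base64('bojjus'))  # 'Ym9qanVz'
```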
+
+___
+
+<a name="toBinary" ></a>
+
+### <code>toBinary</code>
+<code><b>toBinary(<i>&lt;value1&gt;</i> : any) => binary</b></code><br/><br/>
+Converts any numeric/date/timestamp/string to binary representation.
+* ``toBinary(3) -> [0x11]``
+___
++
+<a name="toBoolean" ></a>
+
+### <code>toBoolean</code>
+<code><b>toBoolean(<i>&lt;value1&gt;</i> : string) => boolean</b></code><br/><br/>
+Converts a value of ('t', 'true', 'y', 'yes', '1') to true, a value of ('f', 'false', 'n', 'no', '0') to false, and NULL for any other value.
+* ``toBoolean('true') -> true``
+* ``toBoolean('n') -> false``
+* ``isNull(toBoolean('truthy')) -> true``
+___
++
+<a name="toByte" ></a>
+
+### <code>toByte</code>
+<code><b>toByte(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => byte</b></code><br/><br/>
+Converts any numeric or string to a byte value. An optional Java decimal format can be used for the conversion.
+* ``toByte(123) -> 123``
+* ``toByte(0xFF) -> -1``
+* ``toByte('123') -> 123``
+___
++
+<a name="toDate" ></a>
+
+### <code>toDate</code>
+<code><b>toDate(<i>&lt;string&gt;</i> : any, [<i>&lt;date format&gt;</i> : string]) => date</b></code><br/><br/>
+Converts an input date string to a date using an optional input date format. Refer to Java's `SimpleDateFormat` class for available formats. If the input date format is omitted, the default format is yyyy-[M]M-[d]d. Accepted formats are: [ yyyy, yyyy-[M]M, yyyy-[M]M-[d]d, yyyy-[M]M-[d]dT* ].
+* ``toDate('2012-8-18') -> toDate('2012-08-18')``
+* ``toDate('12/18/2012', 'MM/dd/yyyy') -> toDate('2012-12-18')``
+___
++
+<a name="toDecimal" ></a>
+
+### <code>toDecimal</code>
+<code><b>toDecimal(<i>&lt;value&gt;</i> : any, [<i>&lt;precision&gt;</i> : integral], [<i>&lt;scale&gt;</i> : integral], [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => decimal(10,0)</b></code><br/><br/>
+Converts any numeric or string to a decimal value. If precision and scale aren't specified, it's defaulted to (10,2). An optional Java decimal format can be used for the conversion. An optional locale can be specified in BCP47 format, like en-US, de, or zh-CN.
+* ``toDecimal(123.45) -> 123.45``
+* ``toDecimal('123.45', 8, 4) -> 123.4500``
+* ``toDecimal('$123.45', 8, 4,'$###.00') -> 123.4500``
+* ``toDecimal('€123,45', 10, 2, '€###,##', 'de') -> 123.45``
+___
++
+<a name="toDouble" ></a>
+
+### <code>toDouble</code>
+<code><b>toDouble(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => double</b></code><br/><br/>
+Converts any numeric or string to a double value. An optional Java decimal format can be used for the conversion. An optional locale can be specified in BCP47 format, like en-US, de, or zh-CN.
+* ``toDouble(123.45) -> 123.45``
+* ``toDouble('123.45') -> 123.45``
+* ``toDouble('$123.45', '$###.00') -> 123.45``
+* ``toDouble('€123,45', '€###,##', 'de') -> 123.45``
+___
++
+<a name="toFloat" ></a>
+
+### <code>toFloat</code>
+<code><b>toFloat(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => float</b></code><br/><br/>
+Converts any numeric or string to a float value. An optional Java decimal format can be used for the conversion. Truncates any double.
+* ``toFloat(123.45) -> 123.45f``
+* ``toFloat('123.45') -> 123.45f``
+* ``toFloat('$123.45', '$###.00') -> 123.45f``
+___
++
+<a name="toInteger" ></a>
+
+### <code>toInteger</code>
+<code><b>toInteger(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => integer</b></code><br/><br/>
+Converts any numeric or string to an integer value. An optional Java decimal format can be used for the conversion. Truncates any long, float, double.
+* ``toInteger(123) -> 123``
+* ``toInteger('123') -> 123``
+* ``toInteger('$123', '$###') -> 123``
+___
++
+<a name="toLong" ></a>
+
+### <code>toLong</code>
+<code><b>toLong(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => long</b></code><br/><br/>
+Converts any numeric or string to a long value. An optional Java decimal format can be used for the conversion. Truncates any float, double.
+* ``toLong(123) -> 123``
+* ``toLong('123') -> 123``
+* ``toLong('$123', '$###') -> 123``
+___
++
+<a name="toShort" ></a>
+
+### <code>toShort</code>
+<code><b>toShort(<i>&lt;value&gt;</i> : any, [<i>&lt;format&gt;</i> : string], [<i>&lt;locale&gt;</i> : string]) => short</b></code><br/><br/>
+Converts any numeric or string to a short value. An optional Java decimal format can be used for the conversion. Truncates any integer, long, float, double.
+* ``toShort(123) -> 123``
+* ``toShort('123') -> 123``
+* ``toShort('$123', '$###') -> 123``
+___
++
+<a name="toString" ></a>
+
+### <code>toString</code>
+<code><b>toString(<i>&lt;value&gt;</i> : any, [<i>&lt;number format/date format&gt;</i> : string], [<i>&lt;date locale&gt;</i> : string]) => string</b></code><br/><br/>
+Converts a primitive datatype to a string. For numbers and dates, a format can be specified. If unspecified, the system default is picked. Java decimal format is used for numbers. Refer to Java SimpleDateFormat for all possible date formats; the default format is yyyy-MM-dd. For date or timestamp, a locale can be optionally specified.
+* ``toString(10) -> '10'``
+* ``toString('engineer') -> 'engineer'``
+* ``toString(123456.789, '##,###.##') -> '123,456.79'``
+* ``toString(123.78, '000000.000') -> '000123.780'``
+* ``toString(12345, '##0.#####E0') -> '12.345E3'``
+* ``toString(toDate('2018-12-31')) -> '2018-12-31'``
+* ``isNull(toString(toDate('2018-12-31', 'MM/dd/yy'))) -> true``
+* ``toString(4 == 20) -> 'false'``
+* ``toString(toDate('12/31/18', 'MM/dd/yy', 'es-ES'), 'MM/dd/yy', 'de-DE')``
+___
+
+<a name="toTimestamp" ></a>
+
+### <code>toTimestamp</code>
+<code><b>toTimestamp(<i>&lt;string&gt;</i> : any, [<i>&lt;timestamp format&gt;</i> : string], [<i>&lt;time zone&gt;</i> : string]) => timestamp</b></code><br/><br/>
+Converts a string to a timestamp given an optional timestamp format. If the timestamp format is omitted, the default pattern yyyy-[M]M-[d]d hh:mm:ss[.f...] is used. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. Timestamp supports up to millisecond accuracy with value of 999. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
+* ``toTimestamp('2016-12-31 00:12:00') -> toTimestamp('2016-12-31 00:12:00')``
+* ``toTimestamp('2016-12-31T00:12:00', 'yyyy-MM-dd\'T\'HH:mm:ss', 'PST') -> toTimestamp('2016-12-31 00:12:00')``
+* ``toTimestamp('12/31/2016T00:12:00', 'MM/dd/yyyy\'T\'HH:mm:ss') -> toTimestamp('2016-12-31 00:12:00')``
+* ``millisecond(toTimestamp('2019-02-03 05:19:28.871', 'yyyy-MM-dd HH:mm:ss.SSS')) -> 871``
+___
++
+<a name="toUTC" ></a>
+
+### <code>toUTC</code>
+<code><b>toUTC(<i>&lt;value1&gt;</i> : timestamp, [<i>&lt;value2&gt;</i> : string]) => timestamp</b></code><br/><br/>
+Converts the timestamp to UTC. You can pass an optional timezone in the form of 'GMT', 'PST', 'UTC', 'America/Cayman'. It's defaulted to the current timezone. Refer to Java's `SimpleDateFormat` class for available formats. https://docs.oracle.com/javase/8/docs/api/java/text/SimpleDateFormat.html.
+* ``toUTC(currentTimestamp()) == toTimestamp('2050-12-12 19:18:12') -> false``
+* ``toUTC(currentTimestamp(), 'Asia/Seoul') != toTimestamp('2050-12-12 19:18:12') -> true``
+++
+<a name="translate" ></a>
+
+### <code>translate</code>
+<code><b>translate(<i>&lt;string to translate&gt;</i> : string, <i>&lt;lookup characters&gt;</i> : string, <i>&lt;replace characters&gt;</i> : string) => string</b></code><br/><br/>
+Replaces one set of characters with another set of characters in the string. Characters have a one-to-one replacement.
+* ``translate('(bojjus)', '()', '[]') -> '[bojjus]'``
+* ``translate('(gunchus)', '()', '[') -> '[gunchus'``
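The one-to-one mapping (and the deletion of lookup characters that have no replacement) can be sketched with Python's `str.translate` (an illustration, not the data flow runtime):

```python
# Rough analogue of translate: each lookup character maps to the
# replacement at the same position; unmatched positions delete.
def df_translate(s, lookup, replace):
    table = {ord(c): (replace[i] if i < len(replace) else None)
             for i, c in enumerate(lookup)}
    return s.translate(table)

print(df_translate('(bojjus)', '()', '[]'))  # '[bojjus]'
print(df_translate('(gunchus)', '()', '['))  # '[gunchus'
```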
+___
++
+<a name="trim" ></a>
+
+### <code>trim</code>
+<code><b>trim(<i>&lt;string to trim&gt;</i> : string, [<i>&lt;trim characters&gt;</i> : string]) => string</b></code><br/><br/>
+Trims a string of leading and trailing characters. If the second parameter is unspecified, it trims whitespace. Otherwise, it trims any character specified in the second parameter.
+* ``trim(' dumbo ') -> 'dumbo'``
+* ``trim('!--!du!mbo!', '-!') -> 'du!mbo'``
+___
++
+<a name="true" ></a>
+
+### <code>true</code>
+<code><b>true() => boolean</b></code><br/><br/>
+Always returns a true value. Use the function syntax `true()` if there's a column named 'true'.
+* ``(10 + 20 == 30) -> true``
+* ``(10 + 20 == 30) -> true()``
+___
++
+<a name="typeMatch" ></a>
+
+### <code>typeMatch</code>
+<code><b>typeMatch(<i>&lt;type&gt;</i> : string, <i>&lt;base type&gt;</i> : string) => boolean</b></code><br/><br/>
+Matches the type of the column. Can only be used in pattern expressions. `number` matches short, integer, long, double, float, or decimal; `integral` matches short, integer, or long; `fractional` matches double, float, or decimal; and `datetime` matches date or timestamp types.
+* ``typeMatch(type, 'number')``
+* ``typeMatch('date', 'datetime')``
+___
+
+## U
+
+<a name="unescape" ></a>
+
+### <code>unescape</code>
+<code><b>unescape(<i>&lt;string_to_escape&gt;</i> : string, <i>&lt;format&gt;</i> : string) => string</b></code><br/><br/>
+Unescapes a string according to a format. Literal values for acceptable format are 'json', 'xml', 'ecmascript', 'html', 'java'.
+* ```unescape('{\\\\\"value\\\\\": 10}', 'json')```
+* ```'{\\\"value\\\": 10}'```
+___
++
+<a name="unfold" ></a>
+
+### <code>unfold</code>
+<code><b>unfold (<i>&lt;value1&gt;</i>: array) => any</b></code><br/><br/>
+Unfolds an array into a set of rows and repeats the values for the remaining columns in every row.
+* ``unfold(addresses) => any``
+* ``unfold( @(name = salesPerson, sales = salesAmount) ) => any``
+___
++
+<a name="unhex" ></a>
+
+### <code>unhex</code>
+<code><b>unhex(<i>\<value1\></i>: string) => binary</b></code><br/><br/>
+Unhexes a binary value from its string representation. It can be used with sha2 or md5 to convert from a string to a binary representation.
+* ``unhex('1fadbe') -> toBinary([toByte(0x1f), toByte(0xad), toByte(0xbe)])``
+* ``unhex(md5(5, 'gunchus', 8.2, 'bojjus', true, toDate('2010-4-4'))) -> toBinary([toByte(0x4c),toByte(0xe8),toByte(0xa8),toByte(0x80),toByte(0xbd),toByte(0x62),toByte(0x1a),toByte(0x1f),toByte(0xfa),toByte(0xd0),toByte(0xbc),toByte(0xa9),toByte(0x05),toByte(0xe1),toByte(0xbc),toByte(0x5a)])``
+++
+<a name="union" ></a>
+
+### <code>union</code>
+<code><b>union(<i>&lt;value1&gt;</i>: array, <i>&lt;value2&gt;</i> : array) => array</b></code><br/><br/>
+Returns a union set of distinct items from 2 arrays.
+* ``union([10, 20, 30], [20, 40]) => [10, 20, 30, 40]``
+___
+
++
+<a name="upper" ></a>
+
+### <code>upper</code>
+<code><b>upper(<i>&lt;value1&gt;</i> : string) => string</b></code><br/><br/>
+Uppercases a string.
+* ``upper('bojjus') -> 'BOJJUS'``
+___
++
+<a name="uuid" ></a>
+
+### <code>uuid</code>
+<code><b>uuid() => string</b></code><br/><br/>
+Returns the generated UUID.
+* ``uuid()``
+___
+
+## V
+
+<a name="variance" ></a>
+
+### <code>variance</code>
+<code><b>variance(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Gets the variance of a column.
+* ``variance(sales)``
+___
++
+<a name="varianceIf" ></a>
+
+### <code>varianceIf</code>
+<code><b>varianceIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Based on a criteria, gets the variance of a column.
+* ``varianceIf(region == 'West', sales)``
+___
++
+<a name="variancePopulation" ></a>
+
+### <code>variancePopulation</code>
+<code><b>variancePopulation(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Gets the population variance of a column.
+* ``variancePopulation(sales)``
+___
++
+<a name="variancePopulationIf" ></a>
+
+### <code>variancePopulationIf</code>
+<code><b>variancePopulationIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Based on a criteria, gets the population variance of a column.
+* ``variancePopulationIf(region == 'West', sales)``
+___
++
+<a name="varianceSample" ></a>
+
+### <code>varianceSample</code>
+<code><b>varianceSample(<i>&lt;value1&gt;</i> : number) => double</b></code><br/><br/>
+Gets the unbiased variance of a column.
+* ``varianceSample(sales)``
+___
++
+<a name="varianceSampleIf" ></a>
+
+### <code>varianceSampleIf</code>
+<code><b>varianceSampleIf(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : number) => double</b></code><br/><br/>
+Based on a criteria, gets the unbiased variance of a column.
+* ``varianceSampleIf(region == 'West', sales)``
+___
+
+## W
+
+<a name="weekOfYear" ></a>
+
+### <code>weekOfYear</code>
+<code><b>weekOfYear(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
+Gets the week of the year given a date.
+* ``weekOfYear(toDate('2008-02-20')) -> 8``
+___
++
+<a name="weeks" ></a>
+
+### <code>weeks</code>
+<code><b>weeks(<i>&lt;value1&gt;</i> : integer) => long</b></code><br/><br/>
+Duration in milliseconds for number of weeks.
+* ``weeks(2) -> 1209600000L``
+___
+
+## X
+
+<a name="xor" ></a>
+
+### <code>xor</code>
+<code><b>xor(<i>&lt;value1&gt;</i> : boolean, <i>&lt;value2&gt;</i> : boolean) => boolean</b></code><br/><br/>
+Logical XOR operator. Same as ^ operator.
+* ``xor(true, false) -> true``
+* ``xor(true, true) -> false``
+* ``true ^ false -> true``
+___
+
+## Y
+
+<a name="year" ></a>
+
+### <code>year</code>
+<code><b>year(<i>&lt;value1&gt;</i> : datetime) => integer</b></code><br/><br/>
+Gets the year value of a date.
+* ``year(toDate('2012-8-8')) -> 2012``
+
+## Next steps
+
+- List of all [aggregate functions](data-flow-aggregate-functions.md).
+- List of all [array functions](data-flow-array-functions.md).
+- List of all [cached lookup functions](data-flow-cached-lookup-functions.md).
+- List of all [conversion functions](data-flow-conversion-functions.md).
+- List of all [date and time functions](data-flow-date-time-functions.md).
+- List of all [expression functions](data-flow-expression-functions.md).
+- List of all [map functions](data-flow-map-functions.md).
+- List of all [metafunctions](data-flow-metafunctions.md).
+- List of all [window functions](data-flow-window-functions.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Lookup https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-lookup.md
Title: Lookup transformation in mapping data flow
+ Title: Lookup transformations in mapping data flow
-description: Reference data from another source using the lookup transformation in mapping data flow for Azure Data Factory and Synapse Analytics pipelines.
+description: Reference data from another source using lookup transformations in mapping data flow for Azure Data Factory and Synapse Analytics pipelines.
Last updated 09/09/2021
-# Lookup transformation in mapping data flow
+# Lookup transformations in mapping data flow
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
A lookup transformation is similar to a left outer join. All rows from the prima
**Match multiple rows:** If enabled, a row with multiple matches in the primary stream will return multiple rows. Otherwise, only a single row will be returned based upon the 'Match on' condition.
-**Match on:** Only visible if 'Match multiple rows' is not selected. Choose whether to match on any row, the first match, or the last match. Any row is recommended as it executes the fastest. If first row or last row is selected, you'll be required to specify sort conditions.
+**Match on:** Only visible if 'Match multiple rows' isn't selected. Choose whether to match on any row, the first match, or the last match. Any row is recommended as it executes the fastest. If first row or last row is selected, you'll be required to specify sort conditions.
-**Lookup conditions:** Choose which columns to match on. If the equality condition is met, then the rows will be considered a match. Hover and select 'Computed column' to extract a value using the [data flow expression language](data-flow-expression-functions.md).
+**Lookup conditions:** Choose which columns to match on. If the equality condition is met, then the rows will be considered a match. Hover and select 'Computed column' to extract a value using the [data flow expression language](data-transformation-functions.md).
All columns from both streams are included in the output data. To drop duplicate or unwanted columns, add a [select transformation](data-flow-select.md) after your lookup transformation. Columns can also be dropped or renamed in a sink transformation.
data-factory Data Flow Map Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-map-functions.md
+
+ Title: Map functions in the mapping data flow
+
+description: Learn about map functions in mapping data flow.
++++++ Last updated : 02/02/2022++
+# Map functions in mapping data flow
+++
+The following articles provide details about map functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
+
+## Map function list
+
+Map functions perform operations on map data types.
+
+| Map function | Task |
+|-|-|
+| [associate](data-flow-expressions-usage.md#associate) | Creates a map of key/values. All the keys & values should be of the same type. If no items are specified, it's defaulted to a map of string to string type. Same as a ```[ -> ]``` creation operator. Keys and values should alternate with each other.|
+| [keyValues](data-flow-expressions-usage.md#keyValues) | Creates a map of key/values. The first parameter is an array of keys and second is the array of values. Both arrays should have equal length.|
+| [mapAssociation](data-flow-expressions-usage.md#mapAssociation) | Transforms a map by associating the keys to new values. Returns an array. It takes a mapping function where you can address the item as #key and current value as #value. |
+| [reassociate](data-flow-expressions-usage.md#reassociate) | Transforms a map by associating the keys to new values. It takes a mapping function where you can address the item as #key and current value as #value. |
+|||
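The table above can be paraphrased with a Python dict analogy (an illustration of the documented semantics, not ADF expression syntax):

```python
# associate('k1', 'v1', 'k2', 'v2') -- keys and values alternate
def associate(*items):
    assert len(items) % 2 == 0, "keys and values must alternate"
    return dict(zip(items[::2], items[1::2]))

# keyValues(['k1', 'k2'], ['v1', 'v2']) -- parallel arrays of equal length
def key_values(keys, values):
    assert len(keys) == len(values), "both arrays should have equal length"
    return dict(zip(keys, values))

# mapAssociation(m, f) -- maps over #key/#value pairs and returns an array
def map_association(m, f):
    return [f(k, v) for k, v in m.items()]
```

For example, `map_association({"a": 1}, lambda k, v: f"{k}={v}")` produces a list with one entry per key/value pair.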
+
+## Next steps
+
+- List of all [aggregate functions](data-flow-aggregate-functions.md).
+- List of all [array functions](data-flow-array-functions.md).
+- List of all [cached lookup functions](data-flow-cached-lookup-functions.md).
+- List of all [conversion functions](data-flow-conversion-functions.md).
+- List of all [date and time functions](data-flow-date-time-functions.md).
+- List of all [expression functions](data-flow-expression-functions.md).
+- List of all [metafunctions](data-flow-metafunctions.md).
+- List of all [window functions](data-flow-window-functions.md).
+- [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Metafunctions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-metafunctions.md
+
+ Title: Metafunctions in the mapping data flow
+
+description: Learn about metafunctions in mapping data flow.
++++++ Last updated : 02/02/2022++
+# Metafunctions in mapping data flow
+++
+The following articles provide details about metafunctions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
+
+## Metafunction list
+
+Metafunctions primarily operate on metadata in your data flow.
+
+| Metafunction | Task |
+|-|-|
+| [byItem](data-flow-expressions-usage.md#byItem) | Finds a subitem within a structure or array of structures. If there are multiple matches, the first match is returned. If there's no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion actions (? date, ? string ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported, but you can use parameter substitutions. |
+| [byOrigin](data-flow-expressions-usage.md#byOrigin) | Selects a column value by name in the origin stream. The second argument is the origin stream name. If there are multiple matches, the first match is returned. If there's no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported, but you can use parameter substitutions. |
+| [byOrigins](data-flow-expressions-usage.md#byOrigins) | Selects an array of columns by name in the stream. The second argument is the stream where it originated from. If there are multiple matches, the first match is returned. If there's no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported, but you can use parameter substitutions. |
+| [byName](data-flow-expressions-usage.md#byName) | Selects a column value by name in the stream. You can pass an optional stream name as the second argument. If there are multiple matches, the first match is returned. If there's no match, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported, but you can use parameter substitutions. |
+| [byNames](data-flow-expressions-usage.md#byNames) | Selects an array of columns by name in the stream. You can pass an optional stream name as the second argument. If there are multiple matches, the first match is returned. If there are no matches for a column, the entire output is a NULL value. The returned value requires a type conversion function (toDate, toString, ...). Column names known at design time should be addressed just by their name. Computed inputs aren't supported, but you can use parameter substitutions. |
+| [byPath](data-flow-expressions-usage.md#byPath) | Finds a hierarchical path by name in the stream. You can pass an optional stream name as the second argument. If no such path is found, it returns null. Column names/paths known at design time should be addressed just by their name or dot notation path. Computed inputs aren't supported, but you can use parameter substitutions. |
+| [byPosition](data-flow-expressions-usage.md#byPosition) | Selects a column value by its relative position (1-based) in the stream. If the position is out of bounds, it returns a NULL value. The returned value has to be type converted by one of the type conversion functions (TO_DATE, TO_STRING ...). Computed inputs aren't supported, but you can use parameter substitutions. |
+| [hasPath](data-flow-expressions-usage.md#hasPath) | Checks if a certain hierarchical path exists by name in the stream. You can pass an optional stream name as the second argument. Column names/paths known at design time should be addressed just by their name or dot notation path. Computed inputs aren't supported, but you can use parameter substitutions. |
+| [originColumns](data-flow-expressions-usage.md#originColumns) | Gets all output columns for an origin stream where columns were created. Must be enclosed in another function.|
+| [hex](data-flow-expressions-usage.md#hex) | Returns a hex string representation of a binary value. |
+| [unhex](data-flow-expressions-usage.md#unhex) | Unhexes a binary value from its string representation. This can be used with sha2 or md5 to convert from a string to a binary representation. |
+|||
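The hex/unhex round trip the table describes can be sketched with Python's built-in byte helpers (an analogy to the documented behavior, not ADF expression syntax):

```python
import hashlib

# hex(): binary value -> hex string; unhex(): hex string -> binary value.
digest = hashlib.md5(b"movies").digest()   # raw bytes, as a hash function returns
hex_form = digest.hex()                    # analogous to hex(digest)
binary_again = bytes.fromhex(hex_form)     # analogous to unhex(hex_form)
```

This mirrors the note that `unhex` is useful with `sha2` or `md5` output when you need the binary form back.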
+
+## Next steps
+
+- List of all [aggregate functions](data-flow-aggregate-functions.md).
+- List of all [array functions](data-flow-array-functions.md).
+- List of all [cached lookup functions](data-flow-cached-lookup-functions.md).
+- List of all [conversion functions](data-flow-conversion-functions.md).
+- List of all [date and time functions](data-flow-date-time-functions.md).
+- List of all [expression functions](data-flow-expression-functions.md).
+- List of all [map functions](data-flow-map-functions.md).
+- List of all [window functions](data-flow-window-functions.md).
+- [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Pivot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-pivot.md
Last updated 09/09/2021
[!INCLUDE[data-flow-preamble](includes/data-flow-preamble.md)]
-Use the pivot transformation to create multiple columns from the unique row values of a single column. Pivot is an aggregation transformation where you select group by columns and generate pivot columns using [aggregate functions](data-flow-expression-functions.md#aggregate-functions).
+Use the pivot transformation to create multiple columns from the unique row values of a single column. Pivot is an aggregation transformation where you select group by columns and generate pivot columns using [aggregate functions](data-flow-aggregate-functions.md).
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4C4YN]
In the section labeled **Value**, you can enter specific row values to be pivote
:::image type="content" source="media/data-flow/pivot4.png" alt-text="Pivoted columns":::
-For each unique pivot key value that becomes a column, generate an aggregated row value for each group. You can create multiple columns per pivot key. Each pivot column must contain at least one [aggregate function](data-flow-expression-functions.md#aggregate-functions).
+For each unique pivot key value that becomes a column, generate an aggregated row value for each group. You can create multiple columns per pivot key. Each pivot column must contain at least one [aggregate function](data-flow-aggregate-functions.md).
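The pivot semantics (group-by columns, a pivot key whose unique values become columns, and an aggregate per cell) can be sketched in plain Python (hypothetical column names; a simplified analogy, not the transformation itself):

```python
from collections import defaultdict

def pivot(rows, group_by, pivot_key, value, agg=sum):
    """Turn unique pivot_key values into columns, aggregating `value` per group."""
    groups = defaultdict(lambda: defaultdict(list))
    for r in rows:
        groups[r[group_by]][r[pivot_key]].append(r[value])
    return {g: {k: agg(vs) for k, vs in cols.items()} for g, cols in groups.items()}
```

For example, grouping movie rows by year with genre as the pivot key yields one aggregated column per genre.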
**Column name pattern:** Select how to format the column name of each pivot column. The outputted column name will be a combination of the pivot key value, the column prefix, and optional prefix, suffix, and middle characters.
data-factory Data Flow Window Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-window-functions.md
+
+ Title: Window functions in the mapping data flow
+
+description: Learn about window functions in mapping data flow.
++++++ Last updated : 02/02/2022++
+# Window functions in mapping data flow
+++
+The following articles provide details about window functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
+
+## Window function list
+
+The following functions are only available in window transformations.
+
+| Window function | Task |
+|-|-|
+| [cumeDist](data-flow-expressions-usage.md#cumeDist) | The CumeDist function computes the position of a value relative to all values in the partition. The result is the number of rows preceding or equal to the current row in the ordering of the partition divided by the total number of rows in the window partition. Any tie values in the ordering will evaluate to the same position. |
+| [denseRank](data-flow-expressions-usage.md#denseRank) | Computes the rank of a value in a group of values specified in a window's order by clause. The result is one plus the number of rows preceding or equal to the current row in the ordering of the partition. The values will not produce gaps in the sequence. Dense rank works even when data is not sorted and looks for changes in values. |
+| [lag](data-flow-expressions-usage.md#lag) | Gets the value of the first parameter evaluated n rows before the current row. The second parameter is the number of rows to look back, and the default value is 1. If there are not as many rows, a value of null is returned unless a default value is specified. |
+| [lead](data-flow-expressions-usage.md#lead) | Gets the value of the first parameter evaluated n rows after the current row. The second parameter is the number of rows to look forward, and the default value is 1. If there are not as many rows, a value of null is returned unless a default value is specified. |
+| [nTile](data-flow-expressions-usage.md#nTile) | The ```NTile``` function divides the rows for each window partition into `n` buckets ranging from 1 to at most `n`. Bucket values will differ by at most 1. If the number of rows in the partition does not divide evenly into the number of buckets, then the remainder values are distributed one per bucket, starting with the first bucket. The ```NTile``` function is useful for the calculation of tertiles, quartiles, deciles, and other common summary statistics. The function calculates two variables during initialization: the size of a regular bucket, and the number of buckets that will have one extra row added to them (when the rows don't divide evenly into the number of buckets). Both variables are based on the size of the current partition. During the calculation process, the function keeps track of the current row number, the current bucket number, and the row number at which the bucket will change (bucketThreshold). When the current row number reaches the bucket threshold, the bucket value is increased by one and the threshold is increased by the bucket size (plus one extra if the current bucket is padded). |
+| [rank](data-flow-expressions-usage.md#rank) | Computes the rank of a value in a group of values specified in a window's order by clause. The result is one plus the number of rows preceding or equal to the current row in the ordering of the partition. The values will produce gaps in the sequence. Rank works even when data is not sorted and looks for changes in values. |
+| [rowNumber](data-flow-expressions-usage.md#rowNumber) | Assigns sequential row numbers to rows in a window, starting with 1. |
+|||
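The `nTile` bucket-size rule and the rank-versus-denseRank gap behavior described above can be sketched in Python (simplified analogies of the documented semantics, not the window transformation itself):

```python
def n_tile(row_count, n):
    """Divide row_count rows into n buckets; remainder rows go one per bucket from bucket 1."""
    base, extra = divmod(row_count, n)
    buckets = []
    for b in range(1, n + 1):
        buckets.extend([b] * (base + (1 if b <= extra else 0)))
    return buckets

def rank(values):
    """rank: one plus the count of values that sort strictly earlier (gaps after ties)."""
    return [1 + sum(1 for x in values if x < v) for v in values]

def dense_rank(values):
    """denseRank: rank over distinct values only (no gaps)."""
    distinct = sorted(set(values))
    return [1 + distinct.index(v) for v in values]
```

Five rows in two buckets give bucket sizes 3 and 2, and tied values show a gap under `rank` but not under `dense_rank`.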
+
+## Next steps
+
+- List of all [aggregate functions](data-flow-aggregate-functions.md).
+- List of all [array functions](data-flow-array-functions.md).
+- List of all [cached lookup functions](data-flow-cached-lookup-functions.md).
+- List of all [conversion functions](data-flow-conversion-functions.md).
+- List of all [date and time functions](data-flow-date-time-functions.md).
+- List of all [expression functions](data-flow-expression-functions.md).
+- List of all [map functions](data-flow-map-functions.md).
+- List of all [metafunctions](data-flow-metafunctions.md).
+- [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
+- [Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory Data Flow Window https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-flow-window.md
Lastly, use the Expression Builder to define the aggregations you wish to use wi
:::image type="content" source="media/data-flow/windows7.png" alt-text="Screenshot shows the result of the windowing action.":::
-The full list of aggregation and analytical functions available for you to use in the Data Flow Expression Language via the Expression Builder are listed in [Data transformation expressions in mapping data flow](data-flow-expression-functions.md).
+The full list of aggregation and analytical functions available for you to use in the Data Flow Expression Language via the Expression Builder is listed in [Data transformation expressions in mapping data flow](data-transformation-functions.md).
## Next steps
data-factory Data Transformation Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/data-transformation-functions.md
+
+ Title: Data transformation functions in the mapping data flow
+
+description: Learn about data transformation functions in mapping data flow.
++++++ Last updated : 02/02/2022++
+# Data transformation expressions in mapping data flow
+++
+Data transformation expressions in Azure Data Factory and Azure Synapse Analytics allow you to transform data in many ways, and are a powerful tool enabling you to customize the behavior of your pipelines in almost every setting and property - anywhere you find a text field that shows the **Add dynamic content** or **Open expression builder** links within your pipeline.
+
+## Transformation expression function list
+
+The following articles provide details about expressions and functions supported by Azure Data Factory and Azure Synapse Analytics in mapping data flows.
+
+- [Aggregate functions](data-flow-aggregate-functions.md)
+- [Array functions](data-flow-array-functions.md)
+- [Cached lookup functions](data-flow-cached-lookup-functions.md)
+- [Conversion functions](data-flow-conversion-functions.md)
+- [Date and time functions](data-flow-date-time-functions.md)
+- [Expression functions](data-flow-expression-functions.md)
+- [Map functions](data-flow-map-functions.md)
+- [Metafunctions](data-flow-metafunctions.md)
+- [Window functions](data-flow-window-functions.md)
+
+For details about the usage of each function in a comprehensive alphabetical list, refer to [Usage details of all data transformation expressions](data-flow-expressions-usage.md).
+
+## Next steps
+
+[Learn how to use Expression Builder](concepts-data-flow-expression-builder.md).
data-factory How To Send Email https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-send-email.md
make your messages dynamic. For example:  
The above expressions will return the relevant error messages from a Copy activity failure, which can then be redirected to your Web activity that sends the email. Refer to the [Copy activity output properties](copy-activity-monitoring.md) article for more details.
-## Next Steps
+## Next steps
[How to send Teams notifications from a pipeline](how-to-send-notifications-to-teams.md)
data-factory How To Send Notifications To Teams https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/how-to-send-notifications-to-teams.md
The above expressions will return the relevant error messages from a failure, wh
We also encourage you to review the Microsoft Teams supported [notification payload schema](https://adaptivecards.io/explorer/AdaptiveCard.html) and further customize the above template to your needs.
-## Next Steps
+## Next steps
[How to send email from a pipeline](how-to-send-email.md)
data-factory Monitor Ssis https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/monitor-ssis.md
When querying SSIS package execution logs on Log Analytics, you can join them u
:::image type="content" source="media/data-factory-monitor-oms/log-analytics-query2.png" alt-text="Querying SSIS package execution logs on Log Analytics":::
-## Next Steps
+## Next steps
[Schema of logs and events](monitor-schema-logs-events.md)
data-factory Parameters Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/parameters-data-flow.md
You can quickly add additional parameters by selecting **New parameter** and spe
Once you've created a data flow with parameters, you can execute it from a pipeline with the Execute Data Flow Activity. After you add the activity to your pipeline canvas, you will be presented with the available data flow parameters in the activity's **Parameters** tab.
-When assigning parameter values, you can use either the [pipeline expression language](control-flow-expression-language-functions.md) or the [data flow expression language](data-flow-expression-functions.md) based on spark types. Each mapping data flow can have any combination of pipeline and data flow expression parameters.
+When assigning parameter values, you can use either the [pipeline expression language](control-flow-expression-language-functions.md) or the [data flow expression language](data-transformation-functions.md) based on spark types. Each mapping data flow can have any combination of pipeline and data flow expression parameters.
:::image type="content" source="media/data-flow/parameter-assign.png" alt-text="Screenshot shows the Parameters tab with Data Flow expression selected for the value of myparam.":::
data-factory Tutorial Data Flow Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow-delta-lake.md
You will generate two data flows in this tutorial. The first data flow is a simpl
### Tutorial objectives
-1. Take the MoviesCSV dataset source from above, form a new Delta Lake from it
-1. Build the logic to updated ratings for 1988 movies to '1'
-1. Delete all movies from 1950
-1. Insert new movies for 2021 by duplicating the movies from 1960
+1. Take the MoviesCSV dataset source from above, and form a new Delta Lake from it.
+1. Build the logic to update ratings for 1988 movies to '1'.
+1. Delete all movies from 1950.
+1. Insert new movies for 2021 by duplicating the movies from 1960.
### Start from a blank data flow canvas
You will generate two data flows in this tutorial. The first data flow is a simpl
:::image type="content" source="media/data-flow/data-flow-tutorial-4.png" alt-text="Sink"::: 1. Here we are using the Delta Lake sink to your ADLS Gen2 data lake and allowing inserts, updates, and deletes.
-1. Note that the Key Columns is a composite key made up of the Movie primary key column and year column. This is because we created fake 2021 movies by duplicating the 1960 rows. This avoids collisions when looking up the existing rows by providing uniqueness.
+1. Note that the Key Columns are a composite key made up of the Movie primary key column and year column. This is because we created fake 2021 movies by duplicating the 1960 rows. This avoids collisions when looking up the existing rows by providing uniqueness.
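The composite-key reasoning in the step above can be shown with a tiny sketch (hypothetical movie IDs): duplicating 1960 rows as 2021 rows reuses the same movie IDs, so the movie ID alone no longer identifies a row, while pairing it with year restores uniqueness.

```python
rows_1960 = [(1, 1960), (2, 1960)]
rows_2021 = [(movie_id, 2021) for movie_id, _ in rows_1960]  # duplicated movies

single_key = [movie_id for movie_id, _ in rows_1960 + rows_2021]  # movie ID only
composite_key = rows_1960 + rows_2021                             # (movie ID, year)
```

The single-column key collides across the duplicated rows; the composite key does not.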
### Download completed sample [Here is a sample solution for the Delta pipeline with a data flow for update/delete rows in the lake:](https://github.com/kromerm/adfdataflowdocs/blob/master/sampledata/DeltaPipeline.zip) ## Next steps
-Learn more about the [data flow expression language](data-flow-expression-functions.md).
+Learn more about the [data flow expression language](data-transformation-functions.md).
data-factory Tutorial Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/tutorial-data-flow.md
The pipeline in this tutorial runs a data flow that aggregates the average ratin
> * Test run the pipeline. > * Monitor a Data Flow activity
-Learn more about the [data flow expression language](data-flow-expression-functions.md).
+Learn more about the [data flow expression language](data-transformation-functions.md).
data-factory Data Factory Build Your First Pipeline Using Vs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-build-your-first-pipeline-using-vs.md
In this tutorial, you created an Azure Data Factory to process data by running H
3. Created two **datasets**, which describe input and output data for HDInsight Hive activity in the pipeline. 4. Created a **pipeline** with a **HDInsight Hive** activity.
-## Next Steps
+## Next steps
In this article, you have created a pipeline with a transformation activity (HDInsight Activity) that runs a Hive script on an on-demand HDInsight cluster. To see how to use a Copy Activity to copy data from an Azure Blob to Azure SQL, see [Tutorial: Copy data from an Azure blob to Azure SQL](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md). You can chain two activities (run one activity after another) by setting the output dataset of one activity as the input dataset of the other activity. See [Scheduling and execution in Data Factory](data-factory-scheduling-and-execution.md) for detailed information.
data-factory Data Factory Json Scripting Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-json-scripting-reference.md
You can specify the following properties in a .NET custom activity JSON definiti
For detailed information, see [Use custom activities in Data Factory](data-factory-use-custom-activities.md) article.
-## Next Steps
+## Next steps
See the following tutorials: - [Tutorial: create a pipeline with a copy activity](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md)
data-factory Data Factory On Premises Mongodb Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-on-premises-mongodb-connector.md
When copying data from relational data stores, keep repeatability in mind to avo
## Performance and Tuning See [Copy Activity Performance & Tuning Guide](data-factory-copy-activity-performance.md) to learn about key factors that impact performance of data movement (Copy Activity) in Azure Data Factory and various ways to optimize it.
-## Next Steps
+## Next steps
See [Move data between on-premises and cloud](data-factory-move-data-between-onprem-and-cloud.md) article for step-by-step instructions for creating a data pipeline that moves data from an on-premises data store to an Azure data store.
data-factory Data Factory Sftp Connector https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/v1/data-factory-sftp-connector.md
The pipeline contains a Copy Activity that is configured to use the input and ou
## Performance and Tuning See [Copy Activity Performance & Tuning Guide](data-factory-copy-activity-performance.md) to learn about key factors that impact performance of data movement (Copy Activity) in Azure Data Factory and various ways to optimize it.
-## Next Steps
+## Next steps
See the following articles: * [Copy Activity tutorial](data-factory-copy-data-from-azure-blob-storage-to-sql-database.md) for step-by-step instructions for creating a pipeline with a Copy Activity.
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly.
<tr><td>Map Data public preview</td><td>The Map Data preview enables business users to define column mapping and transformations to load Synapse Lake Databases<br><a href="../synapse-analytics/database-designer/overview-map-data.md">Learn more</a></td></tr>
-<tr><td>Multiple output destinations from Power Query</td><td>You can now map multiple output desintations from Power Query in Azure Data Factory for flexible ETL patterns for citizen data integrators.<br><a href="control-flow-power-query-activity.md#sink">Learn more</a></td></tr>
+<tr><td>Multiple output destinations from Power Query</td><td>You can now map multiple output destinations from Power Query in Azure Data Factory for flexible ETL patterns for citizen data integrators.<br><a href="control-flow-power-query-activity.md#sink">Learn more</a></td></tr>
<tr><td>External Call transformation support</td><td>Extend the functionality of Mapping Data Flows by using the External Call transformation. You can now add your own custom code as a REST endpoint or call a curated third party service row-by-row.<br><a href="data-flow-external-call.md">Learn more</a></td></tr>
This page is updated monthly, so revisit it regularly.
<table> <tr><td><b>Service Category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr> <tr><td><b>Data Movement</b></td><td>Get metadata driven data ingestion pipelines on ADF Copy Data Tool within 10 minutes (Public Preview)</td><td>With this, you can build large-scale data copy pipelines with metadata-driven approach on copy data tool(Public Preview) within 10 minutes.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/get-metadata-driven-data-ingestion-pipelines-on-adf-within-10/ba-p/2528219">Learn more</a></td></tr>
-<tr><td><b>Data Flow</b></td><td>New map functions added in data flow transformation functions</td><td>A new set of data flow transformation functions has been added to enable data engineers to easily generate, read, and update map data types and complex map structures.<br><a href="data-flow-expression-functions.md#map-functions">Learn more</a></td></tr>
+<tr><td><b>Data Flow</b></td><td>New map functions added in data flow transformation functions</td><td>A new set of data flow transformation functions has been added to enable data engineers to easily generate, read, and update map data types and complex map structures.<br><a href="data-flow-map-functions.md">Learn more</a></td></tr>
<tr><td><b>Integration Runtime</b></td><td>5 new regions available in Azure Data Factory Managed VNET (Public Preview)</td><td>These 5 new regions(China East2, China North2, US Gov Arizona, US Gov Texas, US Gov Virginia) are available in Azure Data Factory managed virtual network (Public Preview).<br><a href="managed-virtual-network-private-endpoint.md#azure-data-factory-managed-virtual-network-is-available-in-the-following-azure-regions">Learn more</a></td></tr> <tr><td rowspan=2><b>Developer Productivity</b></td><td>ADF homepage improvements</td><td>The Data Factory home page has been redesigned with better contrast and reflow capabilities. Additionally, a few sections have been introduced on the homepage to help you improve productivity in your data integration journey.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr> <tr><td>New landing page for Azure Data Factory Studio</td><td>The landing page for Data Factory blade in the Azure portal.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/the-new-and-refreshing-data-factory-home-page/ba-p/2515076">Learn more</a></td></tr>
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/overview.md
Title: What is Azure Event Grid? description: Send event data from a source to handlers with Azure Event Grid. Build event-based applications, and integrate with Azure services. Previously updated : 07/27/2021 Last updated : 02/04/2022 # What is Azure Event Grid?
This article provides an overview of Azure Event Grid. If you want to get starte
Currently, the following Azure services support sending events to Event Grid. For more information about a source in the list, select the link.
+- [Azure API Management](event-schema-api-management.md)
- [Azure App Configuration](event-schema-app-configuration.md)
+- [Azure App Service](event-schema-app-service.md)
- [Azure Blob Storage](event-schema-blob-storage.md)
+- [Azure Cache for Redis](event-schema-azure-cache.md)
- [Azure Communication Services](event-schema-communication-services.md) - [Azure Container Registry](event-schema-container-registry.md) - [Azure Event Hubs](event-schema-event-hubs.md)
+- [Azure FarmBeats](event-schema-farmbeats.md)
- [Azure IoT Hub](event-schema-iot-hub.md) - [Azure Key Vault](event-schema-key-vault.md)
+- [Azure Kubernetes Service (preview)](event-schema-aks.md)
- [Azure Machine Learning](event-schema-machine-learning.md) - [Azure Maps](event-schema-azure-maps.md) - [Azure Media Services](event-schema-media-services.md)-- [Azure Policy](./event-schema-policy.md)
+- [Azure Policy](event-schema-policy.md)
- [Azure resource groups](event-schema-resource-groups.md) - [Azure Service Bus](event-schema-service-bus.md) - [Azure SignalR](event-schema-azure-signalr.md) - [Azure subscriptions](event-schema-subscriptions.md)-- [Azure Cache for Redis](event-schema-azure-cache.md)-- [Azure Kubernetes Service (preview)](event-schema-aks.md)+ ## Event handlers
Azure Event Grid uses a pay-per-event pricing model, so you only pay for what yo
* [Stream big data into a data warehouse](event-grid-event-hubs-integration.md) A tutorial that uses Azure Functions to stream data from Event Hubs to Azure Synapse Analytics. * [Event Grid REST API reference](/rest/api/eventgrid)
- Provides reference content for managing Event Subscriptions, routing, and filtering.
+ Provides reference content for managing Event Subscriptions, routing, and filtering.
industrial-iot Overview What Is Industrial Iot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/overview-what-is-industrial-iot.md
An IoT Edge device is composed of an IoT Edge Runtime and IoT Edge Modules.
- **OPC Publisher**: The OPC Publisher module connects to OPC UA server systems and publishes JSON encoded telemetry data from these servers in OPC UA "Pub/Sub" format to Azure. The OPC Publisher can run in two modes: - In combination with and controlled by the Industrial-IoT cloud microservices (orchestrated mode) - Configured by a local configuration file to allow operation without any Industrial-IoT cloud microservice (standalone mode)-- **OPC Twin**: The OPC Twin module enables connection from the cloud to OPC UA server systems on the factory network. OPC Twin provides access to OPC UA server systems through REST APIs exposed by the Industrial-IoT cloud microservices.-- **Discovery**: The Discovery module works only in combination with the Industrial-IoT cloud microservices. The Discovery module implements OPC UA server system discovery and reports the results to the Industrial-IoT cloud microservices.
+- **OPC Twin**: The OPC Twin module enables connection from the cloud to OPC UA server systems on the factory network. OPC Twin provides access to OPC UA server systems through REST APIs exposed by the Industrial-IoT cloud microservices. In contrast to OPC Publisher, OPC Twin doesn't support standalone (module-only) operation; the OPC Twin module must work in combination with the Industrial-IoT cloud microservices.
+- **Discovery**: The Discovery module works only in combination with the Industrial-IoT cloud microservices. The Discovery module implements OPC UA server system discovery and reports the results to the Industrial-IoT cloud microservices. In contrast to OPC Publisher, the Discovery module doesn't support standalone (module-only) operation.
## Next steps
-Now that you have learned what Industrial IoT is, you can read more about the OPC Publisher or get started with deploying the IIoT Platform:
+You can read more about the OPC Publisher or get started with deploying the IIoT Platform:
> [!div class="nextstepaction"] > [What is the OPC Publisher?](overview-what-is-opc-publisher.md) > [!div class="nextstepaction"] > [Deploy the Industrial IoT Platform](tutorial-deploy-industrial-iot-platform.md)
->
+>
industrial-iot Tutorial Publisher Deploy Opc Publisher Standalone https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/industrial-iot/tutorial-publisher-deploy-opc-publisher-standalone.md
In this tutorial, you learn how to:
> * Run the latest released version of OPC Publisher as a container > * Specify Container Create Options in the Azure portal
-If you don't have an Azure subscription, create a free trial account
- ## Prerequisites
+- An Azure subscription must be created. If you don't have an Azure subscription, create a [free trial account](https://azure.microsoft.com/free/search/).
- An IoT Hub must be created - An IoT Edge device must be created - An IoT Edge device must be registered
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/overview-iot-central-operator.md
To manage individual devices, use device views to set device and cloud propertie
To manage devices in bulk, create and schedule jobs. Jobs can update properties and run commands on multiple devices. To learn more, see [Create and run a job in your Azure IoT Central application](howto-manage-devices-in-bulk.md).
-To manage IoT Edge devices, [create and edit deployment manifests](concepts-iot-edge.md#iot-edge-deployment-manifests-and-iot-central-device-templates) and deploy them onto the device directly from IoT Central. You can also run commands on modules from within IoT Central.
+To manage IoT Edge devices, you can use the IoT Central UI to [create and edit deployment manifests](concepts-iot-edge.md#iot-edge-deployment-manifests-and-iot-central-device-templates), and then deploy them to your IoT Edge devices. You can also run commands in IoT Edge modules from within IoT Central.
If your IoT Central application uses *organizations*, an administrator controls which devices you have access to.
iot-develop Quickstart Devkit Stm B L4s5i https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-develop/quickstart-devkit-stm-b-l4s5i.md
ms.devlang: c
Last updated 06/02/2021
+zone_pivot_groups: iot-develop-stm-toolset
+
+# Owner: timlt
+# - id: iot-develop-stm-toolset
+# Title: IoT Devices
+# prompt: Choose a build environment
+# pivots:
+# - id: iot-toolset-cmake
+# Title: CMake
+# - id: iot-toolset-iar-ewarm
+# Title: IAR EWARM
+ # Quickstart: Connect an STMicroelectronics B-L4S5I-IOT01A Discovery kit to IoT Central
**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br> **Total completion time**: 30 minutes
-[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/STMicroelectronics/B-L4S5I-IOT01A)
+[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/getting-started/tree/master/STMicroelectronics/)
+[![Browse code](media/common/browse-code.svg)](https://github.com/azure-rtos/samples/)
In this quickstart, you use Azure RTOS to connect the STMicroelectronics [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4S5i-iot01a.html) Discovery kit (from now on, the STM DevKit) to Azure IoT.
-You will complete the following tasks:
+You'll complete the following tasks:
* Install a set of embedded development tools for programming the STM DevKit in C * Build an image and flash it onto the STM DevKit * Use Azure IoT Central to create cloud components, view properties, view device telemetry, and call direct commands ## Prerequisites * A PC running Windows 10
The cloned repo contains a setup script that installs and configures the require
To install the tools:
-1. From File Explorer, navigate to the following path in the repo and run the setup script named *get-toolchain.bat*:
+1. From File Explorer, navigate to the following path in the repo and run the setup batch file named *get-toolchain.bat*.
*getting-started\tools\get-toolchain.bat*-
-1. After the installation, open a new console window to recognize the configuration changes made by the setup script. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
+1. After the installation, open a new console window to recognize the configuration changes made by the setup batch file. Use this console to complete the remaining programming tasks in the quickstart. You can use Windows CMD, PowerShell, or Git Bash for Windows.
1. Run the following code to confirm that CMake version 3.14 or later is installed. ```shell
To connect the STM DevKit to Azure, you'll modify a configuration file for Wi-Fi
### Add configuration
-1. Open the following file in a text editor:
+1. Open the following file in a text editor.
*getting-started\STMicroelectronics\B-L4S5I-IOT01A\app\azure_config.h*
To connect the STM DevKit to Azure, you'll modify a configuration file for Wi-Fi
|Constant name|Value| |-|--|
- |`WIFI_SSID` |{*Your Wi-Fi SSID*}|
- |`WIFI_PASSWORD` |{*Your Wi-Fi password*}|
- |`WIFI_MODE` |{*One of the enumerated Wi-Fi mode values in the file*}|
+ |`WIFI_SSID` |{*Use your Wi-Fi SSID*}|
+ |`WIFI_PASSWORD` |{*Use your Wi-Fi password*}|
+ |`WIFI_MODE` |{*Use one of the enumerated Wi-Fi mode values in the file*}|
1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources. |Constant name|Value| |-|--|
- |`IOT_DPS_ID_SCOPE` |{*Your ID scope value*}|
- |`IOT_DPS_REGISTRATION_ID` |{*Your Device ID value*}|
- |`IOT_DEVICE_SAS_KEY` |{*Your Primary key value*}|
+ |`IOT_DPS_ID_SCOPE` |{*Use your ID scope value*}|
+ |`IOT_DPS_REGISTRATION_ID` |{*Use your Device ID value*}|
+ |`IOT_DEVICE_SAS_KEY` |{*Use your Primary key value*}|
1. Save and close the file.
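Taken together, the settings above might look like the following hypothetical excerpt of *azure_config.h*. All values are placeholders, not working credentials, and the Wi-Fi mode name is an assumption; use the enumerated values defined in the file itself.

```c
/* Hypothetical excerpt of azure_config.h -- all values are placeholders */

/* Wi-Fi settings */
#define WIFI_SSID                 "myWifiNetwork"
#define WIFI_PASSWORD             "myWifiPassword"
#define WIFI_MODE                 WPA2_PSK_AES  /* one of the enumerated modes in the file */

/* Azure IoT device settings, from the resources you created earlier */
#define IOT_DPS_ID_SCOPE          "0ne000FFA42"
#define IOT_DPS_REGISTRATION_ID   "mydevice"
#define IOT_DEVICE_SAS_KEY        "replace-with-your-primary-key"
```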
To connect the STM DevKit to Azure, you'll modify a configuration file for Wi-Fi
### Flash the image
-1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You will refer to these items in the next steps. All of them are highlighted in the following picture:
+1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You'll refer to these items in the next steps. All of them are highlighted in the following picture:
- :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/stm-board.png" alt-text="Locate key components on the STM DevKit board":::
+   :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/stm-b-l4s5i.png" alt-text="Locate key components on the STM DevKit board":::
1. Connect the Micro USB cable to the **USB STLINK** port on the STM DevKit, and then connect it to your computer.
To connect the STM DevKit to Azure, you'll modify a configuration file for Wi-Fi
1. Copy the binary file named *stm32l4s5_azure_iot.bin*.
-1. In File Explorer, find the STM Devkit that's connected to your computer. The device appears as a drive on your system with the drive label **DIS_LS5VI**.
+1. In File Explorer, find the STM Devkit that's connected to your computer. The device appears as a drive on your system.
1. Paste the binary file into the root folder of the STM Devkit. Flashing starts automatically and completes in a few seconds.
You can use the **Termite** app to monitor communication and confirm that your d
> [!IMPORTANT] > If the DNS client initialization fails and notifies you that the Wi-Fi firmware is out of date, you'll need to update the Wi-Fi module firmware. Download and install the [Inventek ISM 43362 Wi-Fi module firmware update](https://www.st.com/resource/en/utilities/inventek_fw_updater.zip). Then press the **Reset** button on the device to recheck your connection, and continue with this quickstart. - Keep Termite open to monitor device output in the following steps. +
+## Prerequisites
+
+* A PC running Windows 10
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Hardware
+
+ * The [B-L4S5I-IOT01A](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html) (STM DevKit)
+ * Wi-Fi 2.4 GHz
+ * USB 2.0 A male to Micro USB male cable
+
+* IAR Embedded Workbench for ARM (IAR EW). You can download and install a [14-day free trial of IAR EW for ARM](https://www.iar.com/products/architectures/arm/iar-embedded-workbench-for-arm/).
+
+* Download the STMicroelectronics B-L4S5I-IOT01A IAR sample from [Azure RTOS samples](https://github.com/azure-rtos/samples/), and unzip it to a working directory. Choose a directory with a short path to avoid compiler errors when you build.
+++
+## Prepare the device
+
+To connect the device to Azure, you'll modify a configuration file for Azure IoT settings and IAR settings for Wi-Fi. Then you'll build and flash the image to the device.
+
+### Add configuration
+
+1. Open the **azure_rtos.eww** EWARM Workspace in IAR from the extracted zip file.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/ewarm-workspace-in-iar.png" alt-text="EWARM workspace in IAR":::
++
+1. Expand the project, then expand the **Sample** subfolder and open the *sample_config.h* file.
+
+1. Near the top of the file, uncomment the `#define ENABLE_DPS_SAMPLE` directive.
+
+ ```c
+ #define ENABLE_DPS_SAMPLE
+ ```
+
+1. Set the Azure IoT device information constants to the values that you saved after you created Azure resources. The `ENDPOINT` constant is set to the global endpoint for Azure Device Provisioning Service (DPS).
+
+ |Constant name|Value|
+ |-|--|
+ |`ENDPOINT`| {*Use this value: "global.azure-devices-provisioning.net"*}|
+ |`REGISTRATION_ID`| {*Use your Device ID value*}|
+ |`ID_SCOPE`| {*Use your ID scope value*}|
+ |`DEVICE_SYMMETRIC_KEY`| {*Use your Primary key value*}|
+
+ > [!NOTE]
+ > The `ENDPOINT`, `DEVICE_ID`, `ID_SCOPE`, and `DEVICE_SYMMETRIC_KEY` values are set in a `#ifndef ENABLE_DPS_SAMPLE` statement. Make sure you set the values in the `#else` statement, which will be used when the `ENABLE_DPS_SAMPLE` value is defined.
+
+1. Save the file.
+
+1. Select the **sample_azure_iot_embedded_sdk_pnp** project, right-click it in the left **Workspace** pane, and select **Set as active**.
+1. Right-click the active project, and select **Options > C/C++ Compiler > Preprocessor**. Replace the values with your Wi-Fi settings.
+
+ |Symbol name|Value|
+ |--|--|
+ |`WIFI_SSID` |{*Use your Wi-Fi SSID*}|
+ |`WIFI_PASSWORD` |{*Use your Wi-Fi password*}|
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/options-for-node-sample.png" alt-text="Options for node sample":::
+
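The note about the `#ifndef ENABLE_DPS_SAMPLE` guard can be sketched as follows. This is a hypothetical outline of *sample_config.h*, not the file's exact contents, and the placeholder values aren't real credentials:

```c
#define ENABLE_DPS_SAMPLE

#ifndef ENABLE_DPS_SAMPLE
/* Direct IoT Hub connection settings -- not used when DPS is enabled */
#else
/* Because ENABLE_DPS_SAMPLE is defined, set your values in this branch */
#define ENDPOINT              "global.azure-devices-provisioning.net"
#define ID_SCOPE              "0ne000FFA42"
#define REGISTRATION_ID       "mydevice"
#define DEVICE_SYMMETRIC_KEY  "replace-with-your-primary-key"
#endif
```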
+### Build the project
+
+In IAR, select **Project > Batch Build**, choose **build_all**, and select **Make** to build all projects. You'll observe compilation and linking of all the sample projects.
+
+### Flash the image
+
+1. On the STM DevKit MCU, locate the **Reset** button (1), the Micro USB port (2), which is labeled **USB STLink**, and the board part number (3). You'll refer to these items in the next steps. All of them are highlighted in the following picture.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/stm-b-l4s5i.png" alt-text="Locate key components on the STM DevKit board":::
+
+1. Connect the Micro USB cable to the **USB STLINK** port on the STM DevKit, and then connect it to your computer.
+
+ > [!NOTE]
+ > For detailed setup information about the STM DevKit, see the instructions on the packaging, or see [B-L4S5I-IOT01A Resources](https://www.st.com/en/evaluation-tools/b-l4s5i-iot01a.html#resource).
+
+1. In IAR, press the green **Download and Debug** button in the toolbar to download the program and run it. Then press ***Go***.
+1. Check the Terminal I/O to verify that messages have been successfully sent to the Azure IoT hub.
+
+ As the project runs, the demo displays the status information to the Terminal IO window (**View > Terminal I/O**). The demo also publishes the message to IoT Hub every few seconds.
+
+ > [!NOTE]
+ > The terminal output content varies depending on which sample you choose to build and run.
+
+### Confirm device connection details
+
+In the terminal window, you should see output like the following, which verifies that the device is initialized and connected to Azure IoT.
+
+```output
+STM32L4XX Lib:
+> CMSIS Device Version: 1.7.0.0.
+> HAL Driver Version: 1.12.0.0.
+> BSP Driver Version: 1.0.0.0.
+ES-WIFI Firmware:
+> Product Name: Inventek eS-WiFi
+> Product ID: ISM43362-M3G-L44-SPI
+> Firmware Version: C3.5.2.5.STM
+> API Version: v3.5.2
+ES-WIFI MAC Address: C4:7F:51:7:D7:73
+wifi connect try 1 times
+ES-WIFI Connected.
+> ES-WIFI IP Address: 10.0.0.228
+> ES-WIFI Gateway Address: 10.0.0.1
+> ES-WIFI DNS1 Address: 75.75.75.75
+> ES-WIFI DNS2 Address: 75.75.76.76
+IP address: 10.0.0.228
+Mask: 255.255.255.0
+Gateway: 10.0.0.1
+DNS Server address: 1.1.1.1
+SNTP Time Sync...0.pool.ntp.org
+SNTP Time Sync successfully.
+[INFO] Azure IoT Security Module has been enabled, status=0
+Start Provisioning Client...
+Registered Device Successfully.
+IoTHub Host Name: iotc-14c961cd-1779-4d1c-8739-5d2b9afa5b84.azure-devices.net; Device ID: mydevice.
+Connected to IoTHub.
+Sent properties request.
+Telemetry message send: {"temperature":22}.
+Received all properties
+Telemetry message send: {"temperature":22}.
+Telemetry message send: {"temperature":22}.
+Telemetry message send: {"temperature":22}.
+Telemetry message send: {"temperature":22}.
+```
+
+Keep the terminal open to monitor device output in the following steps.
+
+## Verify the device status
+
+To view the device status in IoT Central portal:
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Confirm that the **Device status** is updated to **Provisioned**.
+1. Confirm that the **Device template** is updated to **Thermostat**.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/iot-central-device-view-status-iar.png" alt-text="Screenshot of device status in IoT Central":::
+
+## View telemetry
+
+With IoT Central, you can view the flow of telemetry from your device to the cloud.
+
+To view telemetry in IoT Central portal:
+
+1. From the application dashboard, select **Devices** on the side navigation menu.
+1. Select the device from the device list.
+1. View the telemetry as the device sends messages to the cloud in the **Overview** tab.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/iot-central-device-telemetry-iar.png" alt-text="Screenshot of device telemetry in IoT Central":::
+
+ > [!NOTE]
+ > You can also monitor telemetry from the device by using the Termite app.
++
+## Call a direct method on the device
+
+You can also use IoT Central to call a direct method that you have implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout.
+
+To call a method in IoT Central portal:
+
+1. Select the **Command** tab from the device page.
+1. In the **Since** field, use the date picker and time selectors to set a time, then select **Run**.
+
+ :::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/iot-central-invoke-method-iar.png" alt-text="Screenshot of calling a direct method on a device in IoT Central":::
+
+## View device information
+
+You can view the device information from IoT Central.
+
+Select the **About** tab from the device page.
+++ ## Verify the device status To view the device status in IoT Central portal:
To call a method in IoT Central portal:
:::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/iot-central-invoke-method.png" alt-text="Screenshot of calling a direct method on a device in IoT Central":::
-1. In the **State** dropdown, select **False**, and then select **Run**. The LED light should turn off.
- ## View device information You can view the device information from IoT Central.
-Select **About** tab from the device page.
+Select the **About** tab from the device page.
:::image type="content" source="media/quickstart-devkit-stm-b-l4s5i/iot-central-device-about.png" alt-text="Screenshot of device information in IoT Central"::: + ## Troubleshoot and debug If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md). For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
+For help with debugging the application, see the selections under **Help** in **IAR EW for ARM**.
## Clean up resources
To remove the entire Azure IoT Central sample application and all its devices an
## Next steps
-In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You also used the IoT Central portal to create Azure resources, connect the STM DevKit securely to Azure, view telemetry, and send messages.
+In this quickstart, you built a custom image that contains Azure RTOS sample code, and then flashed the image to the STM DevKit device. You also used the IoT Central portal to create Azure resources, connect the STM DevKit securely to Azure, view customer content, and send messages.
As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
As a next step, explore the following articles to learn more about using the IoT
> [!div class="nextstepaction"] > [Connect a simulated device to IoT Hub](quickstart-send-telemetry-iot-hub.md) - > [!IMPORTANT] > Azure RTOS provides OEMs with components to secure communication and to create code and data isolation using underlying MCU/MPU hardware protection mechanisms. However, each OEM is ultimately responsible for ensuring that their device meets evolving security requirements.
iot-dps About Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/about-iot-dps.md
DPS is available in many regions. The updated list of existing and newly announc
> [!NOTE] > DPS is global and not bound to a location. However, you must specify a region in which the metadata associated with your DPS profile will reside.
-## High availability
-
-There is a 99.9% Service Level Agreement for DPS, and you can [read the SLA](https://azure.microsoft.com/support/legal/sla/iot-hub/). The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/) explains the guaranteed availability of Azure as a whole.
-
-DPS also supports [Availability Zones](../availability-zones/az-overview.md). An Availability Zone is a high-availability offering that protects your applications and data from datacenter failures. A region with Availability Zone support is comprised of a minimum of three zones supporting that region. Each zone provides one or more datacenters each in a unique physical location with independent power, cooling, and networking. This provides replication and redundancy within the region. Availability Zone support for DPS is enabled automatically for DPS resources in the following Azure regions:
-
-* Australia East
-* Brazil South
-* Canada Central
-* Central US
-* East US
-* East US 2
-* Japan East
-* North Europe
-* UK South
-* West Europe
-* West US 2
- ## Quotas and Limits Each Azure subscription has default quota limits in place that could impact the scope of your IoT solution. The current limit on a per-subscription basis is 10 Device Provisioning Services per subscription.
iot-dps Iot Dps Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-dps/iot-dps-ha-dr.md
+
+ Title: Azure IoT Hub Device Provisioning Service high availability and disaster recovery | Microsoft Docs
+description: Describes the Azure and Device Provisioning Service features that help you to build highly available Azure IoT solutions with disaster recovery capabilities.
++++ Last updated : 02/04/2022++++
+# IoT Hub Device Provisioning Service high availability and disaster recovery
+
+Device Provisioning Service (DPS) is a helper service for IoT Hub that enables zero-touch device provisioning at scale. DPS is an important part of your IoT solution. This article describes the High Availability (HA) and Disaster Recovery (DR) capabilities that DPS provides. To learn more about how to achieve HA-DR across your entire IoT solution, see [Disaster recovery and high availability for Azure applications](/azure/architecture/reliability/disaster-recovery). To learn about HA-DR in IoT Hub, see [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md).
+
+## High availability
+
+DPS is a highly available service; for details, see the [SLA for Azure IoT Hub](https://azure.microsoft.com/support/legal/sla/iot-hub/). The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/) explains the guaranteed availability of Azure as a whole.
+
+DPS also supports [Availability Zones](../availability-zones/az-overview.md). An Availability Zone is a high-availability offering that protects your applications and data from datacenter failures. A region with Availability Zone support consists of a minimum of three zones supporting that region. Each zone provides one or more datacenters, each in a unique physical location with independent power, cooling, and networking. This provides replication and redundancy within the region. Availability Zone support for DPS is enabled automatically for DPS resources in the following Azure regions:
+
+* Australia East
+* Brazil South
+* Canada Central
+* Central US
+* East US
+* East US 2
+* Japan East
+* North Europe
+* UK South
+* West Europe
+* West US 2
+
+You don't need to take any action to use availability zones in supported regions. Your DPS instances are AZ-enabled by default. We recommend deploying to regions where Availability Zones are supported to take advantage of this redundancy.
+
+## Disaster recovery and Microsoft-initiated failover
+
+DPS leverages [paired regions](/azure/availability-zones/cross-region-replication-azure) to enable automatic failover. Microsoft-initiated failover is exercised by Microsoft in rare situations when an entire region goes down, to fail over all the DPS instances from the affected region to its corresponding paired region. This process is a default option (there is no way for users to opt out) and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve user consent before the user's DPS instance is failed over.
+
+## Disable disaster recovery
+
+By default, DPS provides automatic failover by replicating data to the [paired region](/azure/availability-zones/cross-region-replication-azure) for a DPS instance. For some regions, you can avoid data replication outside of the region by disabling disaster recovery when creating a DPS instance. The following regions support this feature:
+
+* **Brazil South**; paired region, South Central US.
+* **Southeast Asia (Singapore)**; paired region, East Asia (Hong Kong).
+
+To disable disaster recovery in supported regions, make sure that **Disaster recovery enabled** is unselected when you create your DPS instance:
++
+You can also disable disaster recovery when you create a DPS instance using an [ARM template](/azure/templates/microsoft.devices/provisioningservices?tabs=bicep).
+
+Failover capability will not be available if you disable disaster recovery for a DPS instance.
+
+You can check whether disaster recovery is disabled from the **Overview** page of your DPS instance in Azure portal:
+
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-edge/iot-edge-limits-and-restrictions.md
+
+ Title: Limits and restrictions - Azure IoT Edge | Microsoft Docs
+description: Description of the limits and restrictions when using IoT Edge.
++ Last updated : 01/28/2022+++++
+# Understand Azure IoT Edge limits and restrictions
++
+This article explains the limits and restrictions when using IoT Edge.
+
+## Limits
+### Number of children in gateway hierarchy
+IoT Edge gateway hierarchies have a default limit of up to 100 connected child devices. This limit can be changed by setting the **MaxConnectedClients** environment variable in the parent device's edgeHub module.
+
+For more information, see [Create a gateway hierarchy](how-to-connect-downstream-iot-edge-device.md#create-a-gateway-hierarchy).
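As a sketch, the **MaxConnectedClients** override goes in the `env` section of the edgeHub system module in the parent device's deployment manifest. The limit value and image tag below are assumptions, not recommended settings:

```json
{
  "systemModules": {
    "edgeHub": {
      "type": "docker",
      "settings": {
        "image": "mcr.microsoft.com/azureiotedge-hub:1.2"
      },
      "env": {
        "MaxConnectedClients": { "value": "150" }
      }
    }
  }
}
```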
+
+### Size of desired properties
+IoT Hub enforces the following restrictions:
+* An 8-KB size limit on the value of tags.
+* A 32-KB size limit on both the value of `properties/desired` and `properties/reported`.
+
+For more information, see [Module twin size](../iot-hub/iot-hub-devguide-module-twins.md#module-twin-size).
+
+### Number of nested hierarchy layers
+An IoT Edge device has a limit of five layers of IoT Edge devices linked as children below it.
+
+For more information, see [Parent and child relationships](iot-edge-as-gateway.md#cloud-identities).
+
+### Number of modules in a deployment
+IoT Hub has the following restrictions for IoT Edge automatic deployments:
+* 50 modules per deployment
+ * This limit is superseded by the IoT Hub 32-KB module twin size limit. For more information, see [Be mindful of twin size limits when using custom modules](production-checklist.md#be-mindful-of-twin-size-limits-when-using-custom-modules).
+* 100 deployments (including layered deployments per paid SKU hub)
+* 10 deployments per free SKU hub
+
+## Restrictions
+### Certificates
+IoT Edge certificates have the following restrictions:
+* The common name (CN) can't be the same as the "hostname" that will be used in the configuration file on the IoT Edge device.
+* The name used by clients to connect to IoT Edge can't be the same as the common name used in the edge CA certificate.
+
+For more information, see [Certificates for device security](iot-edge-certs.md#production-implications).
+
+### TPM attestation
+When using TPM attestation with the device provisioning service, you need to use TPM 2.0.
+
+For more information, see [TPM attestation device requirements](how-to-provision-devices-at-scale-linux-tpm.md#device-requirements).
+
+### Routing syntax
+IoT Edge and IoT Hub routing syntax is almost identical.
+Supported query syntax:
+* [Message routing query based on message properties](../iot-hub/iot-hub-devguide-routing-query-syntax.md#message-routing-query-based-on-message-properties)
+* [Message routing query based on message body](../iot-hub/iot-hub-devguide-routing-query-syntax.md#message-routing-query-based-on-message-body)
+
+Not supported query syntax:
+* [Message routing query based on device twin](../iot-hub/iot-hub-devguide-routing-query-syntax.md#message-routing-query-based-on-device-twin)
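For example, an IoT Edge route whose condition queries a message property is supported, while a condition on `$twin` is not. The module name and property in this sketch are hypothetical:

```
FROM /messages/modules/tempSensor/* WHERE processingPath = 'hot' INTO $upstream
```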
+
+### File upload
+IoT Hub only supports file upload APIs for device identities, not module identities. Since IoT Edge exclusively uses modules, file upload isn't natively supported in IoT Edge.
+
+For more information on uploading files with IoT Hub, see [Upload files with IoT Hub](../iot-hub/iot-hub-devguide-file-upload.md).
+
+## Next steps
+For more information, see [IoT Hub other limits](../iot-hub/iot-hub-devguide-quotas-throttling.md#other-limits).
iot-fundamentals Iot Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-introduction.md
For an in-depth discussion of IoT architecture, see the [Microsoft Azure IoT Ref
## Next steps
+To learn about the different solution models and how you should get started, see [What's the difference between the aPaaS and PaaS solution offerings?](iot-solution-apaas-paas.md).
+ For some actual business cases and the architecture used, see the [Microsoft Azure IoT Technical Case Studies](https://microsoft.github.io/techcasestudies/#technology=IoT&sortBy=featured). For some sample projects that you can try out with an IoT DevKit, see the [IoT DevKit Project Catalog](https://microsoft.github.io/azure-iot-developer-kit/docs/projects/).
iot-fundamentals Iot Solution Apaas Paas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/iot-solution-apaas-paas.md
+
+ Title: Azure IoT aPaaS and PaaS solution options
+description: Explains why it's a good idea to start your IoT journey with IoT Central
++++ Last updated : 02/03/2022+++
+# What's the difference between aPaaS and PaaS solution offerings?
+
+IoT solutions require a combination of technologies to effectively connect devices, events, and actions to cloud applications. Microsoft provides open-source [Device SDKs](../iot-develop/about-iot-sdks.md) that you can use to build the apps that run on your devices. However, there are many options for building and deploying your IoT cloud solutions. The technologies and services you use depend on your scenario's development, deployment, and management needs.
+
+## Understand the difference between PaaS and aPaaS solutions
+
+Microsoft enables you to create an IoT solution either by composing individual PaaS services or by using an aPaaS IoT solution platform.
+
+- Platform as a service (PaaS) is a cloud computing model in which you tailor Azure hardware and software tools to a specific task or job function. With PaaS services, you're responsible for scaling and configuration, but the underlying infrastructure as a service (IaaS) is taken care of for you.
+- Application platform as a service (aPaaS) provides a cloud environment to build, manage, and deliver applications to customers. aPaaS offerings take care of scaling and most of the configuration, but they still require developer input to build out a finished solution.
+
+## Start with Azure IoT Central (aPaaS)
+
+Using an aPaaS environment streamlines many of the complex decisions you face when building an IoT solution. Azure IoT Central is designed to simplify and accelerate IoT solution assembly and operation. It pre-assembles PaaS components into an extensible and fully managed app development platform hosted in Azure. aPaaS takes much of the guesswork and complexity out of building reliable, scalable, and secure IoT applications.
+
+An out-of-the box web UX and API surface area make it simple to monitor device conditions, create rules, and manage millions of devices and their data remotely throughout their life cycles. Furthermore, it enables you to act on device insights by extending IoT intelligence into line-of-business applications. Azure IoT Central also offers built-in disaster recovery, multitenancy, global availability, and a predictable cost structure.
++
+## Building with Azure PaaS Services
+
+In some scenarios, you may need a higher degree of control and customization than Azure IoT Central provides. In these cases, Azure also offers individual platform as a service (PaaS) cloud services that you can use to build a custom IoT solution. For example, you might build a solution using a combination of these PaaS services:
+
+- Azure IoT Device Provisioning Service and Azure IoT Hub for automated device provisioning, device connectivity, and management
+- Azure Time Series Insights for storing and analyzing warm and cold path time series data from IoT devices
+- Azure Stream Analytics for analyzing hot path data from IoT devices
+- Azure IoT Edge for running AI, third-party services, or your own business logic on IoT Edge devices
+
+## Comparing approaches
+
+Use the following table to help decide if you can use an aPaaS solution based on Azure IoT Central, or if you should consider building a PaaS solution that uses Azure IoT Hub and other Azure PaaS components.
+
+| &nbsp; | Azure IoT Central (aPaaS) | Azure IoT Hub (PaaS) plus stream processing, data storage, and access control services |
+|---|---|---|
+| Type of service | Fully managed aPaaS solution. It simplifies device connectivity and management at scale so that you can focus time and resources on using IoT for business transformation. This simplicity comes with a tradeoff: an aPaaS-based solution is less customizable than a PaaS-based solution. | Managed PaaS back-end solution that acts as a central message hub between your IoT application and the devices it manages. You can add functionality by using other Azure PaaS services. This approach provides great flexibility but requires more development and management effort to build and operate your solution. |
+| Application templates | [Application templates](../iot-central/core/overview-iot-central-admin.md#create-applications) in Azure IoT Central help solution builders kick-start IoT solution development. You can get started with a generic application template, or use a prebuilt industry-focused application template for [retail](../iot-central/retail/tutorial-in-store-analytics-create-app.md), [energy](../iot-central/energy/tutorial-smart-meter-app.md), [government](../iot-central/government/tutorial-connected-waste-management.md), or [healthcare](../iot-central/healthcare/tutorial-continuous-patient-monitoring.md). | Not supported. You design and build your own solution using Azure IoT Hub and other PaaS services. |
+| Device management | Provides seamless [device provisioning and lifecycle management experiences](../iot-central/core/overview-iot-central.md#manage-your-devices). Includes built-in device management capabilities such as *jobs*, *connectivity status*, *raw data view*, and the Device Provisioning Service (DPS). | No built-in experience. You design and build your own solutions using Azure IoT Hub primitives, such as *device twins* and *direct methods*. DPS must be enabled separately. |
+| Scalability| Supports autoscaling. | There's no built-in mechanism for automatically scaling an IoT Hub. You need to deploy other solutions to enable autoscaling. See [Autoscale your Azure IoT Hub](/samples/azure-samples/iot-hub-dotnet-autoscale/iot-hub-dotnet-autoscale/). |
+| Message retention | Retains data on a rolling, 30-day basis. You can continuously export data using the [export feature](../iot-central/core/howto-export-data.md). | Enables data retention in the built-in Event Hubs service for a maximum of seven days. |
+| Visualizations | IoT Central has a UX that makes it simple to visualize device data, perform analytics queries, and create custom dashboards. See: [What is Azure IoT Central?](../iot-central/core/overview-iot-central.md#dashboards) | You design and build your own visualizations with your choice of technologies. |
+| OPC UA protocol | Not currently supported. | OPC Publisher is a Microsoft-supported open-source product that bridges the gap between industrial assets and Azure hosted resources. It connects to OPC UA-enabled assets or industrial connectivity software and publishes telemetry data to [Azure IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) in various formats, including IEC62541 OPC UA PubSub standard format. See: [Microsoft OPC Publisher](https://github.com/Azure/iot-edge-opc-publisher). |
+| Pricing | The first two active devices within an IoT Central application are free, if their message volume doesn't exceed the threshold: 800 messages with the *Standard Tier 0 plan*, 10,000 messages with the *Standard Tier 1 plan*, or 60,000 messages with the *Standard Tier 2 plan* per month. Volumes that exceed those thresholds incur overage charges. With more than two active devices, device pricing is prorated monthly. For each hour during the billing period, the highest number of active devices is counted and billed. See: [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/). | See: [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub/). |
+| Analytics, insights, and actions | Integrated analytics experience targeted at exploration of device data in the context of device management. | To incorporate analytics, insights, and actions, use separate Azure PaaS services such as Azure Stream Analytics, Time Series Insights, Azure Data Explorer, and Azure Synapse. |
+| Big data management | You can manage data from within Azure IoT Central itself. | You need to add and manage big data Azure PaaS services as part of your solution. |
+| High availability and disaster recovery | High availability and disaster recovery capabilities are built in to Azure IoT Central and managed for you automatically. See: [Best practices for device development in Azure IoT Central](../iot-central/core/concepts-best-practices.md). | Can be configured to support multiple high availability and disaster recovery scenarios. See: [Azure IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md). |
+| SLA | See: [SLA for Azure IoT Central](https://azure.microsoft.com/support/legal/sla/iot-central/).| See: [SLA for Azure IoT Hub](https://azure.microsoft.com/support/legal/sla/iot-hub/v1_2/). |
+| Device templates | Lets you centrally define and manage device templates that help structure the characteristics and behaviors of different device types. Device templates support device management tasks and visualizations. | Requires users to create their own repository to define and manage device templates. |
+| Data export | Provides data export to Azure blob storage, Azure Event Hubs, Azure Service Bus, webhooks, and Azure Data Explorer. Capabilities include filtering, enriching, and transforming messages on egress. | Provides a built-in Event Hubs compatible endpoint and can also use message routing to export data to other locations. |
+| Multi-tenancy | IoT Central [organizations](../iot-central/core/howto-create-organizations.md) enable in-app multi-tenancy. You can define a hierarchy to manage which users can see which devices in your IoT Central application. | Not supported natively. You can achieve tenancy by using separate hubs per customer, or by building access control into the data layer of your solution. |
+| Rules and actions | Provides built-in rules and actions processing. Actions include email notifications, Azure Monitor group, Power Automate, and webhook actions. See: [IoT Central rules and actions](../iot-central/core/overview-iot-central.md#rules-and-actions). | Data coming from IoT Hub can be sent to Azure Stream Analytics, Azure Time Series Insights, or Azure Event Grid. From those services, you can connect to Azure Logic Apps or other custom applications to handle rules and actions processing. See: [IoT remote monitoring and notifications with Azure Logic Apps](../iot-hub/iot-hub-monitoring-notifications-with-azure-logic-apps.md). |
+| SigFox/LoRaWAN protocols | Uses IoT Central Device Bridge. See: [Azure IoT Central Device Bridge](https://github.com/Azure/iotc-device-bridge#azure-iot-central-device-bridge). | Requires you to write a custom module for Azure IoT Edge and integrate it with Azure IoT Hub. |
+
+## Next steps
+
+Now that you've learned about the difference between aPaaS and PaaS offerings in Azure IoT, a suggested next step is to read our FAQ on why to start with IoT Central.
iot-hub-device-update Device Update Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub-device-update/device-update-error-codes.md
The following table lists error codes pertaining to the content service componen
| "UpdateVersionCompatibility" | Cannot import additional update version with the specified compatibility. | Same as for UpdateProviderCompatibility.ContentLimitNamespaceCompatibility. |
| "CannotProcessUpdateFile" | Error processing source file. | |
| "ContentFileCannotDownload" | Cannot download source file. | Check to make sure the URL for the update file(s) is still valid. |
+| "SourceFileMalwareDetected" | A known malware signature was detected in a file being imported. | Content imported into Device Update for IoT Hub is scanned for malware by several different mechanisms. If a known malware signature is identified, the import will fail and a unique error message will be returned. The error message contains the description of the malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware. <br><br>Once you have removed the malware from any files being imported, you can start the import process again. |
+| "SourceFilePendingMalwareAnalysis" | A signature was detected in a file being imported that may indicate malware is present. | Content imported into Device Update for IoT Hub is scanned for malware by several different mechanisms. If a signature is identified that is not an exact match for known malware, but has characteristics of malware, the import will fail and a unique error message will be returned. The error message contains the description of the suspected malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware. <br><br>Once you have removed the malware from any files being imported, you can start the import process again. If you are certain your files are free of malware and continue to see this error, use the [Contact Microsoft Support](troubleshoot-device-update.md#contact) process. |
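When either malware error is returned, the message includes a hash for each flagged file, which you can compare against a digest of your local copy to identify exactly which file was flagged. The snippet below is a minimal sketch using only the Python standard library; it assumes the reported hash is SHA-256, so confirm the algorithm against the actual error message before relying on it.

```python
import hashlib

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 8 KiB chunks so large update files aren't loaded into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the hash reported in the import error message,
# e.g.: file_sha256("update.bin") == reported_hash  (reported_hash is hypothetical)
```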
**[Next Step: Troubleshoot issues with Device Update](./troubleshoot-device-update.md)**
iot-hub Iot Hub Bulk Identity Mgmt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-bulk-identity-mgmt.md
Use the optional **importMode** property in the import serialization data for ea
| importMode | Description |
| --- | --- |
-| **createOrUpdate** |If a device does not exist with the specified **ID**, it is newly registered. <br/>If the device already exists, existing information is overwritten with the provided input data without regard to the **ETag** value. <br> The user can optionally specify twin data along with the device data. The twin's etag, if specified, is processed independently from the device's etag. If there is a mismatch with the existing twin's etag, an error is written to the log file. |
-| **create** |If a device does not exist with the specified **ID**, it is newly registered. <br/>If the device already exists, an error is written to the log file. <br> The user can optionally specify twin data along with the device data. The twin's etag, if specified, is processed independently from the device's etag. If there is a mismatch with the existing twin's etag, an error is written to the log file. |
-| **update** |If a device already exists with the specified **ID**, existing information is overwritten with the provided input data without regard to the **ETag** value. <br/>If the device does not exist, an error is written to the log file. |
-| **updateIfMatchETag** |If a device already exists with the specified **ID**, existing information is overwritten with the provided input data only if there is an **ETag** match. <br/>If the device does not exist, an error is written to the log file. <br/>If there is an **ETag** mismatch, an error is written to the log file. |
-| **createOrUpdateIfMatchETag** |If a device does not exist with the specified **ID**, it is newly registered. <br/>If the device already exists, existing information is overwritten with the provided input data only if there is an **ETag** match. <br/>If there is an **ETag** mismatch, an error is written to the log file. <br> The user can optionally specify twin data along with the device data. The twin's etag, if specified, is processed independently from the device's etag. If there is a mismatch with the existing twin's etag, an error is written to the log file. |
-| **delete** |If a device already exists with the specified **ID**, it is deleted without regard to the **ETag** value. <br/>If the device does not exist, an error is written to the log file. |
-| **deleteIfMatchETag** |If a device already exists with the specified **ID**, it is deleted only if there is an **ETag** match. If the device does not exist, an error is written to the log file. <br/>If there is an ETag mismatch, an error is written to the log file. |
+| **Create** |If a device does not exist with the specified **ID**, it is newly registered. If the device already exists, an error is written to the log file. |
+| **CreateOrUpdate** |If a device does not exist with the specified **ID**, it is newly registered. If the device already exists, existing information is overwritten with the provided input data without regard to the **ETag** value. |
+| **CreateOrUpdateIfMatchETag** |If a device does not exist with the specified **ID**, it is newly registered. If the device already exists, existing information is overwritten with the provided input data only if there is an **ETag** match. If there is an **ETag** mismatch, an error is written to the log file. |
+| **Delete** |If a device already exists with the specified **ID**, it is deleted without regard to the **ETag** value. If the device does not exist, an error is written to the log file. |
+| **DeleteIfMatchETag** |If a device already exists with the specified **ID**, it is deleted only if there is an **ETag** match. If the device does not exist, an error is written to the log file. If there is an ETag mismatch, an error is written to the log file. |
+| **Update** |If a device already exists with the specified **ID**, existing information is overwritten with the provided input data without regard to the **ETag** value. If the device does not exist, an error is written to the log file. |
+| **UpdateIfMatchETag** |If a device already exists with the specified **ID**, existing information is overwritten with the provided input data only if there is an **ETag** match. If the device does not exist or there is an **ETag** mismatch, an error is written to the log file. |
+| **UpdateTwin** |If a twin already exists with the specified **ID**, existing information is overwritten with the provided input data without regard to the twin's **ETag** value. |
+| **UpdateTwinIfMatchETag** |If a twin already exists with the specified **ID**, existing information is overwritten with the provided input data only if there is a match on the twin's **ETag** value. The twin's **ETag** is processed independently from the device's **ETag**. If there is a mismatch with the existing twin's **ETag**, an error is written to the log file. |
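The import data itself is a blob of newline-delimited JSON, with one device object per line and the **importMode** set per device. The sketch below builds such lines with Python's standard library; the field names shown (`id`, `importMode`, `status`) follow the export format described here, but verify them against the ExportImportDevice schema for your SDK version before using them.

```python
import json

def device_import_line(device_id: str, import_mode: str = "createOrUpdate") -> str:
    """Serialize one device entry for an IoT Hub bulk-import blob.

    Field names follow the documented export format; confirm against the
    ExportImportDevice schema before using in production.
    """
    entry = {
        "id": device_id,
        "importMode": import_mode,
        "status": "enabled",
    }
    return json.dumps(entry)

# One JSON object per line in the import blob:
blob_contents = "\n".join(device_import_line(f"device-{i}") for i in range(3))
```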
> [!NOTE]
> If the serialization data does not explicitly define an **importMode** flag for a device, it defaults to **createOrUpdate** during the import operation.
iot-hub Iot Hub Devguide Messages D2c https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-messages-d2c.md
You can enable/disable the fallback route in the Azure portal->Message Routing b
## Non-telemetry events
-In addition to device telemetry, message routing also enables sending device twin change events, device lifecycle events, digital twin change events, and device connection state events. For example, if a route is created with data source set to **device twin change events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with data source set to **device lifecycle events**, IoT Hub sends a message indicating whether the device was deleted or created. As part of [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), a developer can create routes with data source set to **digital twin change events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **device connection state events**, IoT Hub sends a message indicating whether the device was connected or disconnected.
+In addition to device telemetry, message routing also enables sending device twin change events, device lifecycle events, digital twin change events, and device connection state events. For example, if a route is created with data source set to **device twin change events**, IoT Hub sends messages to the endpoint that contain the change in the device twin. Similarly, if a route is created with data source set to **device lifecycle events**, IoT Hub sends a message indicating whether the device or module was deleted or created. For more information about device lifecycle events, see [Device and module lifecycle notifications](./iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications). When using [Azure IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md), a developer can create routes with data source set to **digital twin change events** and IoT Hub sends messages whenever a digital twin property is set or changed, a digital twin is replaced, or when a change event happens for the underlying device twin. Finally, if a route is created with data source set to **device connection state events**, IoT Hub sends a message indicating whether the device was connected or disconnected.
[IoT Hub also integrates with Azure Event Grid](iot-hub-event-grid.md) to publish device events to support real-time integrations and automation of workflows based on these events. See key [differences between message routing and Event Grid](iot-hub-event-grid-routing-comparison.md) to learn which works best for your scenario.

## Limitations for device connection state events
-To receive device connection state events, a device must call either the *device-to-cloud send telemetry* or a *cloud-to-device receive message* operation with IoT Hub. However, if a device uses AMQP protocol to connect with IoT Hub, we recommend the device to call *cloud-to-device receive message* operation, otherwise their connection state notifications may be delayed by few minutes. If your device connects with MQTT protocol, IoT Hub keeps the cloud-to-device link open. To open the cloud-to-device link for AMQP, call the [Receive Async API](/rest/api/iothub/device/receivedeviceboundnotification).
+Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, after opening a connection a device must call either the *cloud-to-device receive message* operation or the *device-to-cloud send telemetry* operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
-The device-to-cloud link stays open as long as the device sends telemetry.
-
-If the device connection flickers, meaning if device connects and disconnects frequently, IoT Hub doesn't send every single connection state, but publishes the current connection state taken at a periodic snapshot of 60 sec until the flickering stops. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state.
+IoT Hub doesn't report each individual device connect and disconnect. Instead, it publishes the current connection state taken at a periodic 60-second snapshot. Receiving either the same connection state event with a different sequence number or a different connection state event means that there was a change in the device connection state during the 60-second window.
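Because only a periodic snapshot is published, consumers typically keep the latest event per device rather than reacting to every event individually. The snippet below is one hypothetical way to do that in Python; it assumes each event carries a `deviceId`, a `connectionState`, and a monotonically increasing `sequenceNumber` that compares correctly as a fixed-width string, so check those field names and the sequence-number format against the events your hub actually emits.

```python
def latest_connection_states(events):
    """Reduce a stream of connection state events to the newest one per device.

    Assumes `sequenceNumber` values are fixed-width strings that increase
    over time, so a plain string comparison orders them correctly.
    """
    latest = {}
    for event in events:
        device_id = event["deviceId"]
        current = latest.get(device_id)
        if current is None or event["sequenceNumber"] > current["sequenceNumber"]:
            latest[device_id] = event
    return latest

# Example: the second event for device-1 supersedes the first.
events = [
    {"deviceId": "device-1", "connectionState": "Connected", "sequenceNumber": "000000000000000001"},
    {"deviceId": "device-1", "connectionState": "Disconnected", "sequenceNumber": "000000000000000002"},
    {"deviceId": "device-2", "connectionState": "Connected", "sequenceNumber": "000000000000000001"},
]
states = latest_connection_states(events)
```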
## Testing routes
iot-hub Iot Hub Devguide Routing Query Syntax https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-devguide-routing-query-syntax.md
Message routing enables you to query on [Device Twin](iot-hub-devguide-device-tw
} ```
+> [!NOTE]
+> Modules do not inherit twin tags from their corresponding devices. Twin queries for messages originating from device modules (for example from IoT Edge modules) query against the module twin and not the corresponding device twin.
+ ### Query expressions
-A query on message twin needs to be prefixed with the `$twin`. Your query expression can also combine a twin tag or property reference with a body reference, message system properties, and message application properties reference. We recommend using unique names in tags and properties as the query is not case-sensitive. This applies to both device twins and module twins. Also refrain from using `twin`, `$twin`, `body`, or `$body`, as a property names. For example, the following are all valid query expressions:
+A query on a device or module twin needs to be prefixed with `$twin`. Your query expression can also combine a twin tag or property reference with a body reference, a message system properties reference, and/or a message application properties reference. We recommend using unique names in tags and properties because the query is not case-sensitive. This applies to both device twins and module twins. Also refrain from using `twin`, `$twin`, `body`, or `$body` as property names. For example, the following are all valid query expressions:
```sql
$twin.properties.desired.telemetryConfig.sendFrequency = '5m'
iot-hub Iot Hub Ha Dr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-ha-dr.md
Previously updated : 11/22/2021 Last updated : 02/04/2022
+
# IoT Hub high availability and disaster recovery
The IoT Hub service provides intra-region HA by implementing redundancies in alm
## Availability Zones
-There is a 99.9% Service Level Agreement for IoT Hub, and you can [read the SLA](https://azure.microsoft.com/support/legal/sla/iot-hub/). The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/) explains the guaranteed availability of Azure as a whole.
- IoT Hub supports [Availability Zones](../availability-zones/az-overview.md). An Availability Zone is a high-availability offering that protects your applications and data from datacenter failures. A region with Availability Zone support is comprised of three zones supporting that region. Each zone provides one or more datacenters each in a unique physical location with independent power, cooling, and networking. This provides replication and redundancy within the region. Availability Zone support for IoT Hub is enabled automatically for new IoT Hub resources created in the following Azure regions: - Australia East
Time to recover = RTO [10 min - 2 hours for manual failover | 2 - 26 hours for M
> [!IMPORTANT] > The IoT SDKs do not cache the IP address of the IoT hub. We recommend that user code interfacing with the SDKs should not cache the IP address of the IoT hub.
+## Disable disaster recovery
+
+IoT Hub provides Microsoft-initiated failover and manual failover by replicating data to the [paired region](/azure/availability-zones/cross-region-replication-azure) for each IoT hub. For some regions, you can avoid data replication outside of the region by disabling disaster recovery when creating an IoT hub. The following regions support this feature:
+
+- **Brazil South**; paired region, South Central US.
+- **Southeast Asia (Singapore)**; paired region, East Asia (Hong Kong).
+
+To disable disaster recovery in supported regions, make sure that **Disaster recovery enabled** is unselected when you create your IoT hub:
+
+You can also disable disaster recovery when you create an IoT hub using an [ARM template](/azure/templates/microsoft.devices/iothubs?tabs=bicep#iothubproperties).
+
+Failover capability will not be available if you disable disaster recovery for an IoT hub.
+
+You can only disable disaster recovery to avoid data replication outside of the paired region in Brazil South or Southeast Asia when you create an IoT hub. To disable disaster recovery for an existing IoT hub, create a new IoT hub with disaster recovery disabled, and then manually migrate your existing hub by following [How to clone an Azure IoT hub to another region](iot-hub-how-to-clone.md).
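For the ARM template route, disaster recovery is controlled through a property on the IoT hub resource. The fragment below is a sketch only: the property name `enableDataResidency` and the `apiVersion` shown are assumptions here, so confirm the exact names and supported regions in the Microsoft.Devices/IotHubs template reference linked above before deploying.

```json
{
  "type": "Microsoft.Devices/IotHubs",
  "apiVersion": "2021-07-02",
  "name": "my-iot-hub",
  "location": "brazilsouth",
  "sku": { "name": "S1", "capacity": 1 },
  "properties": {
    "enableDataResidency": true
  }
}
```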
+
## Achieve cross region HA

If your business uptime goals aren't satisfied by the RTO that either Microsoft-initiated failover or manual failover options provide, you should consider implementing a per-device automatic cross region failover mechanism.
iot-hub Iot Hub Message Enrichments Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-message-enrichments-overview.md
To try out message enrichments, see the [message enrichments tutorial](tutorial-
* Message enrichments don't apply to digital twin change events.
+* Modules do not inherit twin tags from their corresponding devices. Enrichments for messages originating from device modules (for example from IoT Edge modules) must use the twin tags that are set on the module twin.
+
## Pricing

Message enrichments are available for no additional charge. Currently, you are charged when you send a message to an IoT Hub. You are only charged once for that message, even if the message goes to multiple endpoints.
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-hub/iot-hub-mqtt-support.md
-
+ Title: Understand Azure IoT Hub MQTT support | Microsoft Docs description: Support for devices connecting to an IoT Hub device-facing endpoint using the MQTT protocol. Includes information about built-in MQTT support in the Azure IoT device SDKs.
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/create-managed-service-identity.md
ms.suite: integration Previously updated : 10/22/2021 Last updated : 02/03/2022

# Authenticate access to Azure resources with managed identities in Azure Logic Apps
-Some triggers and actions in logic app workflows support using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md), previously known as a *Managed Service Identity (MSI)*, to authenticate connections to resources protected by Azure Active Directory (Azure AD). When your logic app resource has a managed identity enabled, you don't have to provide credentials, secrets, or Azure AD tokens. Azure manages this identity and helps keep authentication information secure because you don't have to manage this sensitive information.
+In logic app workflows, some triggers and actions support using a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate access to resources protected by Azure Active Directory (Azure AD). This identity was previously known as a *Managed Service Identity (MSI)*. When you enable your logic app resource to use a managed identity for authentication, you don't have to provide credentials, secrets, or Azure AD tokens. Azure manages this identity and helps keep authentication information secure because you don't have to manage this sensitive information.
-Azure Logic Apps supports the [*system-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md), which you can use with only one logic app resource, and the [*user-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md), which you can share across a group of logic app resources, based on where your logic app workflows run.
+Azure Logic Apps supports the [*system-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md) and the [*user-assigned* managed identity](../active-directory/managed-identities-azure-resources/overview.md), but the following differences exist between these identity types:
-| Logic app resource type | Environment | Description |
-|-|-|-|
-| Consumption | - Multi-tenant Azure Logic Apps <p><p>- Integration service environment (ISE) | You can enable and use *either* the system-assigned identity or a *single* user-assigned identity at the logic app resource level and connection level. |
-| Standard | - Single-tenant Azure Logic Apps <p><p>- App Service Environment v3 (ASEv3) <p><p>- Azure Arc enabled Logic Apps | Currently, you can use *only* the system-assigned identity, which is automatically enabled. The user-assigned identity is currently unavailable. |
-|||
+* A logic app resource can enable and use only one unique system-assigned identity.
+
+* A logic app resource can share the same user-assigned identity across a group of other logic app resources.
+
+* Based on your logic app resource type, you can enable either the system-assigned identity, user-assigned identity, or both at the same time:
-To learn about managed identity limits in Azure Logic Apps, review [Limits on managed identities for logic apps](logic-apps-limits-and-config.md#managed-identity). For more information about the Consumption and Standard logic app resource types and environments, review the following documentation:
+ | Logic app resource type | Environment | Managed identity support |
+ |-|-|--|
+ | Consumption | - Multi-tenant Azure Logic Apps <p><p>- Integration service environment (ISE) | - You can enable *either* the system-assigned identity type *or* the user-assigned identity type on your logic app resource. <p>- If enabled with the user-assigned identity type, your logic app resource can have *only a single user-assigned identity* at any one time. <p>- You can use the identity at the logic app resource level and at the connection level. |
+ | Standard | - Single-tenant Azure Logic Apps <p><p>- App Service Environment v3 (ASEv3) <p><p>- Azure Arc enabled Logic Apps | - You can enable *both* the system-assigned identity type, which is enabled by default, *and* the user-assigned identity type at the same time. <p>- Your logic app resource can have *multiple* user-assigned identities at the same time. <p>- You can use the identity at the logic app resource level and at the connection level. |
+ ||||
+
+To learn more about managed identity limits in Azure Logic Apps, review [Limits on managed identities for logic apps](logic-apps-limits-and-config.md#managed-identity). For more information about the Consumption and Standard logic app resource types and environments, review the following documentation:
* [What is Azure Logic Apps?](logic-apps-overview.md#resource-environment-differences)
* [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md)
The following table lists the operations where you can use either the system-ass
### [Standard](#tab/standard)
-The following table lists the operations where you can use the system-assigned managed identity in the **Logic App (Standard)** resource type:
+The following table lists the operations where you can use both the system-assigned managed identity and multiple user-assigned managed identities in the **Logic App (Standard)** resource type:
| Operation type | Supported operations |
|-|-|
The following table lists the operations where you can use the system-assigned m
-This article shows how to enable and set up the system-assigned identity or user-assigned identity, based on whether you're using the **Logic App (Consumption)** or **Logic App (Standard)** resource type. Unlike the system-assigned identity, which you don't have to manually create, you *do* have to manually create the user-assigned identity for the **Logic App (Consumption)** resource type. This article includes the steps to create the user-assigned identity using the Azure portal and Azure Resource Manager template (ARM template). For Azure PowerShell, Azure CLI, and Azure REST API, review the following documentation:
+This article shows how to enable and set up the system-assigned identity or user-assigned identity, based on whether you're using the **Logic App (Consumption)** or **Logic App (Standard)** resource type. Unlike the system-assigned identity, which you don't have to manually create, you *do* have to manually create the user-assigned identity. This article includes the steps to create the user-assigned identity using the Azure portal and Azure Resource Manager template (ARM template). For Azure PowerShell, Azure CLI, and Azure REST API, review the following documentation:
| Tool | Documentation |
|||
This article shows how to enable and set up the system-assigned identity or user
* The logic app resource where you want to use the [trigger or actions that support managed identities](logic-apps-securing-a-logic-app.md#authentication-types-supported-triggers-actions).
- | Logic app resource type | Managed identity support |
- |-|--|
- | Consumption | System-assigned or user-assigned identity |
- | Standard | System-assigned identity (automatically enabled) |
- |||
- <a name="system-assigned-azure-portal"></a> <a name="azure-portal-system-logic-app"></a>
This article shows how to enable and set up the system-assigned identity or user
> user-assigned identity. Before you can add the system-assigned identity, you have to first *remove* the user-assigned identity
> from your logic app resource.
- Your logic app resource can now use the system-assigned identity, which is registered with Azure AD and is represented by an object ID.
+ Your logic app resource can now use the system-assigned identity. This identity is registered with Azure AD and is represented by an object ID.
![Screenshot showing Consumption logic app's "Identity" pane with the object ID for system-assigned identity.](./media/create-managed-service-identity/object-id-system-assigned-identity.png)
When Azure creates your logic app resource definition, the `identity` object get
<a name="azure-portal-user-identity"></a> <a name="user-assigned-azure-portal"></a>
-## Create user-assigned identity in the Azure portal (Consumption only)
+## Create user-assigned identity in the Azure portal
-Before you can enable the user-assigned identity on your **Logic App (Consumption)** resource, you have to first create that identity as a separate Azure resource.
+Before you can enable the user-assigned identity on your **Logic App (Consumption)** or **Logic App (Standard)** resource, you have to first create that identity as a separate Azure resource.
1. In the [Azure portal](https://portal.azure.com) search box, enter `managed identities`. Select **Managed Identities**.
Before you can enable the user-assigned identity on your **Logic App (Consumptio
| **Name** | Yes | <*user-assigned-identity-name*> | The name to give your user-assigned identity. This example uses `Fabrikam-user-assigned-identity`. |
|||||
- After validating the information, Azure creates your managed identity. Now you can add the user-assigned identity to your logic app resource, which can have only one user-assigned identity.
+   After Azure validates the information, it creates your managed identity. Now you can add the user-assigned identity to your logic app resource.
+
+## Add user-assigned identity to logic app in the Azure portal
+
+### [Consumption](#tab/consumption)
1. In the Azure portal, open your logic app resource.
Before you can enable the user-assigned identity on your **Logic App (Consumptio
1. On the **Identity** pane, select **User assigned** > **Add**.
- ![Screenshot showing "Identity" pane with "Add" selected.](./media/create-managed-service-identity/add-user-assigned-identity-logic-app.png)
+ ![Screenshot showing Consumption logic app and "Identity" pane with "Add" selected.](./media/create-managed-service-identity/add-user-assigned-identity-logic-app-consumption.png)
1. On the **Add user assigned managed identity** pane, follow these steps:
Before you can enable the user-assigned identity on your **Logic App (Consumptio
1. From the list with *all* the managed identities in that subscription, select the user-assigned identity that you want. To filter the list, in the **User assigned managed identities** search box, enter the name for the identity or resource group.
- ![Screenshot showing the user-assigned identity selected.](./media/create-managed-service-identity/select-user-assigned-identity.png)
+ ![Screenshot showing Consumption logic app and the user-assigned identity selected.](./media/create-managed-service-identity/select-user-assigned-identity-consumption.png)
1. When you're done, select **Add**.

   > [!NOTE]
- > If you get an error that you can have only a single managed identity, your logic app is already associated with the system-assigned
- > identity. Before you can add the user-assigned identity, you have to first disable the system-assigned identity.
+ > If you get an error that you can have only a single managed identity, your logic app
+ > is already associated with the system-assigned identity. Before you can add the
+ > user-assigned identity, you have to first disable the system-assigned identity.
Your logic app is now associated with the user-assigned managed identity.
- ![Screenshot showing association between user-assigned identity and logic app resource.](./media/create-managed-service-identity/added-user-assigned-identity.png)
+ ![Screenshot showing Consumption logic app and association between user-assigned identity and logic app resource.](./media/create-managed-service-identity/added-user-assigned-identity-consumption.png)
1. Now follow the [steps that give that identity access to the resource](#access-other-resources) later in this topic.
+### [Standard](#tab/standard)
+
+1. In the Azure portal, open your logic app resource.
+
+1. On the logic app menu, under **Settings**, select **Identity**.
+
+1. On the **Identity** pane, select **User assigned** > **Add**.
+
+ ![Screenshot showing Standard logic app and "Identity" pane with "Add" selected.](./media/create-managed-service-identity/add-user-assigned-identity-logic-app-standard.png)
+
+1. On the **Add user assigned managed identity** pane, follow these steps:
+
+ 1. From the **Subscription** list, select your Azure subscription, if not already selected.
+
+ 1. From the list with *all* the managed identities in that subscription, select the user-assigned identity that you want. To filter the list, in the **User assigned managed identities** search box, enter the name for the identity or resource group.
+
+ ![Screenshot showing Standard logic app and the user-assigned identity selected.](./media/create-managed-service-identity/select-user-assigned-identity-standard.png)
+
+ 1. When you're done, select **Add**.
+
+ Your logic app is now associated with the user-assigned managed identity.
+
+ ![Screenshot showing Standard logic app and association between user-assigned identity and logic app resource.](./media/create-managed-service-identity/added-user-assigned-identity-standard.png)
+
+ 1. To use multiple user-assigned managed identities, repeat the same steps to add the identity.
+
+1. Now follow the [steps that give the identity access to the resource](#access-other-resources) later in this topic.
+++ <a name="template-user-identity"></a>
-## Create user-assigned identity in an ARM template (Consumption only)
+## Create user-assigned identity in an ARM template
+
+To automate creating and deploying Azure resources such as logic apps, you can use an [ARM template](logic-apps-azure-resource-manager-templates-overview.md), which supports [user-assigned identities for authentication](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-arm.md).
-To automate creating and deploying Azure resources such as logic apps, you can use an [ARM template](logic-apps-azure-resource-manager-templates-overview.md), which support [user-assigned identities for authentication](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-arm.md). In your template's `resources` section, your logic app's resource definition requires these items:
+In your template's `resources` section, your logic app's resource definition requires these items:
* An `identity` object with the `type` property set to `UserAssigned`
* A child `userAssignedIdentities` object that specifies the user-assigned resource and name
-This example shows a logic app resource definition for an HTTP PUT request and includes a non-parameterized `identity` object. The response to the PUT request and subsequent GET operation also have this `identity` object:
+### [Consumption](#tab/consumption)
+
+This example shows a Consumption logic app resource definition for an HTTP PUT request and includes a non-parameterized `identity` object. The response to the PUT request and subsequent GET operation also have this `identity` object:
```json
{
This example shows a logic app resource definition for an HTTP PUT request and i
}
```
-If your template also includes the managed identity's resource definition, you can parameterize the `identity` object. This example shows how the child `userAssignedIdentities` object references a `userAssignedIdentity` variable that you define in your template's `variables` section. This variable references the resource ID for your user-assigned identity.
+If your template also includes the managed identity's resource definition, you can parameterize the `identity` object. This example shows how the child `userAssignedIdentities` object references a `userAssignedIdentityName` variable that you define in your template's `variables` section. The `resourceId()` function uses this variable, which contains your user-assigned identity's name, to get the identity's resource ID.
```json
{
If your template also includes the managed identity's resource definition, you c
}
```
-<a name="access-other-resources"></a>
+### [Standard](#tab/standard)
-## Give identity access to resources
+A Standard logic app resource can enable and use both the system-assigned identity and multiple user-assigned identities. The Standard logic app resource definition is based on the Azure Functions function app resource definition.
-Before you can use your logic app's managed identity for authentication, on the Azure resource where you want to use the identity, you have to set up access for your identity by using Azure role-based access control (Azure RBAC). The steps in this section cover how to assign the appropriate role to that identity on the Azure resource using the [Azure portal](#azure-portal-assign-access) and [Azure Resource Manager template (ARM template)](../role-based-access-control/role-assignments-template.md). For Azure PowerShell, Azure CLI, and Azure REST API, review the following documentation:
+This example shows a Standard logic app resource definition that includes a non-parameterized `identity` object:
-| Tool | Documentation |
-|||
-| Azure PowerShell | [Add role assignment](../active-directory/managed-identities-azure-resources/howto-assign-access-powershell.md) |
-| Azure CLI | [Add role assignment](../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md) |
-| Azure REST API | [Add role assignment](../role-based-access-control/role-assignments-rest.md) |
-|||
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {<template-parameters>},
+ "resources": [
+ {
+ "apiVersion": "2021-02-01",
+      "type": "Microsoft.Web/sites",
+ "name": "[variables('logicappName')]",
+ "location": "[resourceGroup().location]",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "/subscriptions/<Azure-subscription-ID>/resourceGroups/<Azure-resource-group-name>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<user-assigned-identity-name>": {}
+        }
+ },
+ "properties": {
+ "name": "[variables('appName')]",
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+ "hostingEnvironment": "",
+ "clientAffinityEnabled": false,
+ "alwaysOn": true
+ },
+ "parameters": {},
+ "dependsOn": []
+ }
+ ],
+ "outputs": {}
+}
+```
-<a name="azure-portal-assign-access"></a>
+If your template also includes the managed identity's resource definition, you can parameterize the `identity` object. This example shows how the child `userAssignedIdentities` object references a `userAssignedIdentityName` variable that you define in your template's `variables` section. The `resourceId()` function uses this variable, which contains your user-assigned identity's name, to get the identity's resource ID.
-### Assign managed identity role-based access in the Azure portal
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2019-01-01/deploymentTemplate.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {<template-parameters>},
+ "resources": [
+ {
+ "apiVersion": "2021-02-01",
+      "type": "Microsoft.Web/sites",
+ "name": "[variables('logicappName')]",
+ "location": "[resourceGroup().location]",
+ "identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+          "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('userAssignedIdentityName'))]": {}
+ }
+ },
+ "properties": {
+ "name": "[variables('appName')]",
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+ "hostingEnvironment": "",
+ "clientAffinityEnabled": false,
+ "alwaysOn": true
+ },
+ "parameters": {},
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+        "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('userAssignedIdentityName'))]"
+ ]
+ },
+ {
+ "apiVersion": "2018-11-30",
+ "type": "Microsoft.ManagedIdentity/userAssignedIdentities",
+ "name": "[parameters('Template_UserAssignedIdentityName')]",
+ "location": "[resourceGroup().location]",
+ "properties": {}
+    }
+ ],
+ "outputs": {}
+}
+```
+
+When the logic app resource is created, the `identity` object has the following additional properties:
+
+```json
+"identity": {
+ "type": "UserAssigned",
+ "userAssignedIdentities": {
+ "<resource-ID>": {
+ "principalId": "<principal-ID>",
+ "clientId": "<client-ID>"
+ }
+ }
+}
+```
+
+The `principalId` property value is a unique identifier for the identity that's used for Azure AD administration. The `clientId` property value is a unique identifier for the logic app's new identity that's used for specifying which identity to use during runtime calls. For more information about Azure Resource Manager templates and managed identities for Azure Functions, review [ARM template - Azure Functions](../azure-functions/functions-create-first-function-resource-manager.md#review-the-template) and [Add a user-assigned identity using an ARM template for Azure Functions](../app-service/overview-managed-identity.md?tabs=arm%2Chttp#add-a-user-assigned-identity).
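If you need the `principalId` value elsewhere in the same template, for example, to set up a role assignment, one option is to read the value with the `reference()` function. The following sketch is only an example that reuses the `userAssignedIdentityName` variable from the earlier parameterized example:

```json
"outputs": {
   "identityPrincipalId": {
      "type": "string",
      "value": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('userAssignedIdentityName')), '2018-11-30').principalId]"
   }
}
```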
-On the Azure resource where you want to use the managed identity for authentication, you have to assign that identity to a role that can access that target resource. For more general information about this task, review [Assign a managed identity access to another resource using Azure RBAC](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).
++
+<a name="access-other-resources"></a>
+
+## Give identity access to resources
+
+Before you can use your logic app's managed identity for authentication, you have to set up access for the identity on the Azure resource where you want to use the identity. The way you set up access varies based on the resource that you want the identity to access.
> [!NOTE]
> When a managed identity has access to an Azure resource in the same subscription, the identity can
On the Azure resource where you want to use the managed identity for authenticat
> the resource. Likewise, if you have to select your subscription before you can select the
> target resource, you must give the identity access to the subscription.
+For example, to access an Azure Blob storage account with your managed identity, you have to set up access by using Azure role-based access control (Azure RBAC) and assign the appropriate role for that identity to the storage account. The steps in this section describe how to complete this task by using the [Azure portal](#azure-portal-assign-role) and [Azure Resource Manager template (ARM template)](../role-based-access-control/role-assignments-template.md). For Azure PowerShell, Azure CLI, and Azure REST API, review the following documentation:
+
+| Tool | Documentation |
+|||
+| Azure PowerShell | [Add role assignment](../active-directory/managed-identities-azure-resources/howto-assign-access-powershell.md) |
+| Azure CLI | [Add role assignment](../active-directory/managed-identities-azure-resources/howto-assign-access-cli.md) |
+| Azure REST API | [Add role assignment](../role-based-access-control/role-assignments-rest.md) |
+|||
+
+However, to access an Azure key vault with your managed identity, you have to create an access policy for that identity on your key vault and assign the appropriate permissions for that identity on that key vault. The later steps in this section describe how to complete this task by using the [Azure portal](#azure-portal-access-policy). For Resource Manager templates, PowerShell, and Azure CLI, review the following documentation:
+
+| Tool | Documentation |
+|||
+| Azure Resource Manager template (ARM template) | [Key Vault access policy resource definition](/templates/microsoft.keyvault/vaults/) |
+| Azure PowerShell | [Assign a Key Vault access policy](../key-vault/general/assign-access-policy.md?tabs=azure-powershell) |
+| Azure CLI | [Assign a Key Vault access policy](../key-vault/general/assign-access-policy.md?tabs=azure-cli) |
+|||
+
+<a name="azure-portal-assign-role"></a>
+
+### Assign managed identity role-based access in the Azure portal
+
+To use a managed identity for authentication, some Azure resources, such as Azure storage accounts, require that you assign that identity to a role that has the appropriate permissions on the target resource. Other Azure resources, such as Azure key vaults, require that you [create an access policy that has the appropriate permissions on the target resource for that identity](#azure-portal-access-policy).
+ 1. In the [Azure portal](https://portal.azure.com), open the resource where you want to use the identity. 1. On the resource's menu, select **Access control (IAM)** > **Add** > **Add role assignment**.
On the Azure resource where you want to use the managed identity for authenticat
| Type | Azure service instance | Subscription | Member |
|||--|--|
| **System-assigned** | **Logic App** | <*Azure-subscription-name*> | <*your-logic-app-name*> |
- | **User-assigned** (Consumption only) | Not applicable | <*Azure-subscription-name*> | <*your-user-assigned-identity-name*> |
+ | **User-assigned** | Not applicable | <*Azure-subscription-name*> | <*your-user-assigned-identity-name*> |
|||||

For more information about assigning roles, review [Assign roles using the Azure portal](../role-based-access-control/role-assignments-portal.md).
-1. After you finish setting up access for the identity, you can then use the identity to [authenticate access for triggers and actions that support managed identities](#authenticate-access-with-identity).
+1. After you finish, you can use the identity to [authenticate access for triggers and actions that support managed identities](#authenticate-access-with-identity).
+
+For more general information about this task, review [Assign a managed identity access to another resource using Azure RBAC](../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md).
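To automate the role assignment, you can also declare the assignment in an ARM template. The following sketch is only an example that assigns the built-in **Storage Blob Data Contributor** role to the identity on a storage account; the parameter names are placeholders, and the role definition ID is the built-in GUID for that role:

```json
{
   "type": "Microsoft.Authorization/roleAssignments",
   "apiVersion": "2020-04-01-preview",
   "name": "[guid(resourceGroup().id, parameters('principalId'), 'blob-data-contributor')]",
   "scope": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]",
   "properties": {
      "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe')]",
      "principalId": "[parameters('principalId')]",
      "principalType": "ServicePrincipal"
   }
}
```

Setting `principalType` to `ServicePrincipal` helps avoid replication-delay errors when the identity is created in the same deployment.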
+
+<a name="azure-portal-access-policy"></a>
+
+### Create access policy in the Azure portal
+
+To use a managed identity for authentication, some Azure resources, such as Azure key vaults, require that you create an access policy that has the appropriate permissions on the target resource for that identity. Other Azure resources, such as Azure storage accounts, require that you [assign that identity to a role that has the appropriate permissions on the target resource](#azure-portal-assign-role).
+
+1. In the [Azure portal](https://portal.azure.com), open the target resource where you want to use the identity. This example uses an Azure key vault as the target resource.
+
+1. On the resource's menu, select **Access policies** > **Create**, which opens the **Create an access policy** pane.
+
+ > [!NOTE]
+ > If the resource doesn't have the **Access policies** option, [try assigning a role assignment instead](#azure-portal-assign-role).
+
+ ![Screenshot showing the Azure portal and key vault example with "Access policies" pane open.](./media/create-managed-service-identity/create-access-policy.png)
+
+1. On the **Permissions** tab, select the required permissions that the identity needs to access the target resource.
+
+ For example, to use the identity with the managed Azure Key Vault connector's **List secrets** operation, the identity needs **List** permissions. So, in the **Secret permissions** column, select **List**.
+
+ ![Screenshot showing "Permissions" tab with "List" permissions selected.](./media/create-managed-service-identity/select-access-policy-permissions.png)
+
+1. When you're ready, select **Next**. On the **Principal** tab, find and select the managed identity, which is a user-assigned identity in this example.
+
+1. Skip the optional **Application** step, select **Next**, and finish creating the access policy.
+
+The next section about using a managed identity to authenticate access for a trigger or action continues the example from the earlier section where you set up access by using Azure RBAC, not the Azure Key Vault example. However, the general steps to use a managed identity for authentication are the same.
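To automate the access policy instead, you can declare the policy in an ARM template. The following sketch is only an example that grants **List** secret permissions to the identity; the parameter names are placeholders, and the API version is an assumption:

```json
{
   "type": "Microsoft.KeyVault/vaults/accessPolicies",
   "apiVersion": "2021-10-01",
   "name": "[concat(parameters('vaultName'), '/add')]",
   "properties": {
      "accessPolicies": [
         {
            "tenantId": "[subscription().tenantId]",
            "objectId": "[parameters('principalId')]",
            "permissions": {
               "secrets": [ "list" ]
            }
         }
      ]
   }
}
```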
<a name="authenticate-access-with-identity"></a>
These steps show how to use the managed identity with a trigger or action throug
1. From the **Authentication type** list, select **Managed identity**.
- ![Screenshot showing example built-in action with "Authentication type" list open and "Managed identity" selected in Standard.](./media/create-managed-service-identity/built-in-managed-identity-standard.png)
+ ![Screenshot showing example built-in action with "Authentication type" list open and "Managed identity" selected - Standard.](./media/create-managed-service-identity/built-in-managed-identity-standard.png)
+
+ 1. From the list with enabled identities, select the identity that you want to use, for example:
+
+ ![Screenshot showing example built-in action with managed identity selected to use - Standard.](./media/create-managed-service-identity/built-in-select-identity-standard.png)
For more information, review [Example: Authenticate built-in trigger or action with a managed identity](#authenticate-built-in-managed-identity).
These steps show how to use the managed identity with a trigger or action throug
1. On the tenant selection page, select **Connect with managed identity (preview)**, for example:
- ![Screenshot showing Azure Resource Manager action and "Connect with managed identity" selected in Standard.](./media/create-managed-service-identity/select-connect-managed-identity-standard.png)
+ ![Screenshot showing Azure Resource Manager action and "Connect with managed identity" selected - Standard.](./media/create-managed-service-identity/select-connect-managed-identity-standard.png)
1. On the next page, for **Connection name**, provide a name to use for the connection.

1. For the authentication type, choose one of the following options based on your managed connector:
- * **Single-authentication**: These connectors support only one authentication type. From the **Managed identity** list, select the currently enabled managed identity, if not already selected, and then select **Create**, for example:
+ * **Single-authentication**: These connectors support only one authentication type, which is managed identity in this case. From the **Managed identity** list, select the identity that you want to use. When you're ready to create the connection, select **Create**, for example:
- ![Screenshot showing the connection name page and single managed identity selected in Standard.](./media/create-managed-service-identity/single-system-identity-standard.png)
+ ![Screenshot showing the connection name page and available enabled managed identities - Standard.](./media/create-managed-service-identity/single-identity-standard.png)
- * **Multi-authentication**: These connectors support more than one authentication type. From the **Authentication type** list, select **Logic Apps Managed Identity** > **Create**, for example:
+ * **Multi-authentication**: These connectors support more than one authentication type.
+
+ 1. From the **Authentication type** list, select **Logic Apps Managed Identity** > **Create**, for example:
+
+ ![Screenshot showing the connection name page and "Logic Apps Managed Identity" selected - Standard.](./media/create-managed-service-identity/multi-identity-standard.png)
- ![Screenshot showing the connection name page and "Logic Apps Managed Identity" selected in Standard.](./media/create-managed-service-identity/multi-system-identity-standard.png)
+ 1. From the **Managed identity** list, select the identity that you want to use.
+
+ ![Screenshot showing the action's "Parameters" pane and "Managed identity" list - Standard.](./media/create-managed-service-identity/select-multi-identity-standard.png)
For more information, review [Example: Authenticate managed connector trigger or action with a managed identity](#authenticate-managed-connector-managed-identity).
The Azure Resource Manager managed connector has an action, **Read a resource**,
1. When you're ready, select **Create**.
-1. After successfully creating the connection, the designer can fetch any dynamic values, content, or schema by using managed identity authentication.
+1. After the connection is successfully created, the designer can fetch any dynamic values, content, or schema by using managed identity authentication.
1. Continue building the workflow the way that you want.
The Azure Resource Manager managed connector has an action, **Read a resource**,
1. On the connection name page, provide a name for the connection.
- The Azure Resource Manager action is a single-authentication action, so the connection information pane shows a **Managed identity** list that automatically selects the managed identity that's currently enabled on the logic app resource. If you enabled a system-assigned managed identity, the **Managed identity** list selects **System-assigned managed identity**. If you had enabled a user-assigned managed identity instead, the list selects that identity instead.
+   The Azure Resource Manager action is a single-authentication action, so the connection information pane shows a **Managed identity** list with all the identities that are currently enabled on the logic app resource. By default, Standard logic apps have the system-assigned managed identity enabled, for example:
- In this example, **System-assigned managed identity** is the only selection available.
+ ![Screenshot showing Azure Resource Manager action with the connection name entered and "System-assigned managed identity" selected.](./media/create-managed-service-identity/single-identity-standard.png)
- ![Screenshot showing Azure Resource Manager action with the connection name entered and "System-assigned managed identity" selected.](./media/create-managed-service-identity/single-system-identity-standard.png)
-
- If you're using a multiple-authentication trigger or action, such as Azure Blob Storage, the connection information pane shows an **Authentication type** list that includes the **Managed identity** option among other authentication types.
+ If you're using a multiple-authentication trigger or action, such as Azure Blob Storage, the connection information pane shows an **Authentication type** list that includes the **Logic Apps Managed Identity** option among other authentication types. After you select this option, on the next pane, you can select an identity from the **Managed identity** list.
> [!NOTE] > If the managed identity isn't enabled when you try to create the connection, change the connection,
The Azure Resource Manager managed connector has an action, **Read a resource**,
1. When you're ready, select **Create**.
-1. After successfully creating the connection, the designer can fetch any dynamic values, content, or schema by using managed identity authentication.
+1. After the connection is successfully created, the designer can fetch any dynamic values, content, or schema by using managed identity authentication.
1. Continue building the workflow the way that you want.
The Azure Resource Manager managed connector has an action, **Read a resource**,
<a name="logic-app-resource-definition-connection-managed-identity"></a>
-## Logic app resource definition and connections that use a managed identity (Consumption)
+## Logic app resource definition and connections that use a managed identity
A connection that enables and uses a managed identity is a special connection type that works only with a managed identity. At runtime, the connection uses the managed identity that's enabled on the logic app resource. The Azure Logic Apps service checks whether any managed connector triggers and actions in the logic app workflow are set up to use the managed identity and whether all the required permissions exist for using the managed identity to access the target resources that are specified by the triggers and actions. If successful, Azure Logic Apps retrieves the Azure AD token that's associated with the managed identity and uses that identity to authenticate access to the target resource and perform the configured operation.
A connection that enables and uses a managed identity are a special connection t
In a **Logic App (Consumption)** resource, the connection configuration is saved in the logic app resource definition's `parameters` object, which contains the `$connections` object that includes pointers to the connection's resource ID along with the identity's resource ID, if the user-assigned identity is enabled.
-This example shows what the configuration looks like when the logic app enables the system-assigned managed identity:
+This example shows what the configuration looks like when the logic app enables the *system-assigned* managed identity:
```json
"parameters": {
This example shows what the configuration looks like when the logic app enables
}
```
-This example shows what the configuration looks like when the logic app enables a user-assigned managed identity:
+This example shows what the configuration looks like when the logic app enables a *user-assigned* managed identity:
```json
"parameters": {
This example shows what the configuration looks like when the logic app enables
"connectionName": "{connection-name}",
"connectionProperties": {
   "authentication": {
- "identity": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{resourceGroupName}/providers/microsoft.managedidentity/userassignedidentities/{managed-identity-name}",
- "type": "ManagedServiceIdentity"
+ "type": "ManagedServiceIdentity",
+ "identity": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{resourceGroupName}/providers/microsoft.managedidentity/userassignedidentities/{managed-identity-name}"
   }
},
"id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region}/managedApis/{managed-connector-type}"
This example shows what the configuration looks like when the logic app enables
In a **Logic App (Standard)** resource, the connection configuration is saved in the logic app resource or project's `connections.json` file, which contains a `managedApiConnections` JSON object that includes connection configuration information for each managed connector used in a workflow. For example, this connection information includes pointers to the connection's resource ID along with the managed identity properties, such as the resource ID, if the user-assigned identity is enabled.
-This example shows what the configuration looks like when the logic app enables the user-assigned managed identity:
+This example shows what the configuration looks like when the logic app enables the *system-assigned* managed identity:
```json
{
This example shows what the configuration looks like when the logic app enables
"api": {
  "id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{region}/managedApis/<connector-name>"
},
+ "authentication": { // Authentication for the internal token store
+ "type": "ManagedServiceIdentity"
+ },
"connection": {
  "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Web/connections/<connection-name>"
},
- "connectionRuntimeUrl": <connection-URL>,
- "authentication": { // Authentication with APIHub
+ "connectionProperties": {
+ "authentication": { // Authentication for the target resource
+ "audience": "<resource-URL>",
+ "type": "ManagedServiceIdentity"
+ }
+ },
+ "connectionRuntimeUrl": "<connection-runtime-URL>"
+ }
+ }
+}
+```
+
+This example shows what the configuration looks like when the logic app enables a *user-assigned* managed identity:
+
+```json
+{
+ "managedApiConnections": {
+ "<connector-name>": {
+ "api": {
+ "id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{region}/managedApis/<connector-name>"
+ },
+ "authentication": { // Authentication for the internal token store
"type": "ManagedServiceIdentity"
},
+ "connection": {
+ "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Web/connections/<connection-name>"
+ },
"connectionProperties": {
- "authentication": { //Authentication with the target resource
+ "authentication": { // Authentication for the target resource
+ "audience": "<resource-URL>",
"type": "ManagedServiceIdentity",
- "identity": "<user-assigned-identity>", // Optional
- "audience": "<resource-URL>"
+ "identity": "<user-assigned-identity>" // Optional
}
- }
+ },
+ "connectionRuntimeUrl": "<connection-runtime-URL>"
    }
  }
}
This example shows what the configuration looks like when the logic app enables
<a name="arm-templates-connection-resource-managed-identity"></a>
-## ARM template for managed connections and managed identities (Consumption)
+## ARM template for API connections and managed identities
+
+If you use an ARM template to automate deployment, and your workflow includes an *API connection*, which is created by a [managed connector](../connectors/managed.md) such as Office 365 Outlook or Azure Key Vault, and that connection uses a managed identity, you have an extra step to take.
-If you automate deployment with an ARM template, and your logic app workflow includes a managed connector trigger or action that uses a managed identity, confirm that the underlying connection resource definition includes the `parameterValueType` property with `Alternative` as the property value. Otherwise, your ARM deployment won't set up the connection to use the managed identity for authentication, and the connection won't work in your logic app's workflow. This requirement applies only to [specific managed connector triggers and actions](#triggers-actions-managed-identity) where you selected the [**Connect with managed identity** option](#authenticate-managed-connector-managed-identity).
+In this scenario, check that the underlying connection resource definition includes the `parameterValueSet` object, which includes the `name` property set to `managedIdentityAuth` and the `values` property set to an empty object. Otherwise, your ARM deployment won't set up the connection to use the managed identity for authentication, and the connection won't work in your workflow. This requirement applies only to [specific managed connector triggers and actions](#triggers-actions-managed-identity) where you selected the [**Connect with managed identity** option](#authenticate-managed-connector-managed-identity).
+
+### [Consumption](#tab/consumption)
-For example, here's the underlying connection resource definition for an Azure Automation action that uses a managed identity where the definition includes the `parameterValueType` property, which is set to `Alternative` as the property value:
+For example, here's the underlying connection resource definition for an Azure Automation action in a Consumption logic app resource that uses a managed identity. The definition includes the `parameterValueSet` object, which includes the `name` property set to `managedIdentityAuth` and the `values` property set to an empty object. Also note that the `apiVersion` property is set to `2018-07-01-preview`:
```json
{
  "type": "Microsoft.Web/connections",
  "name": "[variables('automationAccountApiConnectionName')]",
- "apiVersion": "2016-06-01",
+ "apiVersion": "2018-07-01-preview",
  "location": "[parameters('location')]",
  "kind": "V1",
  "properties": {
For example, here's the underlying connection resource definition for an Azure A
    },
    "customParameterValues": {},
    "displayName": "[variables('automationAccountApiConnectionName')]",
- "parameterValueType": "Alternative"
+ "parameterValueSet":{
+ "name": "managedIdentityAuth",
+ "values": {}
+ }
+},
+```
+
+### [Standard](#tab/standard)
+
+For example, here's the underlying connection resource definition for an Azure Automation action in a Standard logic app resource that uses a managed identity. The definition includes the `parameterValueType` property, which is set to `Alternative`.
+
+> [!NOTE]
+> For Standard, the `kind` property is set to `V2`, and the `apiVersion` property is set to `2016-06-01`:
+
+```json
+{
+ "type": "Microsoft.Web/connections",
+ "apiVersion": "2016-06-01",
+ "name": "[variables('automationAccountApiConnectionName')]",
+ "location": "[parameters('location')]",
+ "kind": "V2",
+ "properties": {
+ "displayName": "[variables('automationAccountApiConnectionName')]",
+ "parameterValueType": "Alternative",
+ "api": {
+ "id": "[subscriptionResourceId('Microsoft.Web/locations/managedApis', parameters('location'), 'azureautomation')]"
+ }
  }
},
```
+Following this `Microsoft.Web/connections` resource definition, make sure that you add an access policy resource definition for each API connection, and provide the following information:
+
+| Parameter | Description |
+|--|-|
+| <*connection-name*> | The name for your API connection, for example, `office365` |
+| <*object-ID*> | The object ID for your Azure AD identity, previously saved from your app registration |
+| <*tenant-ID*> | The tenant ID for your Azure AD identity, previously saved from your app registration |
+|||
+
+```json
+{
+ "type": "Microsoft.Web/connections/accessPolicies",
+ "apiVersion": "2016-06-01",
+  "name": "[concat('<connection-name>', '/', '<object-ID>')]",
+ "location": "<location>",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/connections', parameters('connection_name'))]"
+ ],
+ "properties": {
+ "principal": {
+ "type": "ActiveDirectory",
+ "identity": {
+ "objectId": "<object-ID>",
+ "tenantId": "<tenant-ID>"
+ }
+ }
+ }
+}
+```
+
+For more information, review the [Microsoft.Web/connections/accesspolicies (ARM template)](/templates/microsoft.web/connections?tabs=json) documentation.
+++
+<a name="setup-identity-apihub-authentiation"></a>
+
+## Set up advanced control over API connection authentication
+
+When your workflow uses an *API connection*, which is created by a [managed connector](../connectors/managed.md) such as Office 365 Outlook, Azure Key Vault, and so on, the Azure Logic Apps service communicates with the target resource, such as your email account, key vault, and so on, using two connections:
+
+![Conceptual diagram showing first connection with authentication between logic app and token store plus second connection between token store and target resource.](./media/create-managed-service-identity/api-connection-authentication-flow.png)
+
+* Connection #1 is set up with authentication for the internal token store.
+
+* Connection #2 is set up with authentication for the target resource.
+
+In a Consumption logic app resource, connection #1 is abstracted from you without any configuration options. In the Standard logic app resource type, you have more control over your logic app. By default, connection #1 is automatically set up to use the system-assigned identity.
+
+However, if your scenario requires finer control over authenticating API connections, you can optionally change the authentication for connection #1 from the default system-assigned identity to any user-assigned identity that you've added to your logic app. This authentication applies to each API connection, so you can mix system-assigned and user-assigned identities across different connections to the same target resource.
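+For example, here's a trimmed sketch of mixing identities across two connections to the same key vault. The connection names `keyvault-1` and `keyvault-2` are hypothetical, and the `api`, `connection`, and `connectionProperties` sections are omitted for brevity. The first connection has no `identity` property in its first `authentication` section, so it uses the system-assigned identity, while the second connection specifies a user-assigned identity:
+
+```json
+"managedApiConnections": {
+  "keyvault-1": {
+    "authentication": {
+      "type": "ManagedServiceIdentity"
+    }
+  },
+  "keyvault-2": {
+    "authentication": {
+      "type": "ManagedServiceIdentity",
+      "identity": "/subscriptions/{Azure-subscription-ID}/resourcegroups/{resource-group-name}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identity-name}"
+    }
+  }
+}
+```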
+
+In your Standard logic app **connections.json** file, which stores information about each API connection, each connection definition has two `authentication` sections, for example:
+
+```json
+"keyvault": {
+ "api": {
+ "id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{region}/managedApis/keyvault"
+ },
+ "authentication": {
+    "type": "ManagedServiceIdentity"
+ },
+ "connection": {
+ "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Web/connections/<connection-name>"
+ },
+ "connectionProperties": {
+ "authentication": {
+ "audience": "https://vault.azure.net",
+ "type": "ManagedServiceIdentity"
+ }
+ },
+ "connectionRuntimeUrl": "<connection-runtime-URL>"
+}
+```
+
+* Mapped to connection #1, the first `authentication` section is the authentication used for communicating with the internal token store. Previously, this section was always set to `ManagedServiceIdentity` for an app that deploys to Azure, and had no configurable options.
+
+* Mapped to connection #2, the second `authentication` section is the authentication used for communicating with the target resource. This authentication can vary, based on the authentication type that you select for that connection.
+
+### Why change the authentication for the token store?
+
+In some scenarios, you might want to share and use the same API connection across multiple logic apps, but not add the system-assigned identity for each logic app to the target resource's access policy.
+
+In other scenarios, you might not want the system-assigned identity enabled on your logic app at all. In that case, you can change the authentication to a user-assigned identity and disable the system-assigned identity completely.
+
+### Change the authentication for the token store
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
+
+1. On the resource menu, under **Workflows**, select **Connections**.
+
+1. On the Connections pane, select **JSON View**.
+
+ ![Screenshot showing the Azure portal, Standard logic app resource, "Connections" pane with "JSON View" selected.](./media/create-managed-service-identity/connections-json-view.png)
+
+1. In the JSON editor, find the `managedApiConnections` section, which contains the API connections across all workflows in your logic app resource.
+
+1. Find the connection where you want to add a user-assigned managed identity. For example, suppose your workflow has an Azure Key Vault connection:
+
+ ```json
+ "keyvault": {
+ "api": {
+ "id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{region}/managedApis/keyvault"
+ },
+ "authentication": {
+ "type": "ManagedServiceIdentity"
+ },
+ "connection": {
+ "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Web/connections/<connection-name>"
+ },
+ "connectionProperties": {
+ "authentication": {
+ "audience": "https://vault.azure.net",
+ "type": "ManagedServiceIdentity"
+ }
+ },
+ "connectionRuntimeUrl": "<connection-runtime-URL>"
+ }
+ ```
+
+1. In the connection definition, complete the following steps:
+
+ 1. Find the first `authentication` section. If no `identity` property already exists in this `authentication` section, the logic app implicitly uses the system-assigned identity.
+
+ 1. Add an `identity` property by using the example in this step.
+
+ 1. Set the property value to the resource ID for the user-assigned identity.
+
+ ```json
+ "keyvault": {
+ "api": {
+ "id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{region}/managedApis/keyvault"
+ },
+ "authentication": {
+ "type": "ManagedServiceIdentity",
+ // Add "identity" property here
+ "identity": "/subscriptions/{Azure-subscription-ID}/resourcegroups/{resource-group-name}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identity-resource-ID}"
+ },
+ "connection": {
+ "id": "/subscriptions/{Azure-subscription-ID}/resourceGroups/{resource-group-name}/providers/Microsoft.Web/connections/<connection-name>"
+ },
+ "connectionProperties": {
+ "authentication": {
+ "audience": "https://vault.azure.net",
+ "type": "ManagedServiceIdentity"
+ }
+ },
+ "connectionRuntimeUrl": "<connection-runtime-URL>"
+ }
+ ```
+
+1. In the Azure portal, go to the target resource, and [give access to the user-assigned managed identity](#access-other-resources), based on the target resource's needs.
+
+ For example, for Azure Key Vault, add the identity to the key vault's access policies. For Azure Blob Storage, assign the necessary role for the identity to the storage account.
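+
+    The step above can also be sketched with the Azure CLI. These commands are a sketch only; the placeholder values are assumptions, and the exact permissions or role to assign depend on what your workflow needs:
+
+    ```azurecli
+    # Azure Key Vault: add an access policy that lets the identity read secrets.
+    az keyvault set-policy --name <key-vault-name> \
+        --object-id <identity-principal-ID> \
+        --secret-permissions get list
+
+    # Azure Blob Storage: assign a data access role to the identity on the storage account.
+    az role assignment create --assignee <identity-principal-ID> \
+        --role "Storage Blob Data Contributor" \
+        --scope <storage-account-resource-ID>
+    ```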
<a name="remove-identity"></a>

## Disable managed identity
logic-apps Logic Apps Limits And Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-limits-and-config.md
ms.suite: integration Previously updated : 11/16/2021 Last updated : 01/12/2022 # Limits and configuration reference for Azure Logic Apps
For more information, review the following documentation:
| Name | Limit |
|------|-------|
-| Managed identities per logic app | Either the system-assigned identity or 1 user-assigned identity |
+| Managed identities per logic app resource | - Consumption: Either the system-assigned identity *or* only one user-assigned identity <p>- Standard: The system-assigned identity *and* any number of user-assigned identities <p>**Note**: By default, a **Logic App (Standard)** resource has the system-assigned managed identity automatically enabled to authenticate connections at runtime. This identity differs from the authentication credentials or connection string that you use when you create a connection. If you disable this identity, connections won't work at runtime. To view this setting, on your logic app's menu, under **Settings**, select **Identity**. |
| Number of logic apps that have a managed identity in an Azure subscription per region | 1,000 | |||
-> [!NOTE]
-> By default, a Logic App (Standard) resource has its system-assigned managed identity automatically enabled to
-> authenticate connections at runtime. This identity differs from the authentication credentials or connection
-> string that you use when you create a connection. If you disable this identity, connections won't work at
-> runtime. To view this setting, on your logic app's menu, under **Settings**, select **Identity**.
- <a name="integration-account-limits"></a> ## Integration account limits
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/logic-apps-securing-a-logic-app.md
When the [managed identity](../active-directory/managed-identities-azure-resourc
* The **Logic App (Consumption)** resource type can use the system-assigned identity or a *single* manually created user-assigned identity.
-* The **Logic App (Standard)** resource type can use only the system-assigned identity, which is automatically enabled. The user-assigned identity is currently unavailable.
+* The **Logic App (Standard)** resource type supports having the [system-assigned managed identity *and* multiple user-assigned managed identities](create-managed-service-identity.md) enabled at the same time, though you can still select only one identity to use at a time.
+
+ > [!NOTE]
+ > By default, the system-assigned identity is already enabled to authenticate connections at run time.
+ > This identity differs from the authentication credentials or connection string that you use when you
+ > create a connection. If you disable this identity, connections won't work at run time. To view
+ > this setting, on your logic app's menu, under **Settings**, select **Identity**.
1. Before your logic app can use a managed identity, follow the steps in [Authenticate access to Azure resources by using managed identities in Azure Logic Apps](../logic-apps/create-managed-service-identity.md). These steps enable the managed identity on your logic app and set up that identity's access to the target Azure resource.
logic-apps Single Tenant Overview Compare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/logic-apps/single-tenant-overview-compare.md
The single-tenant model and **Logic App (Standard)** resource type include many
* **Logic App (Standard)** resources can run anywhere because Azure Logic Apps generates Shared Access Signature (SAS) connection strings that these logic apps can use for sending requests to the cloud connection runtime endpoint. Azure Logic Apps service saves these connection strings with other application settings so that you can easily store these values in Azure Key Vault when you deploy in Azure.
+ * The **Logic App (Standard)** resource type supports having the [system-assigned managed identity *and* multiple user-assigned managed identities](create-managed-service-identity.md) enabled at the same time, though you can still select only one identity to use at a time.
+ > [!NOTE]
- > By default, the **Logic App (Standard)** resource type has the [system-assigned managed identity](create-managed-service-identity.md)
- > automatically enabled to authenticate connections at run time. This identity differs from the authentication
- > credentials or connection string that you use when you create a connection. If you disable this identity,
- > connections won't work at run time. To view this setting, on your logic app's menu, under **Settings**, select **Identity**.
- >
- > The user-assigned managed identity is currently unavailable on the **Logic App (Standard)** resource type.
+ > By default, the system-assigned identity is already enabled to authenticate connections at run time.
+ > This identity differs from the authentication credentials or connection string that you use when you
+ > create a connection. If you disable this identity, connections won't work at run time. To view
+ > this setting, on your logic app's menu, under **Settings**, select **Identity**.
* You can locally run, test, and debug your logic apps and their workflows in the Visual Studio Code development environment.
machine-learning Concept Deep Learning Vs Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-deep-learning-vs-machine-learning.md
--++ Last updated 04/12/2021
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/concept-designer.md
--++ Last updated 10/21/2021
machine-learning How To Authenticate Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-authenticate-web-service.md
Title: Configure authentication for models deployed as web services
description: Learn how to configure authentication for machine learning models deployed to web services in Azure Machine Learning. ---++ Last updated 10/21/2021
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-private-link.md
To create the private DNS zone entries for the workspace, use the following comm
# Add privatelink.api.azureml.ms az network private-dns zone create \ -g <resource-group-name> \
- --name 'privatelink.api.azureml.ms'
+ --name privatelink.api.azureml.ms
az network private-dns link vnet create \ -g <resource-group-name> \
- --zone-name 'privatelink.api.azureml.ms' \
+ --zone-name privatelink.api.azureml.ms \
--name <link-name> \ --virtual-network <vnet-name> \ --registration-enabled false
az network private-endpoint dns-zone-group create \
-g <resource-group-name> \ --endpoint-name <private-endpoint-name> \ --name myzonegroup \
- --private-dns-zone 'privatelink.api.azureml.ms' \
- --zone-name 'privatelink.api.azureml.ms'
+ --private-dns-zone privatelink.api.azureml.ms \
+ --zone-name privatelink.api.azureml.ms
# Add privatelink.notebooks.azure.net az network private-dns zone create \ -g <resource-group-name> \
- --name 'privatelink.notebooks.azure.net'
+ --name privatelink.notebooks.azure.net
az network private-dns link vnet create \ -g <resource-group-name> \
- --zone-name 'privatelink.notebooks.azure.net' \
+ --zone-name privatelink.notebooks.azure.net \
--name <link-name> \ --virtual-network <vnet-name> \ --registration-enabled false
az network private-endpoint dns-zone-group add \
-g <resource-group-name> \ --endpoint-name <private-endpoint-name> \ --name myzonegroup \
- --private-dns-zone 'privatelink.notebooks.azure.net' \
- --zone-name 'privatelink.notebooks.azure.net'
+ --private-dns-zone privatelink.notebooks.azure.net \
+ --zone-name privatelink.notebooks.azure.net
``` # [Azure CLI extension 1.0](#tab/azurecliextensionv1)
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-create-component-pipelines-cli.md
description: Create and run machine learning pipelines using the Azure Machine L
--++ Last updated 01/07/2022
machine-learning How To Debug Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-debug-pipelines.md
description: How to troubleshoot when you get errors running a machine learning
--++ Last updated 10/21/2021
machine-learning How To Deploy Advanced Entry Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-advanced-entry-script.md
Last updated 10/21/2021-++
machine-learning How To Deploy Local Container Notebook Vm https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-local-container-notebook-vm.md
-++ Last updated 04/22/2021
Learn how to use Azure Machine Learning to deploy a model as a web service on yo
- You are testing a model that is under development.

> [!TIP]
-> Deploying a model from a Jupyter Notebook on a compute instance, to a web service on the same VM is a _local deployment_. In this case, the 'local' computer is the compute instance. For more information on deployments, see [Deploy models with Azure Machine Learning](how-to-deploy-and-where.md).
+> Deploying a model from a Jupyter Notebook on a compute instance, to a web service on the same VM is a _local deployment_. In this case, the 'local' computer is the compute instance.
[!INCLUDE [endpoints-option](../../includes/machine-learning-endpoints-preview-note.md)]
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-mlflow-models.md
Title: Deploy MLflow models as web services
description: Set up MLflow with Azure Machine Learning to deploy your ML models as an Azure web service. --++ - Last updated 10/25/2021
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-model-cognitive-search.md
---++ Last updated 03/11/2021
machine-learning How To Deploy No Code Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-no-code-deployment.md
Last updated 07/31/2020 -++ # No-code model deployment (preview)
machine-learning How To Deploy Package Models https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-package-models.md
Last updated 10/21/2021 ++
machine-learning How To Deploy Profile Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-profile-model.md
Last updated 07/31/2020 zone_pivot_groups: aml-control-methods-++
machine-learning How To Deploy Update Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-deploy-update-web-service.md
Title: Update web services
+ Title: Update deployed web services
description: Learn how to refresh a web service that is already deployed in Azure Machine Learning. You can update settings such as model, environment, and entry script. -++ Last updated 10/21/2021
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
ws.compute_targets['Synapse Spark pool alias']
* [How to data wrangle with Azure Synapse (preview)](how-to-data-prep-synapse-spark-pool.md). * [How to use Apache Spark in your machine learning pipeline with Azure Synapse (preview)](how-to-use-synapsesparkstep.md)
-* [Train a model](how-to-set-up-training-targets.md).
+* [Train a model](how-to-set-up-training-targets.md).
+* [How to securely integrate Azure Synapse and Azure Machine Learning workspaces](how-to-private-endpoint-integration-synapse.md).
machine-learning How To Private Endpoint Integration Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-private-endpoint-integration-synapse.md
+
+ Title: Securely integrate with Azure Synapse
+
+description: 'How to use a virtual network when integrating Azure Synapse with Azure Machine Learning.'
+++++++ Last updated : 02/03/2022+++
+# How to securely integrate Azure Machine Learning and Azure Synapse
+
+In this article, learn how to securely integrate with Azure Machine Learning from Azure Synapse. This integration enables you to use Azure Machine Learning from notebooks in your Azure Synapse workspace. Communication between the two workspaces is secured using an Azure Virtual Network.
+
+> [!TIP]
+> You can also perform integration in the opposite direction, using an Azure Synapse Spark pool from Azure Machine Learning. For more information, see [Link Azure Synapse and Azure Machine Learning](how-to-link-synapse-ml-workspaces.md).
+
+## Prerequisites
+
+* An Azure subscription.
+* An Azure Machine Learning workspace with a private endpoint connection to a virtual network. The following workspace dependency services must also have a private endpoint connection to the virtual network:
+
+ * Azure Storage Account
+
+ > [!TIP]
+ > For the storage account there are three separate private endpoints; one each for blob, file, and dfs.
+
+ * Azure Key Vault
+ * Azure Container Registry
+
+ A quick and easy way to build this configuration is to use a [Microsoft Bicep or HashiCorp Terraform template](tutorial-create-secure-workspace-template.md).
+
+* An Azure Synapse workspace in a __managed__ virtual network, using a __managed__ private endpoint. For more information, see [Azure Synapse Analytics Managed Virtual Network](/azure/synapse-analytics/security/synapse-workspace-managed-vnet).
+
+ > [!WARNING]
+ > The Azure Machine Learning integration is not currently supported in Synapse Workspaces with data exfiltration protection. When configuring your Azure Synapse workspace, do __not__ enable data exfiltration protection. For more information, see [Azure Synapse Analytics Managed Virtual Network](/azure/synapse-analytics/security/synapse-workspace-managed-vnet).
+
+ > [!NOTE]
+ > The steps in this article make the following assumptions:
+ > * The Azure Synapse workspace is in a different resource group than the Azure Machine Learning workspace.
+ > * The Azure Synapse workspace uses a __managed virtual network__. The managed virtual network secures the connectivity between Azure Synapse and Azure Machine Learning. It does __not__ restrict access to the Azure Synapse workspace. You will access the workspace over the public internet.
+
+## Understanding the network communication
+
+In this configuration, Azure Synapse uses a __managed__ private endpoint and virtual network. The managed virtual network and private endpoint secure the internal communications from Azure Synapse to Azure Machine Learning by restricting network traffic to the virtual network. They do __not__ restrict communication between your client and the Azure Synapse workspace.
+
+Azure Machine Learning doesn't provide managed private endpoints or virtual networks, and instead uses a __user-managed__ private endpoint and virtual network. In this configuration, both internal and client/service communication is restricted to the virtual network. For example, if you wanted to directly access the Azure Machine Learning studio from outside the virtual network, you would use one of the following options:
+
+* Create an Azure Virtual Machine inside the virtual network and use Azure Bastion to connect to it. Then connect to Azure Machine Learning from the VM.
+* Create a VPN gateway or use ExpressRoute to connect clients to the virtual network.
+
+Since the Azure Synapse workspace is publicly accessible, you can connect to it without having to create things like a VPN gateway. The Synapse workspace securely connects to Azure Machine Learning over the virtual network. Azure Machine Learning and its resources are secured within the virtual network.
+
+When adding data sources, you can also secure those behind the virtual network. For example, you can securely connect to an Azure Storage Account or Azure Data Lake Storage Gen 2 through the virtual network.
+
+For more information, see the following articles:
+
+* [Azure Synapse Analytics Managed Virtual Network](/azure/synapse-analytics/security/synapse-workspace-managed-vnet)
+* [Secure Azure Machine Learning workspace resources using virtual networks](how-to-network-security-overview.md).
+* [Connect to a secure Azure storage account from your Synapse workspace](/azure/synapse-analytics/security/connect-to-a-secure-storage-account).
+
+## Configure Azure Synapse
+
+> [!IMPORTANT]
+> Before following these steps, you need an Azure Synapse workspace that is configured to use a managed virtual network. For more information, see [Azure Synapse Analytics Managed Virtual Network](/azure/synapse-analytics/security/synapse-workspace-managed-vnet).
+
+1. From Azure Synapse Studio, [Create a new Azure Machine Learning linked service](/azure/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning).
+1. After creating and publishing the linked service, select __Manage__, __Managed private endpoints__, and then __+ New__ in Azure Synapse Studio.
+
+ :::image type="content" source="./media/how-to-private-endpoint-integration-synapse/add-managed-private-endpoint.png" alt-text="Screenshot of the managed private endpoints dialog.":::
+
+1. From the __New managed private endpoint__ page, search for __Azure Machine Learning__ and select the tile.
+
+ :::image type="content" source="./media/how-to-private-endpoint-integration-synapse/new-private-endpoint-select-machine-learning.png" alt-text="Screenshot of selecting Azure Machine Learning.":::
+
+1. When prompted to select the Azure Machine Learning workspace, use the __Azure subscription__ and __Azure Machine Learning workspace__ you added previously as a linked service. Select __Create__ to create the endpoint.
+
+ :::image type="content" source="./media/how-to-private-endpoint-integration-synapse/new-managed-private-endpoint.png" alt-text="Screenshot of the new private endpoint dialog.":::
+
+1. The endpoint will be listed as __Provisioning__ until it has been created. Once created, the __Approval__ column will list a status of __Pending__. You'll approve the endpoint in the [Configure Azure Machine Learning](#configure-azure-machine-learning) section.
+
+ > [!NOTE]
+ > In the following screenshot, a managed private endpoint has been created for the Azure Data Lake Storage Gen 2 associated with this Synapse workspace. For information on how to create an Azure Data Lake Storage Gen 2 and enable a private endpoint for it, see [Provision and secure a linked service with Managed VNet](/azure/synapse-analytics/data-integration/linked-service).
+
+ :::image type="content" source="./media/how-to-private-endpoint-integration-synapse/managed-private-endpoint-connections.png" alt-text="Screenshot of the managed private endpoints list.":::
+
+### Create a Spark pool
+
+To verify that the integration between Azure Synapse and Azure Machine Learning is working, you'll use an Apache Spark pool. For information on creating one, see [Create a Spark pool](/azure/synapse-analytics/quickstart-create-apache-spark-pool-portal).
+
+## Configure Azure Machine Learning
+
+1. From the [Azure portal](https://portal.azure.com), select your __Azure Machine Learning workspace__, and then select __Networking__.
+1. Select __Private endpoints__, and then select the endpoint you created in the previous steps. It should have a status of __pending__. Select __Approve__ to approve the endpoint connection.
+
+ :::image type="content" source="./media/how-to-private-endpoint-integration-synapse/approve-pending-private-endpoint.png" alt-text="Screenshot of the private endpoint approval.":::
+
+1. From the left of the page, select __Access control (IAM)__. Select __+ Add__, and then select __Role assignment__.
+
+ :::image type="content" source="./media/how-to-private-endpoint-integration-synapse/workspace-role-assignment.png" alt-text="Screenshot of the role assignment.":::
+
+1. Select __Contributor__, and then select __Next__.
+
+ :::image type="content" source="./media/how-to-private-endpoint-integration-synapse/contributor-role.png" alt-text="Screenshot of selecting contributor.":::
+
+1. Select __User, group, or service principal__, and then __+ Select members__. Enter the name of the identity created earlier, select it, and then use the __Select__ button.
+
+ :::image type="content" source="./media/how-to-private-endpoint-integration-synapse/add-role-assignment.png" alt-text="Screenshot of assigning the role.":::
+
+1. Select __Review + assign__, verify the information, and then select the __Review + assign__ button.
+
+ > [!TIP]
+ > It may take several minutes for the Azure Machine Learning workspace to update the credentials cache. Until it has been updated, you may receive errors when trying to access the Azure Machine Learning workspace from Synapse.
+
+## Verify connectivity
+
+1. From Azure Synapse Studio, select __Develop__, and then __+ Notebook__.
+
+ :::image type="content" source="./media/how-to-private-endpoint-integration-synapse/add-synapse-notebook.png" alt-text="Screenshot of adding a notebook.":::
+
+1. In the __Attach to__ field, select the Apache Spark pool for your Azure Synapse workspace, and enter the following code in the first cell:
+
+ ```python
+ from notebookutils.mssparkutils import azureML
+
+ # getWorkspace() takes the linked service name,
+ # not the Azure Machine Learning workspace name.
+ ws = azureML.getWorkspace("AzureMLService1")
+
+ print(ws.name)
+ ```
+
+ This code snippet connects to the linked workspace, and then prints the workspace info. In the printed output, the value displayed is the name of the Azure Machine Learning workspace, not the linked service name that was used in the `getWorkspace()` call. For more information on using the `ws` object, see the [Workspace](/python/api/azureml-core/azureml.core.workspace.workspace) class reference.
+
+## Next steps
+
+* [Quickstart: Create a new Azure Machine Learning linked service in Synapse](/azure/synapse-analytics/machine-learning/quickstart-integrate-azure-machine-learning).
+* [Link Azure Synapse Analytics and Azure Machine Learning workspaces](how-to-link-synapse-ml-workspaces.md).
+
machine-learning How To Search Cross Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-search-cross-workspace.md
+
+ Title: Search for machine learning assets across workspaces
+
+description: Learn about global search in Azure Machine Learning.
++++++ Last updated : 2/16/2022++
+# Search for Azure Machine Learning assets across multiple workspaces (Public Preview)
+
+## Overview
+
+Users can now search for machine learning assets such as jobs, models, and components across all workspaces, resource groups, and subscriptions in their organization through a unified global view.
++
+## Get started
+
+### Global homepage
+
+From this centralized global view, select from recently visited workspaces or browse documentation and tutorial resources.
+
+![Screenshot showing the global view homepage.](./media/how-to-search-cross-workspace/global-home.png)
+
+### Search
+
+Type search text into the global search bar and press Enter to trigger a 'contains' search.
+The search scans all metadata fields for the given asset and sorts results by relevance, as determined by the relevance weightings of the asset columns.
+
+![Screenshot showing the search bar experience.](./media/how-to-search-cross-workspace/search-bar.png)
+
+Use the asset quick links to navigate to search results for jobs, models, and components created by you.
+
+Change the scope of applicable subscriptions and workspaces by clicking the 'Change' link.
+
+![Screenshot showing how to change scope of workspaces and subscriptions reflected in results.](./media/how-to-search-cross-workspace/settings.png)
++
+### Structured search
+
+Click on any number of filters to create more specific search queries. The following filters are supported:
+* Job:
+* Model:
+* Component:
+* Tags:
+* SubmittedBy:
+
+If an asset filter (job, model, component) is present, results will be scoped to those tabs. Other filters will apply to all assets unless an asset filter is also present in the query. Similarly, free text search can be provided alongside filters but will be scoped to the tabs chosen by asset filters if present.
+
+> [!TIP]
+> * Filters search for exact matches of text. Use free text queries for a contains search.
+> * Quotations are required around values that include spaces or other special characters.
+> * If duplicate filters are provided, only the first will be recognized in search results.
+> * Input text of any language is supported but filter strings must match the provided options (ex. submittedBy:).
+> * The tags filter can accept multiple key:value pairs separated by a comma (ex. tags:"key1:value1, key2:value2").
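+The filter behavior described above (exact-match `key:value` filters, quoted values for text with spaces, first-duplicate-wins) can be sketched with a small illustrative parser. This is not the actual search implementation, just a minimal model of the documented syntax:
+
+```python
+import re
+
+def parse_query(query):
+    """Split a global-search query into exact-match filters and free text.
+
+    Filters use key:value syntax; values containing spaces or special
+    characters must be quoted. Only the first occurrence of a duplicate
+    filter key is kept, mirroring the documented behavior.
+    """
+    filters = []
+    free_text = []
+    # Match key:"quoted value" or key:value tokens.
+    pattern = re.compile(r'(\w+):("(?:[^"]*)"|\S+)')
+    pos = 0
+    for m in pattern.finditer(query):
+        free_text.append(query[pos:m.start()])
+        key, value = m.group(1), m.group(2).strip('"')
+        if key.lower() not in [k.lower() for k, _ in filters]:
+            filters.append((key, value))
+        pos = m.end()
+    free_text.append(query[pos:])
+    return filters, " ".join("".join(free_text).split())
+
+# Free text ("mnist") rides alongside filters; the tags value keeps its
+# embedded colons and comma because it's quoted.
+filters, text = parse_query('submittedBy:alice tags:"key1:value1, key2:value2" mnist')
+```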
++
+### Results
+
+Explore the Jobs, Models, and Components tabs to view all search results. Select an asset to go to its details page in the context of the relevant workspace. Results from workspaces a user doesn't have access to won't be displayed. Select the 'details' button to view the list of workspaces.
+
+![Screenshot showing search results of query.](./media/how-to-search-cross-workspace/results.png)
+
+### Filters
+
+To add more specificity to the search results, use the column filters sidebar.
+
+### Custom views
+
+Customize the display of columns in the search results table. These views can be saved and shared as well.
+
+![Screenshot showing how to create custom column views on the search results page.](./media/how-to-search-cross-workspace/custom-views.jpg)
++
+### Known issues
+
+If you've previously used this feature, you may see a search result error. To resolve it, reselect your preferred workspaces in the Directory + Subscription + Workspace tab.
machine-learning How To Select Algorithms https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-select-algorithms.md
--++ Last updated 10/21/2021
machine-learning How To Troubleshoot Deployment Local https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-deployment-local.md
description: Try a local model deployment as a first step in troubleshooting mod
-++ Last updated 10/21/2021
machine-learning How To Troubleshoot Deployment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-troubleshoot-deployment.md
Last updated 10/21/2021++ #Customer intent: As a data scientist, I want to figure out why my model deployment fails so that I can fix it.
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-mlflow-cli-runs.md
The following code uses `mlflow` and the [`subprocess`](https://docs.python.org/
```Python import mlflow
-import subprocess
-#Get MLfLow URI through the Azure ML CLI (v2) and convert to string
-MLFLOW_TRACKING_URI = subprocess.run(["az", "ml", "workspace", "show", "--query", "mlflow_tracking_uri", "-o", "tsv"], stdout=subprocess.PIPE, text=True)
+## Construct AzureML MLFLOW TRACKING URI
+def get_azureml_mlflow_tracking_uri(region, subscription_id, resource_group, workspace):
+ return "azureml://{}.api.azureml.ms/mlflow/v1.0/subscriptions/{}/resourceGroups/{}/providers/Microsoft.MachineLearningServices/workspaces/{}".format(region, subscription_id, resource_group, workspace)
-MLFLOW_TRACKING_URI = str(MLFLOW_TRACKING_URI.stdout).strip()
+region='<REGION>' ## example: westus
+subscription_id = '<SUBSCRIPTION_ID>' ## example: 11111111-1111-1111-1111-111111111111
+resource_group = '<RESOURCE_GROUP>' ## example: myresourcegroup
+workspace = '<AML_WORKSPACE_NAME>' ## example: myworkspacename
+
+MLFLOW_TRACKING_URI = get_azureml_mlflow_tracking_uri(region, subscription_id, resource_group, workspace)
## Set the MLFLOW TRACKING URI mlflow.set_tracking_uri(MLFLOW_TRACKING_URI) ## Make sure the MLflow URI looks something like this:
-## azureml://westus.api.azureml.ms/mlflow/v1.0/subscriptions/<Sub-ID>/resourceGroups/<RG>/providers/Microsoft.MachineLearningServices/workspaces/<WS>
+## azureml://<REGION>.api.azureml.ms/mlflow/v1.0/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.MachineLearningServices/workspaces/<AML_WORKSPACE_NAME>
-print("MLFlow Tracking URI:",MLFLOW_TRACKING_URI)
+print("MLFlow Tracking URI:", MLFLOW_TRACKING_URI)
``` # [Terminal](#tab/terminal)
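The URI builder above can be exercised standalone. The values below are the same placeholders used in the snippet; substitute your own region, subscription, resource group, and workspace name:

```python
def get_azureml_mlflow_tracking_uri(region, subscription_id, resource_group, workspace):
    # Builds the workspace-scoped MLflow tracking URI used by Azure ML.
    return (
        "azureml://{}.api.azureml.ms/mlflow/v1.0/subscriptions/{}"
        "/resourceGroups/{}/providers/Microsoft.MachineLearningServices"
        "/workspaces/{}"
    ).format(region, subscription_id, resource_group, workspace)

uri = get_azureml_mlflow_tracking_uri(
    "westus",
    "11111111-1111-1111-1111-111111111111",
    "myresourcegroup",
    "myworkspacename",
)
print("MLFlow Tracking URI:", uri)
```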
machine-learning How To Use Reinforcement Learning https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-reinforcement-learning.md
description: Learn how to use Azure Machine Learning reinforcement learning (pre
--++ Last updated 10/21/2021
machine-learning How To Version Track Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-version-track-datasets.md
description: Learn how to version machine learning datasets and how versioning w
---++ Last updated 10/21/2021
machine-learning Overview What Is Machine Learning Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-is-machine-learning-studio.md
--++ Last updated 10/21/2021 adobe-target: true
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-create-secure-workspace.md
Previously updated : 12/03/2021 Last updated : 02/04/2022
There are several ways that you can connect to the secured workspace. The steps
Use the following steps to create a Data Science Virtual Machine for use as a jump box: 1. In the [Azure portal](https://portal.azure.com), select the portal menu in the upper left corner. From the menu, select __+ Create a resource__ and then enter __Data science virtual machine__. Select the __Data science virtual machine - Windows__ entry, and then select __Create__.
-1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Provide a unique __Virtual machine name__, __Username__, and __Password__. Leave other fields at the default values.
+1. From the __Basics__ tab, select the __subscription__, __resource group__, and __Region__ you previously used for the virtual network. Provide values for the following fields:
+
+ * __Virtual machine name__: A unique name for the VM.
+ * __Username__: The username you will use to log in to the VM.
+ * __Password__: The password for the username.
+ * __Security type__: Standard.
+ * __Image__: Data Science Virtual Machine - Windows Server 2019 - Gen1.
+
+ > [!IMPORTANT]
+ > Do not select a Gen2 image.
+
+ You can leave other fields at the default values.
:::image type="content" source="./media/tutorial-create-secure-workspace/create-virtual-machine-basic.png" alt-text="Image of VM basic configuration":::
machine-learning Tutorial Designer Automobile Price Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-designer-automobile-price-deploy.md
Title: 'Tutorial: Designer - deploy no-code models' description: Deploy a machine learning model to predict car prices with the Azure Machine Learning designer.-+
machine-learning Tutorial Designer Automobile Price Train Score https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-designer-automobile-price-train-score.md
Title: 'Tutorial: Designer - train a no-code regression model'
description: Train a regression model that predicts car prices using the Azure Machine Learning designer. --++
You need an Azure Machine Learning workspace to use the designer. The workspace
1. Select **Designer**.
- ![Screenshot of the visual workspace showing how to access the designer](./media/tutorial-designer-automobile-price-train-score/launch-designer.png)
+ :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/launch-designer.png" alt-text="Screenshot of the visual workspace showing how to access the designer.":::
1. Select **Easy-to-use prebuilt components**.
There are several sample datasets included in the designer for you to experiment
1. Select the dataset **Automobile price data (Raw)**, and drag it onto the canvas.
- ![Drag data to canvas](./media/tutorial-designer-automobile-price-train-score/drag-data.gif)
+ :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/drag-data.gif" alt-text="Gif of dragging data to the canvas.":::
### Visualize the data
When you train a model, you have to do something about the data that's missing.
> You create a flow of data through your pipeline when you connect the output port of one component to an input port of another. >
- ![Connect components](./media/tutorial-designer-automobile-price-train-score/connect-modules.gif)
+ :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/connect-modules.gif" alt-text="Screenshot of connecting components.":::
1. Select the **Select Columns in Dataset** component.
When you train a model, you have to do something about the data that's missing.
1. In the lower right, select **Save** to close the column selector.
- ![Exclude a column](./media/tutorial-designer-automobile-price-train-score/exclude-column.png)
+ :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/exclude-column.png" alt-text="Screenshot of select columns with exclude highlighted.":::
1. Select the **Select Columns in Dataset** component.
Your dataset still has missing values after you remove the **normalized-losses**
Your pipeline should now look something like this:
- :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/pipeline-clean.png" alt-text="Select-column":::
+ :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/pipeline-clean.png" alt-text="Screenshot of automobile price data connected to the Select Columns in Dataset component, which is connected to Clean Missing Data.":::
## Train a machine learning model
After the run completes, you can view the results of the pipeline run. First, lo
Here you can see the predicted prices and the actual prices from the testing data.
- :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/score-result.png" alt-text="Screenshot of the output visualization highlighting the Scored Label column":::
+ :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/score-result.png" alt-text="Screenshot of the output visualization highlighting the Scored Label column.":::
### Evaluate models
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/tutorial-pipeline-python-sdk.md
---++ Last updated 01/28/2022
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/fundamentals/networking-overview.md
Title: Azure networking services overview description: Learn about networking services in Azure, including connectivity, application protection, application delivery, and network monitoring services. Previously updated : 04/07/2021 Last updated : 02/03/2022
ExpressRoute enables you to extend your on-premises networks into the Microsoft
:::image type="content" source="./media/networking-overview/expressroute-connection-overview.png" alt-text="Azure ExpressRoute" border="false"::: ### <a name="vpngateway"></a>VPN Gateway
-VPN Gateway helps you create encrypted cross-premises connections to your virtual network from on-premises locations, or create encrypted connections between VNets. There are different configurations available for VPN Gateway connections, such as, site-to-site, point-to-site, or VNet-to-VNet.
+VPN Gateway helps you create encrypted cross-premises connections to your virtual network from on-premises locations or create encrypted connections between VNets. There are different configurations available for VPN Gateway connections, such as site-to-site, point-to-site, and VNet-to-VNet.
The following diagram illustrates multiple site-to-site VPN connections to the same virtual network.
-For more information about different types of VPN connections, see [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md).
+For more information about different types of VPN connections, see [What is VPN Gateway?](../../vpn-gateway/vpn-gateway-about-vpngateways.md).
### <a name="virtualwan"></a>Virtual WAN
-Azure Virtual WAN is a networking service that provides optimized and automated branch connectivity to, and through, Azure. Azure regions serve as hubs that you can choose to connect your branches to. You can leverage the Azure backbone to also connect branches and enjoy branch-to-VNet connectivity.
-Azure Virtual WAN brings together many Azure cloud connectivity services such as site-to-site VPN, ExpressRoute, point-to-site user VPN into a single operational interface. Connectivity to Azure VNets is established by using virtual network connections. For more information, see [What is Azure virtual WAN?](../../virtual-wan/virtual-wan-about.md).
+Azure Virtual WAN is a networking service that provides optimized and automated branch connectivity to, and through, Azure. Azure regions serve as hubs that you can choose to connect your branches to. You can leverage the Azure backbone to also connect branches for branch-to-VNet connectivity.
+Azure Virtual WAN brings together many Azure cloud connectivity services such as site-to-site VPN, ExpressRoute, and point-to-site user VPN into a single operational interface. Connectivity to Azure VNets is established by using virtual network connections. For more information, see [What is Azure Virtual WAN?](../../virtual-wan/virtual-wan-about.md).
### <a name="dns"></a>Azure DNS Azure DNS is a hosting service for DNS domains that provides name resolution by using Microsoft Azure infrastructure. By hosting your domains in Azure, you can manage your DNS records by using the same credentials, APIs, tools, and billing as your other Azure services. For more information, see [What is Azure DNS?](../../dns/dns-overview.md). ### <a name="bastion"></a>Azure Bastion
-The Azure Bastion service is a new fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly in the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines do not need a public IP address. For more information, see [What is Azure Bastion?](../../bastion/bastion-overview.md).
+The Azure Bastion service is a fully platform-managed PaaS service that you provision inside your virtual network. It provides secure and seamless RDP/SSH connectivity to your virtual machines directly in the Azure portal over TLS. When you connect via Azure Bastion, your virtual machines do not need a public IP address. For more information, see [What is Azure Bastion?](../../bastion/bastion-overview.md).
### <a name="nat"></a>Virtual network NAT Gateway Virtual Network NAT (network address translation) simplifies outbound-only Internet connectivity for virtual networks. When configured on a subnet, all outbound connectivity uses your specified static public IP addresses. Outbound connectivity is possible without load balancer or public IP addresses directly attached to virtual machines.
For more information, see [What is virtual network NAT gateway?](../../virtual-n
### <a name="azurepeeringservice"></a> Azure Peering Service Azure Peering service enhances customer connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. For more information, see [What is Azure Peering Service?](../../peering-service/about.md).
-### <a name="edge-zones"></a>Azure Edge Zones
-
-Azure Edge Zone is a family of offerings from Microsoft Azure that enables data processing close to the user. You can deploy VMs, containers, and other selected Azure services into Edge Zones to address the low latency and high throughput requirements of applications.
-
-### <a name="orbital"></a>Azure Orbital
-
-Azure Orbital is a fully managed cloud-based ground station as a service that lets you communicate with your spacecraft or satellite constellations, downlink and uplink data, process your data in the cloud, chain services with Azure services in unique scenarios, and generate products for your customers. This system is built on top of the Azure global infrastructure and low-latency global fiber network.
- ## <a name="protect"></a>Application protection services This section describes networking services in Azure that help protect your network resources. Protect your applications using one or a combination of these networking services in Azure: DDoS protection, Private Link, Firewall, Web Application Firewall, Network Security Groups, and Virtual Network Service Endpoints.
Azure Monitor for Networks provides a comprehensive view of health and metrics f
To learn how to view ExpressRoute circuit metrics, resource logs, and alerts, see [ExpressRoute monitoring, metrics, and alerts](../../expressroute/expressroute-monitoring-metrics-alerts.md?toc=%2fazure%2fnetworking%2ftoc.json). ### <a name="azuremonitor"></a>Azure Monitor Azure Monitor maximizes the availability and performance of your applications by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from your cloud and on-premises environments. It helps you understand how your applications are performing and proactively identifies issues affecting them and the resources they depend on. For more information, see [Azure Monitor Overview](../../azure-monitor/overview.md?toc=%2fazure%2fnetworking%2ftoc.json).
-### <a name="vnettap"></a>Virtual Network TAP
-Azure virtual network TAP (Terminal Access Point) allows you to continuously stream your virtual machine network traffic to a network packet collector or analytics tool. The collector or analytics tool is provided by a [network virtual appliance](https://azure.microsoft.com/solutions/network-appliances/) partner.
-
-The following image shows how virtual network TAP works:
--
-For more information, see [What is Virtual Network TAP](../../virtual-network/virtual-network-tap-overview.md).
## Next steps
role-based-access-control Role Assignments External Users https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/role-based-access-control/role-assignments-external-users.md
Follow these steps to add a guest user to your directory using the Azure Active
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Make sure your organization's external collaboration settings are configured such that you're allowed to invite guests. For more information, see [Enable B2B external collaboration and manage who can invite guests](../active-directory/external-identities/delegate-invitations.md).
+1. Make sure your organization's external collaboration settings are configured such that you're allowed to invite guests. For more information, see [Configure external collaboration settings](../active-directory/external-identities/external-collaboration-settings-configure.md).
1. Click **Azure Active Directory** > **Users** > **New guest user**.
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-concept-intro.md
A [skillset](cognitive-search-defining-skillset.md) that's assembled using built
+ PDFs with combined image and text. Embedded text can be extracted without AI enrichment, but adding image and language skills can unlock more information than what could be obtained through standard text-based indexing.
-+ Unstructured or semi-structured documents containing content that has inherent meaning or context that is hidden in the larger document.
++ Unstructured or semi-structured documents containing content that has inherent meaning or organization that is hidden in the larger document. Blobs in particular often contain a large body of content that is packed into a single "field". By attaching image and natural language processing skills to an indexer, you can surface information that exists in the raw content but isn't otherwise exposed as distinct fields.
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-defining-skillset.md
An indexer drives skillset execution. You need an [indexer](search-howto-create-
> [!TIP] > Enable [enrichment caching](cognitive-search-incremental-indexing-conceptual.md) to reuse the content you've already processed and lower the cost of development.
-## Skillset definition
+## Add a skillset definition
Start with the basic structure. In the [Create Skillset REST API](/rest/api/searchservice/create-skillset), the body of the request is authored in JSON and has the following sections:
Start with the basic structure. In the [Create Skillset REST API](/rest/api/sear
"name":"skillset-template", "description":"A description makes the skillset self-documenting (comments aren't allowed in JSON itself)", "skills":[
-
+
], "cognitiveServices":{ "@odata.type":"#Microsoft.Azure.Search.CognitiveServicesByKey",
After the name and description, a skillset has four main properties:
+ `encryptionKey` (optional) specifies an Azure Key Vault and [customer-managed keys](search-security-manage-encryption-keys.md) used to encrypt sensitive content in a skillset definition. Remove this property if you aren't using customer-managed encryption.
-## Add a skills array
+## Insert a skills array
-Within a skillset definition, the skills array specifies which skills to execute. The following example shows two unrelated, [built-in skills](cognitive-search-predefined-skills.md). Notice that each skill has a type, context, inputs, and outputs.
+Inside the skillset definition, the skills array specifies which skills to execute. All skills have a type, context, inputs, and outputs. The following example shows two unrelated, [built-in skills](cognitive-search-predefined-skills.md). Notice that each skill has a type, context, inputs, and outputs.
```json "skills":[
Each skill is unique in terms of its input values and the parameters that it tak
Common parameters include "odata.type", "inputs", and "outputs". The other parameters, namely "categories" and "defaultLanguageCode", are examples of parameters that are specific to Entity Recognition.
-+ **"odata.type"** uniquely identifies each skill. You can find the type in the [skill reference documentation](cognitive-search-predefined-skills.md).
++ **"odata.type"** uniquely identifies each skill. You can find the type in the [skill reference documentation](cognitive-search-predefined-skills.md).
-+ **"context"** is a node in an enrichment tree and it represents the level at which operations take place. All skills have this property. If the "context" field is not explicitly set, the default context is `"/document"`. In the example, the context is the whole document, which means that the entity recognition skill is called once per document.
++ **"context"** is a node in an enrichment tree and it represents the level at which operations take place. All skills have this property. If the "context" field isn't explicitly set, the default context is `"/document"`. In the example, the context is the whole document, which means that the entity recognition skill is called once per document. The context also determines where outputs are produced in the enrichment tree. In this example, the skill returns a property called `"organizations"`, captured as `orgs`, which is added as a child node of `"/document"`. In downstream skills, the path to this node is `"/document/orgs"`. For a particular document, the value of `"/document/orgs"` is an array of organizations extracted from the text (for example: `["Microsoft", "LinkedIn"]`). For more information about path syntax, see [How to reference annotations in a skillset](cognitive-search-concept-annotations-syntax.md).
Outputs exist only during processing. To chain this output to the input of a dow
Outputs from the one skill can conflict with outputs from a different skill. If you have multiple skills that return the same output, use the `"targetName"` for name disambiguation in enrichment node paths.
-Some situations call for referencing each element of an array separately. For example, suppose you want to pass *each element* of `"/document/orgs"` separately to another skill. To do so, add an asterisk to the path: `"/document/orgs/*"`
+Some situations call for referencing each element of an array separately. For example, suppose you want to pass *each element* of `"/document/orgs"` separately to another skill. To do so, add an asterisk to the path: `"/document/orgs/*"`.
-The second skill for sentiment analysis follows the same pattern as the first enricher. It takes `"/document/content"` as input, and returns a sentiment score for each content instance. Since you did not set the "context" field explicitly, the output (mySentiment) is now a child of `"/document"`.
+The second skill for sentiment analysis follows the same pattern as the first enricher. It takes `"/document/content"` as input, and returns a sentiment score for each content instance. Since you didn't set the "context" field explicitly, the output (mySentiment) is now a child of `"/document"`.
```json {
The second skill for sentiment analysis follows the same pattern as the first en
} ```
+## Set context and input source
+
+1. Set the skill's [context property](cognitive-search-working-with-skillsets.md#context). Context determines the level at which operations take place, and where outputs are produced in the enrichment tree. It's usually one of the following examples:
+
+ | Context example | Description |
+ |--|-|
+ | "context": "/document" | (Default) Inputs and outputs are at the document level. |
+ | "context": "/document/pages/*" | Some skills like sentiment analysis perform better over smaller chunks of text. If you're splitting a large content field into pages or sentences, the context should be over each component part. |
+ | "context": "/document/normalized_images/*" | Inputs and outputs are one per image in the parent document. |
+
+1. Set the skill's input source to the node that's providing the data to be processed. For text-based skills, it's a field in the document or row that provides text. For image-based skills, the node providing the input is normalized images.
+
+ | Source example | Description |
+ |--|-|
+ | "source": "/document/content" | For blobs, the source is usually the blob's content property. |
+ | "source": "/document/some-named-field" | For text-based skills, such as entity recognition or key phrase extraction, the origin should be a field that contains sufficient text to be analyzed, such as a "description" or "summary". |
+ | "source": "/document/normalized_images/*" | For image content, the source is image that's been normalized during document cracking. |
+
+If the skill iterates over an array, both context and input source should include `/*` in the correct positions.
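As an illustrative sketch (not from the original article), here's how context and source might be combined for a skill that iterates over pages; the skill type, input name `text`, and output name `score` follow the classic sentiment skill, and `mySentiment` reuses the target name from the earlier example:

```json
{
  "@odata.type": "#Microsoft.Skills.Text.SentimentSkill",
  "context": "/document/pages/*",
  "inputs": [
    { "name": "text", "source": "/document/pages/*" }
  ],
  "outputs": [
    { "name": "score", "targetName": "mySentiment" }
  ]
}
```

Because both `context` and `source` include `/*`, the skill runs once per page and writes one `mySentiment` node under each page in the enrichment tree.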
+ ## Add a custom skill

This section includes an example of a [custom skill](cognitive-search-custom-skill-web-api.md). The URI points to an Azure Function, which in turn invokes the model or transformation that you provide. For more information, see [Define a custom interface](cognitive-search-custom-skill-interface.md).
This screenshot shows the results of an entity recognition skill that detected p
+ Assemble a representative sample of your content in Blob Storage or another supported data source and run the [**Import data** wizard](search-import-data-portal.md).
- The wizard automates several steps that can be challenging the first time around. It defines fields in an index, field mappings in an indexer, and projections in a knowledge store if you are using one. For some skills, such as OCR or image analysis, the wizard adds utility skills that merge the image and text content that was separated during document cracking.
+ The wizard automates several steps that can be challenging the first time around. It defines fields in an index, field mappings in an indexer, and projections in a knowledge store if you're using one. For some skills, such as OCR or image analysis, the wizard adds utility skills that merge the image and text content that was separated during document cracking.
+ Alternatively, you can [import sample Postman collections](https://github.com/Azure-Samples/azure-search-postman-samples) that provide a full articulation of the object definitions required to evaluate a skill.
search Cognitive Search Working With Skillsets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-working-with-skillsets.md
Because a skill's inputs and outputs are reading from and writing to enrichment
## Context
-Each skill has a context, which can be the entire document (`/document`) or a node lower in the tree (`/document/countries/`). A context determines:
+Each skill has a context, which can be the entire document (`/document`) or a node lower in the tree (`/document/countries/*`). A context determines:
+ The number of times the skill executes: once over a single value (once per field, per document), or, for context values of type collection, once for each instance in the collection when you add `/*` to the context.
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-security-rbac.md
Title: Use Azure RBAC
+ Title: Use Azure RBAC roles
description: Use Azure role-based access control (Azure RBAC) for granular permissions on service administration and content tasks.
Previously updated : 11/19/2021 Last updated : 02/03/2022
-# Use Azure role-based access control (Azure RBAC) in Azure Cognitive Search
+# Use Azure role-based access control (Azure RBAC) in Azure Cognitive Search
Azure provides a global [role-based access control (RBAC) authorization system](../role-based-access-control/role-assignments-portal.md) for all services running on the platform. In Cognitive Search, you can:

+ Use generally available roles for service administration.
-+ Use new preview roles for content management (creating and managing indexes and other top-level objects), [**available in preview**](#step-1-preview-sign-up).
++ Use new preview roles for data requests, including creating, loading, and querying indexes.
-> [!NOTE]
-> Search Service Contributor is a "generally available" role that has "preview" capabilities. It's the only role that supports a true hybrid of service and content management tasks, allowing all operations on a given search service. To get the preview capabilities of content management on this role, [**sign up for the preview**](#step-1-preview-sign-up).
-
-A few Azure RBAC scenarios are **not** supported, or not covered in this article:
-
-+ Outbound indexer connections are documented in ["Set up an indexer connection to a data source using a managed identity"](search-howto-managed-identities-data-sources.md). For a search service that has a managed identity assigned to it, you can create roles assignments that allow external data services, such as Azure Blob Storage, read-access on blobs by your trusted search service.
-
-+ User identity access over search results (sometimes referred to as row-level security or document-level security) is not supported. For document-level security, a workaround is to use [security filters](search-security-trimming-for-azure-search.md) to trim results by user identity, removing documents for which the requestor should not have access.
+Per-user access over search results (sometimes referred to as row-level security or document-level security) is not supported. As a workaround, [create security filters](search-security-trimming-for-azure-search.md) that trim results by user identity, removing documents for which the requestor should not have access.
## Built-in roles used in Search
-In Cognitive Search, built-in roles include generally available and preview roles, whose assigned membership consists of Azure Active Directory users and groups.
-
-Role assignments are cumulative and pervasive across all tools and client libraries used to create or manage a search service. These clients include the Azure portal, Management REST API, Azure PowerShell, Azure CLI, and the management client library of Azure SDKs.
-
-Role assignments can be scoped to the search service or to individual top-level resources, like an index. Using the portal, roles can only be defined for the service, but not specific top-level resources. Use PowerShell or the Azure CLI for [granular access to specific objects](#rbac-single-index).
-
-There are no regional, tier, or pricing restrictions for using Azure RBAC on Azure Cognitive Search, but your search service must be in the Azure public cloud.
+Built-in roles include generally available and preview roles.
-| Role | Applies to | Description |
-| - | - | -- |
-| [Owner](../role-based-access-control/built-in-roles.md#owner) | Service ops (generally available) | Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default. |
-| [Contributor](../role-based-access-control/built-in-roles.md#contributor) | Service ops (generally available) | Same level of access as Owner, minus the ability to assign roles or change authorization options. |
-| [Reader](../role-based-access-control/built-in-roles.md#reader) | Service ops (generally available) | Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. </br></br>This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
-| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | Service ops (generally available), and top-level objects (preview) | This role is a combination of Contributor at the service-level, but with full access to all actions on indexes, synonym maps, indexers, data sources, and skillsets through [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role is for search service administrators who need to fully manage the service. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. |
-| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | Documents collection (preview) | Provides full access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. |
-| [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | Documents collection (preview) | Provides read-only access to search indexes on the search service. This role is for apps and users who run queries. |
+| Role | Description and availability |
+| - | - |
+| [Owner](../role-based-access-control/built-in-roles.md#owner) | (Generally available) Full access to the search resource, including the ability to assign Azure roles. Subscription administrators are members by default. |
+| [Contributor](../role-based-access-control/built-in-roles.md#contributor) | (Generally available) Same level of access as Owner, minus the ability to assign roles or change authorization options. |
+| [Reader](../role-based-access-control/built-in-roles.md#reader) | (Generally available) Limited access to partial service information. In the portal, the Reader role can access information in the service Overview page, in the Essentials section and under the Monitoring tab. All other tabs and pages are off limits. </br></br>This role has access to service information: resource group, service status, location, subscription name and ID, tags, URL, pricing tier, replicas, partitions, and search units. This role also has access to service metrics: search latency, percentage of throttled requests, average queries per second. </br></br>There is no access to API keys, role assignments, content (indexes or synonym maps), or content metrics (storage consumed, number of objects). |
+| [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor) | (Generally available and preview) This role is equivalent to Contributor at the service-level, but with full access to all actions on indexes, synonym maps, indexers, data sources, and skillsets through [`Microsoft.Search/searchServices/*`](../role-based-access-control/resource-provider-operations.md#microsoftsearch). This role is for search service administrators who need to fully manage the service. </br></br>Like Contributor, members of this role cannot make or manage role assignments or change authorization options. Your service must be enabled for the preview for data requests. |
+| [Search Index Data Contributor](../role-based-access-control/built-in-roles.md#search-index-data-contributor) | (Preview) Provides full access to content in all indexes on the search service. This role is for developers or index owners who need to import, refresh, or query the documents collection of an index. |
+| [Search Index Data Reader](../role-based-access-control/built-in-roles.md#search-index-data-reader) | (Preview) Provides read-only access to search indexes on the search service. This role is for apps and users who run queries. |
> [!NOTE]
> Azure resources have the concept of [control plane and data plane](../azure-resource-manager/management/control-plane-and-data-plane.md) categories of operations. In Cognitive Search, "control plane" refers to any operation supported in the [Management REST API](/rest/api/searchmanagement/) or equivalent client libraries. The "data plane" refers to operations against the search service endpoint, such as indexing or queries, or any other operation specified in the [Search REST API](/rest/api/searchservice/) or equivalent client libraries. Most roles apply to just one plane. The exception is Search Service Contributor, which supports actions across both.
-## Preview limitations
+<a name="preview-limitations"></a>
-+ The Azure RBAC preview is currently only available in Azure public cloud regions and isn't available in Azure Government, Azure Germany, or Azure China 21Vianet.
+## Preview capabilities and limitations
-+ This preview capability is available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) and should not be rolled into a production environment.
++ Role-based access control for data plane operations, such as creating an index or querying an index, is currently in public preview and available under [supplemental terms of use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-+ If a subscription is migrated to a new tenant, the RBAC preview will need to be re-enabled.
++ There are no regional, tier, or pricing restrictions for using the Azure RBAC preview, but your search service must be in the Azure public cloud. The preview isn't available in Azure Government, Azure Germany, or Azure China 21Vianet.
+
++ If you migrate your Azure subscription to a new tenant, the RBAC preview will need to be re-enabled.

+ Adoption of Azure RBAC might increase the latency of some requests. Each unique combination of service resource (index, indexer, etc.) and service principal used on a request will trigger an authorization check. These authorization checks can add up to 200 milliseconds of latency to a request.

+ In rare cases where requests originate from a high number of different service principals, all targeting different service resources (indexes, indexers, etc.), it's possible for the authorization checks to result in throttling. Throttling would only happen if hundreds of unique combinations of search service resource and service principal were used within a second.
-## Step 1: Preview sign-up
+<a name="step-1-preview-sign-up"></a>
-**Applies to:** Search Index Data Contributor, Search Index Data Reader, Search Service Contributor
+## Sign up for the preview
-Skip this step if you are using generally available roles (Owner, Contributor, Reader) or if you want just the service-level actions of Search Service Contributor.
+**Applies to:** Search Index Data Contributor, Search Index Data Reader, Search Service Contributor
-New built-in preview roles provide permissions over content on the search service. Although built-in roles are always visible in the Azure portal, preview registration is required to make them operational.
+New built-in preview roles grant permissions over content on the search service. Although built-in roles are always visible in the Azure portal, preview registration is required to make them operational.
-1. Open [Azure portal](https://portal.azure.com/) and find your search service
+1. Open [Azure portal](https://portal.azure.com/) and find your search service.
1. On the left-nav pane, select **Keys**.
You can also sign up for the preview using Azure Feature Exposure Control (AFEC)
> [!NOTE]
> Once you add the preview to your subscription, all services in the subscription will be permanently enrolled in the preview. If you don't want RBAC on a given service, you can disable RBAC for data plane operations as described in a later section.
-## Step 2: Preview configuration
+<a name="step-2-preview-configuration"></a>
-**Applies to:** Search Index Data Contributor, Search Index Data Reader, Search Service Contributor
+## Enable RBAC preview for data plane operations
-Skip this step if you are using generally available roles (Owner, Contributor, Reader) or just the service-level actions of Search Service Contributor.
+**Applies to:** Search Index Data Contributor, Search Index Data Reader, Search Service Contributor
In this step, configure your search service to recognize an **authorization** header on data requests that provide an OAuth2 access token.
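As a sketch of that configuration (assuming the 2021-04-01-preview Management REST API's `authOptions` property; verify the exact shape against the Update Service reference), the request body to accept both Azure AD tokens and API keys might look like:

```json
{
  "properties": {
    "authOptions": {
      "aadOrApiKey": {
        "aadAuthFailureMode": "http401WithBearerChallenge"
      }
    }
  }
}
```

With `aadOrApiKey`, requests can authenticate with either an API key or an OAuth2 access token, and `aadAuthFailureMode` controls the response sent when Azure AD authorization fails.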
If you are using Postman or another web testing tool, see the Tip below for help
} ```
-1. [Assign roles](#assign-roles) on the service and verify they are working correctly against the data plane.
+1. [Assign roles](#step-3-assign-roles) on the service and verify they are working correctly against the data plane.
> [!TIP]
> Management REST API calls are authenticated through Azure Active Directory. For guidance on setting up a security principal and a request, see this blog post [Azure REST APIs with Postman (2021)](https://blog.jongallant.com/2021/02/azure-rest-apis-postman-2021/). The previous example was tested using the instructions and Postman collection provided in the blog post.
-<a name="assign-roles"></a>
+<a name="step-3-assign-roles"></a>
-## Step 3: Assign roles
+## Assign roles
-Roles can be assigned using any of the [supported approaches](../role-based-access-control/role-assignments-steps.md) described in Azure role-based access control documentation.
+Role assignments are cumulative and pervasive across all tools and client libraries. You can assign roles using any of the [supported approaches](../role-based-access-control/role-assignments-steps.md) described in Azure role-based access control documentation.
You must be an **Owner** or have [Microsoft.Authorization/roleAssignments/write](/azure/templates/microsoft.authorization/roleassignments) permissions to manage role assignments.

### [**Azure portal**](#tab/roles-portal)
+Role assignments in the portal are service-wide. If you want to [grant permissions to a single index](#rbac-single-index), use PowerShell or the Azure CLI instead.
+
1. Open the [Azure portal](https://ms.portal.azure.com).
1. Navigate to your search service.
You must be an **Owner** or have [Microsoft.Authorization/roleAssignments/write]
+ Owner
+ Contributor
+ Reader
- + Search Service Contributor
+ + Search Service Contributor (preview for data plane requests)
+ Search Index Data Contributor (preview) + Search Index Data Reader (preview)
Recall that you can only scope access to top-level resources, such as indexes, s
-## Step 4: Test
+## Test role assignments
### [**Azure portal**](#tab/test-portal)
var tokenCredential = new ClientSecretCredential(aadTenantId, aadClientId, aadS
SearchClient srchclient = new SearchClient(serviceEndpoint, indexName, tokenCredential); ```
-Additional details on using [AAD authentication with the Azure SDK for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity) are available in the SDK's GitHub repo.
+More details about using [AAD authentication with the Azure SDK for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/identity/Azure.Identity) are available in the SDK's GitHub repo.
> [!NOTE]
> If you get a 403 error, verify that your search service is enrolled in the preview program and that your service is configured for preview role assignments.
Additional details on using [AAD authentication with the Azure SDK for .NET](htt
## Grant access to a single index
-In some scenarios, you may want to scope down an application's access to a single resource, such as an index.
+In some scenarios, you may want to limit an application's access to a single resource, such as an index.
-The portal doesn't currently support granting access to just a single index, but it can be done with [PowerShell](../role-based-access-control/role-assignments-powershell.md) or the [Azure CLI](../role-based-access-control/role-assignments-cli.md).
+The portal doesn't currently support role assignments at this level of granularity, but it can be done with [PowerShell](../role-based-access-control/role-assignments-powershell.md) or the [Azure CLI](../role-based-access-control/role-assignments-cli.md).
In PowerShell, use [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment), providing the Azure user or group name, and the scope of the assignment.
The PowerShell example shows the JSON syntax for creating a custom role.
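A custom role definition of this kind generally follows the shape sketched below. The role name, description, and scope segments are illustrative placeholders, not from the original article; the data action shown matches the one granted by the built-in Search Index Data Reader role:

```json
{
  "Name": "Search Index Reader - Single Index (example)",
  "IsCustom": true,
  "Description": "Hypothetical custom role granting read access to one index.",
  "Actions": [],
  "DataActions": [
    "Microsoft.Search/searchServices/indexes/documents/read"
  ],
  "AssignableScopes": [
    "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Search/searchServices/{service-name}/indexes/{index-name}"
  ]
}
```

Scoping `AssignableScopes` to the index path, rather than the service, is what restricts assignments of this role to that single index.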
## Disable API key authentication
-API keys cannot be deleted, but they can be disabled on your service. If you are using Search Service Contributor, Search Index Data Contributor, and Search Index Data Reader roles and Azure AD authentication, you can disable API keys, causing the search service to refuse all data-related requests that pass an API key in the header for content-related requests.
+API keys cannot be deleted, but they can be disabled on your service. If you are using the Search Service Contributor, Search Index Data Contributor, and Search Index Data Reader preview roles and Azure AD authentication, you can disable API keys, causing the search service to refuse all content-related requests that pass an API key in the header.
To disable [key-based authentication](search-security-api-keys.md), use the Management REST API version 2021-04-01-Preview and send two consecutive requests for [Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update).
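A minimal sketch of the relevant request body, assuming the preview `disableLocalAuth` property of the Microsoft.Search resource provider (confirm against the Update Service reference, which also describes why two consecutive requests are needed):

```json
{
  "properties": {
    "disableLocalAuth": true
  }
}
```

Setting `disableLocalAuth` back to `false` re-enables key-based authentication on the service.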
security End To End https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/end-to-end.md
The [Azure Security Benchmark](../benchmarks/introduction.md) program includes a
| [Azure confidential computing](../../confidential-computing/overview.md) | Allows you to isolate your sensitive data while it's being processed in the cloud. | | [Azure DevOps](/azure/devops/user-guide/what-is-azure-devops) | Your development projects benefit from multiple layers of security and governance technologies, operational practices, and compliance policies when stored in Azure DevOps. | | **Customer Access** | |
-| [Azure AD External Identities](../../active-directory/external-identities/compare-with-b2c.md) | With External Identities in Azure AD, you can allow people outside your organization to access your apps and resources, while letting them sign in using whatever identity they prefer. |
+| [Azure AD External Identities](../../active-directory/external-identities/external-identities-overview.md) | With External Identities in Azure AD, you can allow people outside your organization to access your apps and resources, while letting them sign in using whatever identity they prefer. |
| | You can share your apps and resources with external users via [Azure AD B2B](../../active-directory/external-identities/what-is-b2b.md) collaboration. | | | [Azure AD B2C](../../active-directory-b2c/overview.md) lets you support millions of users and billions of authentications per day, monitoring and automatically handling threats like denial-of-service, password spray, or brute force attacks. |
security Feature Availability https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/security/fundamentals/feature-availability.md
The following table displays the current Defender for Cloud feature availability
| <li> [Microsoft Defender for container registries](../../defender-for-cloud/defender-for-container-registries-introduction.md) <sup>[1](#footnote1)</sup> (deprecated) | GA | GA <sup>[2](#footnote2)</sup> | | <li> [Microsoft Defender for container registries scanning of images in CI/CD workflows](../../defender-for-cloud/defender-for-container-registries-cicd.md) <sup>[3](#footnote3)</sup> | Public Preview | Not Available | | <li> [Microsoft Defender for Kubernetes](../../defender-for-cloud/defender-for-kubernetes-introduction.md) <sup>[4](#footnote4)</sup> (deprecated) | GA | GA |
-| <li> [Defender extension for Azure Arc enabled Kubernetes clusters](../../defender-for-cloud/defender-for-kubernetes-azure-arc.md) <sup>[5](#footnote5)</sup> | Public Preview | Not Available |
+| <li> [Defender extension for Arc-enabled Kubernetes, Servers, or Data services](../../defender-for-cloud/defender-for-kubernetes-azure-arc.md) <sup>[5](#footnote5)</sup> | Public Preview | Not Available |
| <li> [Microsoft Defender for Azure SQL database servers](../../defender-for-cloud/defender-for-sql-introduction.md) | GA | GA | | <li> [Microsoft Defender for SQL servers on machines](../../defender-for-cloud/defender-for-sql-introduction.md) | GA | GA | | <li> [Microsoft Defender for open-source relational databases](../../defender-for-cloud/defender-for-databases-introduction.md) | GA | Not Available |
service-connector How To Integrate Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-connector/how-to-integrate-key-vault.md
This page shows the supported authentication types and client types of Azure Key
### Java - Spring Boot **Service Principal**+ | Default environment variable name | Description | Example value | | | | | | azure.keyvault.uri | Your Key Vault endpoint URL | `"https://{yourKeyVaultName}.vault.azure.net/"` |
site-recovery Vmware Azure Set Up Replication Tutorial Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/site-recovery/vmware-azure-set-up-replication-tutorial-preview.md
Follow these steps to enable replication:
10. Create a new replication policy if needed.
- A default replication policy gets created under the vault with 3 days recovery point retention and 4 hours app consistency frequency. You can create a new replication policy as per your RPO requirements.
+ A default replication policy gets created under the vault with 3 days recovery point retention and app-consistent recovery points disabled by default. You can create a new replication policy or modify the existing one as per your RPO requirements.
- Select **Create new**.
- - Enter the Name.
+ - Enter the **Name**.
- - Enter **Recovery point retention** in days.
+ - Enter a value for **Retention period (in days)**. You can enter any value ranging from 0 to 15.
- - Select **App-consistent snapshot frequency in hours** as per business requirements
+ - Select **Enable app consistency frequency** if you wish, and enter a value for **App-consistent snapshot frequency (in hours)** as per business requirements.
- Select **OK** to save the policy.
storage Point In Time Restore Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/point-in-time-restore-overview.md
The retention period begins a few minutes after you enable point-in-time restore
The retention period for point-in-time restore must be at least one day less than the retention period specified for soft delete. For example, if the soft delete retention period is set to 7 days, then the point-in-time restore retention period may be between 1 and 6 days. > [!IMPORTANT]
-> The time that it takes to restore a set of data is based on the number of write and delete operations made during the restore period. For example, an account with one million objects with 3,000 objects added per day and 1,000 objects deleted per day will require approximately two hours to restore to a point 30 days in the past. A retention period and restoration more than 90 days in the past would not be recommended for an account with this rate of change.
+> Enabling blob versioning as a prerequisite for point-in-time restore may result in multiple versions for a blob. These versions continue to exist after the retention period for point-in-time restore has elapsed. For example, if you set the retention period for point-in-time restore to 7 days, versions older than 7 days continue to exist. To optimize costs by deleting or tiering older versions, use [Lifecycle Management](/azure/storage/blobs/lifecycle-management-overview).
+
+The time that it takes to restore a set of data is based on the number of write and delete operations made during the restore period. For example, an account with one million blobs with 3,000 blobs added per day and 1,000 blobs deleted per day will require approximately two hours to restore to a point 30 days in the past. A retention period and restoration more than 90 days in the past would not be recommended for an account with this rate of change.
### Permissions for point-in-time restore
Point-in-time restore for block blobs has the following limitations and known is
- Only block blobs in a standard general-purpose v2 storage account can be restored as part of a point-in-time restore operation. Append blobs, page blobs, and premium block blobs are not restored. - If you have deleted a container during the retention period, that container will not be restored with the point-in-time restore operation. If you attempt to restore a range of blobs that includes blobs in a deleted container, the point-in-time restore operation will fail. To learn about protecting containers from deletion, see [Soft delete for containers](soft-delete-container-overview.md).
+- If you use permanent delete to purge soft deleted versions for a blob, point-in-time restore may not be able to restore that blob correctly.
- If a blob has moved between the hot and cool tiers in the period between the present moment and the restore point, the blob is restored to its previous tier. Restoring block blobs in the archive tier is not supported. For example, if a blob in the hot tier was moved to the archive tier two days ago, and a restore operation restores to a point three days ago, the blob is not restored to the hot tier. To restore an archived blob, first move it out of the archive tier. For more information, see [Overview of blob rehydration from the archive tier](archive-rehydrate-overview.md). - If an immutability policy is configured, then a restore operation can be initiated, but any blobs that are protected by the immutability policy will not be modified. A restore operation in this case will not result in the restoration of a consistent state to the date and time given. - A block that has been uploaded via [Put Block](/rest/api/storageservices/put-block) or [Put Block from URL](/rest/api/storageservices/put-block-from-url), but not committed via [Put Block List](/rest/api/storageservices/put-block-list), is not part of a blob and so is not restored as part of a restore operation.
There is no charge to enable point-in-time restore. However, enabling point-in-t
Billing for point-in-time restore depends on the amount of data processed to perform the restore operation. The amount of data processed is based on the number of changes that occurred between the restore point and the present moment. For example, assuming a relatively constant rate of change to block blob data in a storage account, a restore operation that goes back in time 1 day would cost 1/10th of a restore that goes back in time 10 days.
-To estimate the cost of a restore operation, review the change feed log to estimate the amount of data that was modified during the restore period. For example, if the retention period for change feed is 30 days, and the size of the change feed is 10 MB, then restoring to a point 10 days earlier would cost approximately one-third of the price listed for an LRS account in that region. Restoring to a point that is 27 days earlier would cost approximately nine-tenths of the price listed.
+In addition to charges for the change feed data processed, point-in-time restore operations also incur charges for any transactions involved in performing the restore.
For more information about pricing for point-in-time restore, see [Block blob pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
Previously updated : 11/22/2021 Last updated : 02/03/2022
This article describes limitations and known issues of SFTP support in Azure Blo
- When a firewall is configured, connections from non-allowed IPs are not rejected as expected. However, if there is a successful connection for an authenticated user, then all data plane operations will be rejected.
-## Supported algorithms
-
-| Host key | Key exchange | Ciphers/encryption | Integrity/MAC | Public key |
-|-|--|--|||
-| rsa-sha2-256 | ecdh-sha2-nistp384 | aes128-gcm@openssh.com | hmac-sha2-256 | ssh-rsa |
-| rsa-sha2-512 | ecdh-sha2-nistp256 | aes256-gcm@openssh.com | hmac-sha2-512 | ecdsa-sha2-nistp256 |
-| ecdsa-sha2-nistp256 | diffie-hellman-group14-sha256 | aes128-cbc| | ecdsa-sha2-nistp384 |
-| ecdsa-sha2-nistp384| diffie-hellman-group16-sha512 | aes256-cbc | |
-||| aes192-cbc ||
-
-SFTP support in Azure Blob Storage currently limits its cryptographic algorithm support in accordance to the Microsoft Security Development Lifecycle (SDL). We strongly recommend that customers utilize SDL approved algorithms to securely access their data. More details can be found [here](/security/sdl/cryptographic-recommendations)
- ## Security - Host keys are published [here](secure-file-transfer-protocol-host-keys.md). During the public preview, host keys will rotate up to once per month.
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Title: Connect to Azure Blob Storage using SFTP (preview) | Microsoft Docs
-description: Learn how to enable SFTP support in your Azure Blob Storage account so that you can directly connect to your Azure Storage account by using an SFTP client.
+description: Learn how to enable SFTP support for your Azure Blob Storage account so that you can directly connect to your Azure Storage account by using an SFTP client.
Previously updated : 11/22/2021 Last updated : 02/03/2022
To learn more about SFTP support in Azure Blob Storage, see [SSH File Transfer P
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
-> To enroll in the preview, complete [this form](https://forms.office.com/r/gZguN0j65Y) AND request to join via 'Preview features' in Azure portal.
+> To enroll in the preview, complete [this form](https://forms.office.com/r/gZguN0j65Y).
## Prerequisites -- A standard general-purpose v2 or premium block blob storage account. You can also enable SFTP as you create the account. For more information on these types of storage accounts, see [Storage account overview](../common/storage-account-overview.md).
+- A standard general-purpose v2 or premium block blob storage account. You can also enable SFTP as you create the account. For more information on these types of storage accounts, see [Storage account overview](../common/storage-account-overview.md).
- The account redundancy option of the storage account is set to either locally-redundant storage (LRS) or zone-redundant storage (ZRS).
To learn more about SFTP support in Azure Blob Storage, see [SSH File Transfer P
Before you can enable SFTP support, you must register the SFTP feature with your subscription.
+### [Portal](#tab/azure-portal)
+ 1. Sign in to the [Azure portal](https://portal.azure.com/). 2. Open the configuration page of your subscription.
Before you can enable SFTP support, you must register the SFTP feature with your
> [!div class="mx-imgBorder"] > ![Preview setting](./media/secure-file-transfer-protocol-support-how-to/preview-features-setting.png)
-4. In the **Preview features** page, select the **SFTP support in Azure Blob Storage** feature, and then select **Register**.
+4. In the **Preview features** page, select the **AllowSFTP** feature, and then select **Register**.
+
+### [PowerShell](#tab/powershell)
+
+1. Open a Windows PowerShell command window.
+
+2. Install **Az.Storage** preview module.
+
+ ```powershell
+ Install-Module -Name Az.Storage -AllowPrerelease
+ ```
+
    For more information about how to install PowerShell modules, see [Install the Azure PowerShell module](/powershell/azure/install-az-ps).
+
+3. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions.
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+4. If your identity is associated with more than one subscription, then set your active subscription.
+
+ ```powershell
+ $context = Get-AzSubscription -SubscriptionId <subscription-id>
+ Set-AzContext $context
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+5. Register the `AllowSFTP` feature by using the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) command.
+
+ ```powershell
+ Register-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName AllowSFTP
+ ```
+
+ > [!NOTE]
+ > The registration process might not complete immediately. Make sure to verify that the feature is registered before using it.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
+
+2. Install the `storage-preview` extension.
+
+ ```azurecli
+ az extension add -n storage-preview
+ ```
+
+2. If you're using Azure CLI locally, run the login command.
+
+ ```azurecli
+ az login
+ ```
+
+ If the CLI can open your default browser, it will do so and load an Azure sign-in page.
+
+ Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the authorization code displayed in your terminal. Then, sign in with your account credentials in the browser.
+
+1. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the storage account.
+
+ ```azurecli
+ az account set --subscription <subscription-id>
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+4. Register the `AllowSFTP` feature by using the [az feature register](/cli/azure/feature#az_feature_register) command.
+
+ ```azurecli
+ az feature register --namespace Microsoft.Storage --name AllowSFTP
+ ```
+
+ > [!NOTE]
+ > The registration process might not complete immediately. Make sure to verify that the feature is registered before using it.
++ ### Verify feature registration Verify that the feature is registered before continuing with the other steps in this article.
+#### [Portal](#tab/azure-portal)
+ 1. Open the **Preview features** page of your subscription.
-2. Locate the **SFTP support in Azure Blob Storage** feature and make sure that **Registered** appears in the **State** column.
+2. Locate the **AllowSFTP** feature and make sure that **Registered** appears in the **State** column.
+
+#### [PowerShell](#tab/powershell)
+
+To verify that the registration is complete, use the [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) command.
+
+```powershell
+Get-AzProviderFeature -ProviderNamespace Microsoft.Storage -FeatureName AllowSFTP
+```
+
+#### [Azure CLI](#tab/azure-cli)
+
+To verify that the registration is complete, use the [az feature](/cli/azure/feature#az_feature_show) command.
+
+```azurecli
+az feature show --namespace Microsoft.Storage --name AllowSFTP
+```
++ ## Enable SFTP support
+### [Portal](#tab/azure-portal)
+ 1. In the [Azure portal](https://portal.azure.com/), navigate to your storage account. 2. Under **Settings**, select **SFTP**.
Verify that the feature is registered before continuing with the other steps in
>[!NOTE] > If no local users appear in the SFTP configuration page, you'll need to add at least one of them. To add local users, see the next section.
+### [PowerShell](#tab/powershell)
+
+To enable SFTP support, call the [Set-AzStorageAccount](/powershell/module/az.storage/set-azstorageaccount) command and set the `-EnableSftp` parameter to true. Remember to replace the values in angle brackets with your own values:
+
+```powershell
+$resourceGroupName = "<resource-group>"
+$storageAccountName = "<storage-account>"
+
+Set-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName -EnableSftp $true
+```
+
+### [Azure CLI](#tab/azure-cli)
+
+To enable SFTP support, call the [az storage account update](/cli/azure/storage/account#az-storage-account-update) command and set the `--enable-sftp` parameter to true. Remember to replace the values in angle brackets with your own values:
+
+```azurecli
+az storage account update -g <resource-group> -n <storage-account> --enable-sftp=true
+```
++++ ## Configure permissions
-Azure Storage does not support shared access signature (SAS), or Azure Active directory (Azure AD) authentication for accessing the SFTP endpoint. Instead, you must use an identity called local user that can be secured with an Azure generated password or a secure shell (SSH) key pair. To grant access to a connecting client, the storage account must have an identity associated with the password or key pair. That identity is called a *local user*.
+Azure Storage doesn't support shared access signature (SAS) or Azure Active Directory (Azure AD) authentication for accessing the SFTP endpoint. Instead, you must use an identity that can be secured with an Azure-generated password or a secure shell (SSH) key pair. To grant access to a connecting client, the storage account must have an identity associated with the password or key pair. That identity is called a *local user*.
+
+In this section, you'll learn how to create a local user, choose an authentication method, and assign permissions for that local user.
-In this section, you'll learn how to create a local user, choose an authentication method, and then assign permissions for that local user.
+To learn more about the SFTP permissions model, see [SFTP Permissions model](secure-file-transfer-protocol-support.md#sftp-permissions-model).
-To learn more about the SFTP permissions model, see [SFTP Permissions model](secure-file-transfer-protocol-support.md#sftp-permissions-model).
+### [Portal](#tab/azure-portal)
1. In the [Azure portal](https://portal.azure.com/), navigate to your storage account.
To learn more about the SFTP permissions model, see [SFTP Permissions model](sec
If you enabled password authentication, then the Azure generated password appears in a dialog box after the local user has been added. > [!IMPORTANT]
- > You can't retrieve this password later, so make sure you copy the password, and then store it in a place where you can find it.
- >
- > If you do lose your password, you can generate a new password.
+ > You can't retrieve this password later, so make sure to copy the password, and then store it in a place where you can find it.
If you chose to generate a new key pair, then you'll be prompted to download the private key of that key pair after the local user has been added.
+### [PowerShell](#tab/powershell)
+
+1. Decide which containers you want to make available to the local user and the types of operations that you want to enable this local user to perform. Create a permission scope object by using the **New-AzStorageLocalUserPermissionScope** command, and set the `-Permission` parameter of that command to one or more letters that correspond to access permission levels. Possible values are Read (r), Write (w), Delete (d), List (l), and Create (c).
+
+ The following example creates a permission scope object that gives read and write permission to the `mycontainer` container.
+
+ ```powershell
+ $permissionScope = New-AzStorageLocalUserPermissionScope -Permission rw -Service blob -ResourceName mycontainer
+ ```
+
+2. Decide which methods of authentication you'd like to associate with this local user. You can associate a password and/or an SSH key.
+
+ > [!IMPORTANT]
+ > While you can enable both forms of authentication, SFTP clients can connect by using only one of them. Multifactor authentication, whereby both a valid password and a valid public and private key pair are required for successful authentication, is not supported.
+
+ If you want to use an SSH key, you'll need the public key of the public/private key pair. You can use existing public keys stored in Azure or use any existing public keys outside of Azure.
+
+ To find existing keys in Azure, see [List keys](../../virtual-machines/ssh-keys-portal.md#list-keys). When SFTP clients connect to Azure Blob Storage, those clients need to provide the private key associated with this public key.
+
+ If you want to use a public key outside of Azure, but you don't yet have one, then see [Generate keys with ssh-keygen](../../virtual-machines/linux/create-ssh-keys-detailed.md#generate-keys-with-ssh-keygen) for guidance about how to create one.
+
+ If you want to use a password to authenticate the local user, you can generate one after the local user is created.
+
+3. If you want to use an SSH key, create a public key object by using the **New-AzStorageLocalUserSshPublicKey** command. Set the `-Key` parameter to a string that contains the key type and public key. In the following example, the key type is `ssh-rsa` and the key is `ssh-rsa a2V5...`.
+
+ ```powershell
+ $sshkey = "ssh-rsa a2V5..."
+ $sshkey = New-AzStorageLocalUserSshPublicKey -Key $sshkey -Description "description for ssh public key"
+ ```
+
+4. Create a local user by using the **Set-AzStorageLocalUser** command. Set the `-PermissionScope` parameter to the permission scope object that you created earlier. If you're using an SSH key, then set the `-SshAuthorizedKey` parameter to the public key object that you created in the previous step. If you want to use a password to authenticate this local user, then set the `-HasSshPassword` parameter to `$true`.
+
+ The following example creates a local user and then prints the key and permission scopes to the console.
+
+ ```powershell
+ $UserName = "mylocalusername"
+ $localuser = Set-AzStorageLocalUser -ResourceGroupName $resourceGroupName -StorageAccountName $storageAccountName -UserName $UserName -HomeDirectory "mycontainer" -SshAuthorizedKey $sshkey -PermissionScope $permissionScope -HasSharedKey $true -HasSshKey $true -HasSshPassword $true
+
+ $localuser
+ $localuser.SshAuthorizedKeys | ft
+ $localuser.PermissionScopes | ft
+ ```
+
+5. If you want to use a password to authenticate the user, you can create a password by using the **New-AzStorageLocalUserSshPassword** command. Set the `-UserName` parameter to the user name.
+
+ The following example generates a password for the user.
+
+ ```powershell
+ $password = New-AzStorageLocalUserSshPassword -ResourceGroupName $resourceGroupName -StorageAccountName $storageAccountName -UserName $UserName
+ $password
+ ```
+ > [!IMPORTANT]
+ > You can't retrieve this password later, so make sure to copy the password, and then store it in a place where you can find it. If you lose this password, you'll have to generate a new one.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. First, decide which methods of authentication you'd like to associate with this local user. You can associate a password and/or an SSH key.
+
+ > [!IMPORTANT]
+ > While you can enable both forms of authentication, SFTP clients can connect by using only one of them. Multifactor authentication, whereby both a valid password and a valid public and private key pair are required for successful authentication, is not supported.
+
+ If you want to use an SSH key, you'll need the public key of the public/private key pair. You can use existing public keys stored in Azure or use any existing public keys outside of Azure.
+
+ To find existing keys in Azure, see [List keys](../../virtual-machines/ssh-keys-portal.md#list-keys). When SFTP clients connect to Azure Blob Storage, those clients need to provide the private key associated with this public key.
+
+ If you want to use a public key outside of Azure, but you don't yet have one, then see [Generate keys with ssh-keygen](../../virtual-machines/linux/create-ssh-keys-detailed.md#generate-keys-with-ssh-keygen) for guidance about how to create one.
+
+ If you want to use a password to authenticate the local user, you can generate one after the local user is created.
+
+2. Create a local user by using the [az storage account local-user create](/cli/azure/storage/account/local-user#az-storage-account-local-user-create) command. Use the parameters of this command to specify the container and permission level. If you want to use an SSH key, then set the `--ssh-authorized-key` parameter to a string that contains the key type and public key, and set the `--has-ssh-key` parameter to `true`. If you want to use a password to authenticate this local user, then set the `--has-ssh-password` parameter to `true`.
+
+ The following example gives a local user named `contosouser` read and write access to a container named `contosocontainer`. An ssh-rsa key with a key value of `ssh-rsa a2V5...` is used for authentication.
+
+ ```azurecli
+ az storage account local-user create --account-name contosoaccount -g contoso-resource-group -n contosouser --home-directory contosocontainer --permission-scope permissions=rw service=blob resource-name=contosocontainer --ssh-authorized-key key="ssh-rsa a2V5..." --has-ssh-key true --has-ssh-password true
+ ```
+3. If you want to use a password to authenticate the user, you can create a password by using the [az storage account local-user regenerate-password](/cli/azure/storage/account/local-user#az-storage-account-local-user-regenerate-password) command. Set the `-n` parameter to the local user name.
+
+ The following example generates a password for the user.
+
+ ```azurecli
+ az storage account local-user regenerate-password --account-name contosoaccount -g contoso-resource-group -n contosouser
+ ```
+ > [!IMPORTANT]
+ > You can't retrieve this password later, so make sure to copy the password, and then store it in a place where you can find it. If you lose this password, you'll have to generate a new one.
+
+
+ ## Connect an SFTP client You can use any SFTP client to securely connect and then transfer files. The following screenshot shows a Windows PowerShell session that uses [Open SSH](/windows-server/administration/openssh/openssh_overview) and password authentication to connect and then upload a file named `logfile.txt`.
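As a sketch of how the connection is addressed (the account and local-user names below are hypothetical placeholders, and the username format `<storage-account>.<local-user>` against the account's blob endpoint is the documented pattern):

```shell
# Hypothetical names: replace with your own storage account and local user.
account="contosoaccount"
user="contosouser"
# The SFTP username takes the form <storage-account>.<local-user>, and the
# host is the storage account's blob endpoint.
echo "sftp ${account}.${user}@${account}.blob.core.windows.net"
```

Running the printed command in a terminal would start the interactive SFTP session, prompting for the local user's password or using the configured private key.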
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/secure-file-transfer-protocol-support.md
Previously updated : 11/15/2021 Last updated : 02/03/2022
cd mydirectory
put logfile.txt ```
+## Supported algorithms
+
+You can use any SFTP client to securely connect and then transfer files. Connecting clients must use one of the algorithms listed below.
+
+| Host key | Key exchange | Ciphers/encryption | Integrity/MAC | Public key |
+|-|--|--|||
+| rsa-sha2-256 | ecdh-sha2-nistp384 | aes128-gcm@openssh.com | hmac-sha2-256 | ssh-rsa |
+| rsa-sha2-512 | ecdh-sha2-nistp256 | aes256-gcm@openssh.com | hmac-sha2-512 | ecdsa-sha2-nistp256 |
+| ecdsa-sha2-nistp256 | diffie-hellman-group14-sha256 | aes128-cbc| | ecdsa-sha2-nistp384 |
+| ecdsa-sha2-nistp384| diffie-hellman-group16-sha512 | aes256-cbc | |
+||| aes192-cbc ||
+
+SFTP support in Azure Blob Storage currently limits its cryptographic algorithm support in accordance with the Microsoft Security Development Lifecycle (SDL). We strongly recommend that customers use SDL-approved algorithms to securely access their data. For more information, see the [Microsoft SDL cryptographic recommendations](/security/sdl/cryptographic-recommendations).
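As a minimal sketch, an OpenSSH client can be pinned to algorithms from the table above via `-o` options (the option names are standard OpenSSH client options; the account and user names are placeholders):

```shell
# Illustrative only: build OpenSSH client options that restrict the session
# to algorithms from the supported-algorithms table.
opts="-o HostKeyAlgorithms=ecdsa-sha2-nistp256 \
-o KexAlgorithms=ecdh-sha2-nistp384 \
-o Ciphers=aes256-gcm@openssh.com \
-o MACs=hmac-sha2-256"
# The full command to connect would then be:
echo "sftp $opts myaccount.myuser@myaccount.blob.core.windows.net"
```

If a client offers only algorithms outside this set, the key exchange fails before authentication begins.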
+ ## Known issues and limitations See the [Known issues](secure-file-transfer-protocol-known-issues.md) article for a complete list of issues and limitations with the current release of SFTP support.
storage File Sync Disaster Recovery Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/file-sync/file-sync-disaster-recovery-best-practices.md
For a robust disaster recovery solution, most customers should consider ZRS. ZRS
### Geo-redundancy
-If you enable either GRS or GZRS on the storage account containing your cloud endpoint, you need to enable it on your Storage Sync Service as well. This ensures all information about your Azure File Sync topology and the data contained in your cloud endpoint is asynchronously copied to the paired secondary region in the event of a disaster.
-
-For resources that are configured with either GRS or GZRS, Microsoft will initiate the failover for your service if the primary region is judged to be permanently unrecoverable or unavailable for a long time. The Azure File Sync service will automatically fail over to the paired region in the event of a region disaster when the Storage Sync Service is using GRS or GZRS. If you are using Azure File Sync configured with GRS or GZRS, there is no action required from you in the event of a disaster.
+If your storage account is configured with either GRS or GZRS replication, Microsoft will initiate the failover of the Storage Sync Service if the primary region is judged to be permanently unrecoverable or unavailable for a long time. There is no action required from you in the event of a disaster.
Although you can manually request a failover of your Storage Sync Service to your GRS or GZRS paired region, we don't recommend doing this outside of large-scale regional outages since the process isn't seamless and may incur extra cost. To initiate the process, open a support ticket and request that both your Azure storage accounts that contain your Azure file share and your Storage Sync Service be failed over.
synapse-analytics Concepts Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/database-designer/concepts-lake-database.md
The lake database in Azure Synapse Analytics enables customers to bring together
## Database designer
-The new database designer gives you the possibility to create a data model for your lake database and add additional information to it. Every Entity and Attribute can be described to provide more information along the model, which not only contains Entities but relationships as well. Particular the lack to model relationships has been a challenge for the interaction on the data lake. This challenge is now addressed with an integrated designer that provides possibilities that have been available in databases but not on the lake. Also the capability to add descriptions and possible demo values to the model allows people who are interacting with it in the future to have information where they need it to get a better understanding about the data.
+The new database designer lets you create a data model for your lake database and add additional information to it. Every Entity and Attribute can be described to provide more information about the model, which contains not only Entities but relationships as well. In particular, the inability to model relationships has been a challenge for the interaction on the data lake. This challenge is now addressed with an integrated designer that provides capabilities that have been available in databases but not on the lake. Also, the capability to add descriptions and possible demo values to the model allows people who interact with it in the future to have the information where they need it to get a better understanding of the data.
## Data storage
virtual-desktop Automatic Migration https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/automatic-migration.md
Before you use the migration module, make sure you have the following things rea
- PowerShell or PowerShell ISE to run the scripts you'll see in this article. The Microsoft.RdInfra.RDPowershell module doesn't work in PowerShell Core. >[!IMPORTANT]
->Migration only creates service objects in the US geography. If you try to migrate your service objects to another geography, it won't work. Also, if you have more than 200 app groups in your Azure Virtual Desktop (classic) deployment, you won't be able to migrate. You'll only be able to migrate if you rebuild your environment to reduce the number of app groups within your Azure Active Directory (Azure AD) tenant.
+>Migration only creates service objects in the US geography. If you try to migrate your service objects to another geography, it won't work. Also, if you have more than 500 app groups in your Azure Virtual Desktop (classic) deployment, you won't be able to migrate. You'll only be able to migrate if you rebuild your environment to reduce the number of app groups within your Azure Active Directory (Azure AD) tenant.
## Prepare your PowerShell environment
virtual-desktop Total Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/remote-app-streaming/total-costs.md
Previously updated : 11/12/2021 Last updated : 02/04/2022
You can use the [Azure Pricing Calculator](https://azure.microsoft.com/pricing/c
3. Enter the values for your deployment into the fields to estimate your monthly Azure bill based on your expected compute, storage, and networking usage. >[!NOTE]
->Currently, the Azure Pricing Calculator Azure Virtual Desktop module can only estimate consumption costs for session host VMs and the aggregate additional storage of any optional Azure Virtual Desktop features requiring storage that you choose to deploy. However, you can add estimates for other Azure Virtual Desktop features in separate modules within the same Azure Pricing calculator page to get a more complete or modular cost estimate.
+>Currently, the Azure Pricing Calculator Azure Virtual Desktop module can only estimate consumption costs for session host VMs and the aggregate additional storage of any optional Azure Virtual Desktop features requiring storage that you choose to deploy. Your total cost may also include egress network traffic to Microsoft 365 services, such as OneDrive for Business or Exchange Online. However, you can add estimates for other Azure Virtual Desktop features in separate modules within the same Azure Pricing calculator page to get a more complete or modular cost estimate.
> >You can add extra Azure Pricing Calculator modules to estimate the cost impact of other components of your deployment, including but not limited to: >
virtual-machines Disks Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/disks-redundancy.md
Title: Redundancy options for Azure managed disks
description: Learn about zone-redundant storage and locally-redundant storage for Azure managed disks. Previously updated : 09/01/2021 Last updated : 02/03/2022
Zone-redundant storage (ZRS) synchronously replicates your Azure managed disk ac
A ZRS disk lets you recover from failures in availability zones. If a zone went down, a ZRS disk can be attached to a virtual machine (VM) in a different zone. ZRS disks can also be shared between VMs for improved availability with clustered or distributed applications like SQL FCI, SAP ASCS/SCS, or GFS2. A shared ZRS disk can be attached to primary and secondary VMs in different zones to take advantage of both ZRS and [availability zones](../availability-zones/az-overview.md). If your primary zone fails, you can quickly fail over to the secondary VM using [SCSI persistent reservation](disks-shared-enable.md#supported-scsi-pr-commands).
+For more information on ZRS disks, see [Zone Redundant Storage (ZRS) option for Azure Disks for high availability](https://youtu.be/RSHmhmdHXcY).
+ ### Billing implications For details see the [Azure pricing page](https://azure.microsoft.com/pricing/details/managed-disks/).
virtual-machines Field Programmable Gate Arrays Attestation https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/field-programmable-gate-arrays-attestation.md
Title: Azure FPGFA Attestation Service
+ Title: Azure FPGA Attestation Service
description: Attestation service for the NP-series VMs.-+ Last updated 04/01/2021-+ # FPGA attestation for Azure NP-Series VMs (Preview)
virtual-machines Vm Applications How To https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/vm-applications-how-to.md
Previously updated : 11/02/2021 Last updated : 02/03/2022
$applicationName = myApp
New-AzGalleryApplication ` -ResourceGroupName $rgName ` -GalleryName $galleryName `
+ -Location "East US" `
-Name $applicationName ` -SupportedOSType Linux ` -Description "Backend Linux application for finance."
New-AzGalleryApplicationVersion `
To add the application to an existing VM, get the application version and use that to get the VM application version ID. Use the ID to add the application to the VM configuration. ```azurepowershell-interactive
-$vm = Get-AzVM -ResourceGroupName $rgname -Name myVM
-$vmapp = Get-AzGalleryApplicationVersion `
- -ResourceGroupName $rgname `
+$vmname = "myVM"
+$vm = Get-AzVM -ResourceGroupName $rgname -Name $vmname
+$appversion = Get-AzGalleryApplicationVersion `
+ -GalleryApplicationName $applicationname `
-GalleryName $galleryname `
- -ApplicationName $applicationname `
- -Version $version
-
-$vm = Add-AzVmGalleryApplication `
- -VM $vm `
- -Id $vmapp.Id
-
-Update-AzVm -ResourceGroupName $rgname -VM $vm
+ -Name $version `
+ -ResourceGroupName $rgname
+$packageid = $appversion.Id
+$app = New-AzVmGalleryApplication -PackageReferenceId $packageid
+Add-AzVmGalleryApplication -VM $vm -GalleryApplication $app
+Update-AzVM -ResourceGroupName $rgname -VM $vm
```
+
+Verify the application succeeded:
+```powershell-interactive
+Get-AzVM -ResourceGroupName $rgname -VMName $vmname -Status
+```
### [REST](#tab/rest2)
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/vm-applications.md
Previously updated : 11/02/2021 Last updated : 02/03/2022
The VM application packages use multiple resource types:
| Resource | Description| |-|| | **Azure compute gallery** | A gallery is a repository for managing and sharing application packages. Users can share the gallery resource and all the child resources will be shared automatically. The gallery name must be unique per subscription. For example, you may have one gallery to store all your OS images and another gallery to store all your VM applications.|
-| **VM application** | This is the definition of your VM application. This is a *logical* resource that stores the common metadata for all the versions under it. For example, you may have an application definition for Apache Tomcat and have multiple versions within it. |
-| **VM Application version** | This is the deployable resource. You can globally replicate your VM application versions to target regions closer to your VM infrastructure. The VM Application Version must be replicated to a region before it may be deployed on a VM in that region. |
+| **VM application** | The definition of your VM application. It is a *logical* resource that stores the common metadata for all the versions under it. For example, you may have an application definition for Apache Tomcat and have multiple versions within it. |
+| **VM Application version** | The deployable resource. You can globally replicate your VM application versions to target regions closer to your VM infrastructure. The VM Application Version must be replicated to a region before it may be deployed on a VM in that region. |
## Limitations
rmdir /S /Q C:\\myapp
During the preview, the VM application extension always returns success regardless of whether any VM app failed while being installed, updated, or removed. The VM Application extension will only report the extension status as failure when there's a problem with the extension or the underlying infrastructure. To know whether a particular VM application was successfully added to the VM instance, check the message of the VMApplication extension.
-To learn more about getting the status of VM extensions, see [Virtual machine extensions and features for Windows](extensions/features-windows.md#view-extension-status).
+To learn more about getting the status of VM extensions, see [Virtual machine extensions and features for Linux](extensions/features-linux.md#view-extension-status) and [Virtual machine extensions and features for Windows](extensions/features-windows.md#view-extension-status).
To get status of VM extensions, use [Get-AzVM](/powershell/module/az.compute/get-azvm):

```azurepowershell-interactive
-Get-AzVM -name <VMSS name> -ResourceGroupName <resource group name> -InstanceView | convertto-json
+Get-AzVM -name <VM name> -ResourceGroupName <resource group name> -Status | convertto-json -Depth 10
```
-To get status of VMSS extensions, use [Get-AzVMSS](/powershell/module/az.compute/get-azvmss):
+To get status of scale set extensions, use [Get-AzVMSS](/powershell/module/az.compute/get-azvmss):
```azurepowershell-interactive
-Get-AzVmss -name <VMSS name> -ResourceGroupName <resource group name> -InstanceView | convertto-json
+Get-AzVmss -name <VMSS name> -ResourceGroupName <resource group name> -Status | convertto-json -Depth 10
```

## Error messages
|--|--|
| Current VM Application Version {name} was deprecated at {date}. | You tried to deploy a VM Application version that has already been deprecated. Try using `latest` instead of specifying a specific version. |
| Current VM Application Version {name} supports OS {OS}, while current OSDisk's OS is {OS}. | You tried to deploy a Linux application to a Windows instance or vice versa. |
-| The maximum number of VM applications (max=5, current={count}) has been exceeded. Use fewer applications and retry the request. | We currently only support five VM applications per VM or VMSS. |
+| The maximum number of VM applications (max=5, current={count}) has been exceeded. Use fewer applications and retry the request. | We currently only support five VM applications per VM or scale set. |
| More than one VMApplication was specified with the same packageReferenceId. | The same application was specified more than once. |
-| Subscription not authorized to access this image. | The subscription does not have access to this application version. |
-| Storage account in the arguments does not exist. | There are no applications for this subscription. |
-| The platform image {image} is not available. Verify that all fields in the storage profile are correct. For more details about storage profile information, please refer to https://aka.ms/storageprofile. | The application does not exist. |
+| Subscription not authorized to access this image. | The subscription doesn't have access to this application version. |
+| Storage account in the arguments doesn't exist. | There are no applications for this subscription. |
+| The platform image {image} is not available. Verify that all fields in the storage profile are correct. For more details about storage profile information, please refer to https://aka.ms/storageprofile. | The application doesn't exist. |
| The gallery image {image} is not available in {region} region. Please contact image owner to replicate to this region, or change your requested region. | The gallery application version exists, but it was not replicated to this region. |
| The SAS is not valid for source uri {uri}. | A `Forbidden` error was received from storage when attempting to retrieve information about the url (either mediaLink or defaultConfigurationLink). |
-| The blob referenced by source uri {uri} does not exist. | The blob provided for the mediaLink or defaultConfigurationLink properties does not exist. |
+| The blob referenced by source uri {uri} doesn't exist. | The blob provided for the mediaLink or defaultConfigurationLink properties doesn't exist. |
| The gallery application version url {url} cannot be accessed due to the following error: remote name not found. Ensure that the blob exists and that it's either publicly accessible or is a SAS url with read privileges. | The most likely case is that a SAS uri with read privileges was not provided. |
| The gallery application version url {url} cannot be accessed due to the following error: {error description}. Ensure that the blob exists and that it's either publicly accessible or is a SAS url with read privileges. | There was an issue with the storage blob provided. The error description will provide more information. |
| Operation {operationName} is not allowed on {application} since it is marked for deletion. You can only retry the Delete operation (or wait for an ongoing one to complete). | Attempt to update an application that's currently being deleted. |
| Gallery image version publishing profile regions {regions} must contain the location of image version {location}. | The list of regions for replication must contain the location where the application version is. |
| Duplicate regions are not allowed in target publishing regions. | The publishing regions may not have duplicates. |
| Gallery application version resources currently do not support encryption. | The encryption property for target regions is not supported for VM Applications. |
-| Entity name does not match the name in the request URL. | The gallery application version specified in the request url does not match the one specified in the request body. |
+| Entity name doesn't match the name in the request URL. | The gallery application version specified in the request url doesn't match the one specified in the request body. |
| The gallery application version name is invalid. The application version name should follow Major(int32).Minor(int32).Patch(int32) format, where int is between 0 and 2,147,483,647 (inclusive). e.g. 1.0.0, 2018.12.1 etc. | The gallery application version must follow the format specified. |
virtual-machines Businessobjects Deployment Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machines/workloads/sap/businessobjects-deployment-guide.md
Azure SQL Database offers the following three purchasing models:
The DTU-based purchasing model offers a blend of compute, memory, and I/O resources in three service tiers, to support light and heavy database workloads. Compute sizes within each tier provide a different mix of these resources, to which you can add additional storage resources. It's best suited for customers who want simple, pre-configured resource options.
- [Service Tiers](../../../azure-sql/database/service-tiers-dtu.md#compare-the-dtu-based-service-tiers) in the DTU-based purchase model is differentiated by a range of compute sizes with a fixed amount of included storage, fixed retention period of backups, and fixed price.
+ [Service Tiers](../../../azure-sql/database/service-tiers-dtu.md#compare-service-tiers) in the DTU-based purchasing model is differentiated by a range of compute sizes with a fixed amount of included storage, fixed retention period of backups, and fixed price.
- Serverless
virtual-network Troubleshoot Nat https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-network/nat-gateway/troubleshoot-nat.md
# Troubleshoot Azure Virtual Network NAT connectivity
-This article helps administrators diagnose and resolve connectivity problems when using Virtual Network NAT.
+This article provides guidance on how to configure your NAT gateway to ensure outbound connectivity. This article also provides mitigating steps to resolve common configuration and connectivity issues with NAT gateway.
-## Problems
+## Common connection issues with NAT gateway
-* [SNAT exhaustion](#snat-exhaustion)
-* [ICMP ping is failing](#icmp-ping-is-failing)
-* [Connectivity failures](#connectivity-failures)
-* [IPv6 coexistence](#ipv6-coexistence)
-* [Connection doesn't originate from NAT gateway IP(s)](#connection-doesnt-originate-from-nat-gateway-ips)
+* [Configuration issues with NAT gateway](#configuration-issues-with-nat-gateway)
+* [Configuration issues with your subnets and virtual network](#configuration-issues-with-subnets-and-virtual-networks-using-nat-gateway)
+* [SNAT exhaustion due to NAT gateway configuration](#snat-exhaustion-due-to-nat-gateway-configuration)
+* [Connection issues with NAT gateway and integrated services](#connection-issues-with-nat-gateway-and-integrated-services)
+* [NAT gateway public IP not being used for outbound traffic](#nat-gateway-public-ip-not-being-used-for-outbound-traffic)
+* [Connection failures in the Azure infrastructure](#connection-failures-in-the-azure-infrastructure)
+* [Connection failures in the path between Azure and the public internet destination](#connection-failures-with-public-internet-transit)
+* [Connection failures at the public internet destination](#connection-failures-at-the-public-internet-destination)
+* [Connection failures due to TCP Resets received](#connection-failures-due-to-tcp-resets-received)
-To resolve these problems, follow the steps in the following section.
+## Configuration issues with NAT gateway
-## Resolution
+### NAT gateway configuration basics
-### SNAT exhaustion
+Check the following configurations to ensure that NAT gateway can be used to direct traffic outbound:
+1. At least one public IP address or one public IP prefix is attached to NAT gateway. At least one public IP address must be associated with the NAT gateway for it to provide outbound connectivity.
+2. At least one subnet is attached to a NAT gateway. You can attach multiple subnets to a NAT gateway for going outbound, but those subnets must exist within the same virtual network. NAT gateway cannot span beyond a single virtual network.
-A single [NAT gateway resource](nat-gateway-resource.md) supports from 64,000 up to 1 million concurrent flows. Each IP address provides 64,000 SNAT ports to the available inventory. You can use up to 16 IP addresses per NAT gateway resource. The SNAT mechanism is described [here](nat-gateway-resource.md#source-network-address-translation) in more detail.
+### How to validate connectivity
-Frequently the root cause of SNAT exhaustion is an anti-pattern for how outbound connectivity is established, managed, or configurable timers changed from their default values. Review this section carefully.
+[Virtual Network NAT gateway](/azure/virtual-network/nat-gateway/nat-overview#vnet-nat-basics) supports IPv4 UDP and TCP protocols. ICMP is not supported and is expected to fail.
-#### Steps
+To validate end-to-end connectivity of NAT gateway, follow these steps:
+1. Validate that your [NAT gateway public IP address is being used](/azure/virtual-network/nat-gateway/tutorial-create-nat-gateway-portal#test-nat-gateway).
+2. Conduct TCP connection tests and UDP-specific application layer tests.
+3. Look at NSG flow logs to analyze outbound traffic flows from NAT gateway.
-1. Check if you have modified the default idle timeout to a value higher than 4 minutes.
-2. Investigate how your application is creating outbound connectivity (for example, code review or packet capture).
-3. Determine if this activity is expected behavior or whether the application is misbehaving. Use [metrics](nat-metrics.md) in Azure Monitor to substantiate your findings. Use "Failed" category for SNAT Connections metric.
-4. Evaluate if appropriate patterns are followed.
-5. Evaluate if SNAT port exhaustion should be mitigated with additional IP addresses assigned to NAT gateway resource.
+Refer to the table below for which tools to use to validate NAT gateway connectivity.
-#### Design patterns
+| Operating system | Generic TCP connection test | TCP application layer test | UDP |
+|||||
+| Linux | nc (generic connection test) | curl (TCP application layer test) | application specific |
+| Windows | [PsPing](/sysinternals/downloads/psping) | PowerShell [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest) | application specific |
+
+To analyze outbound traffic from NAT gateway, use NSG flow logs.
+* To learn more about NSG flow logs, see [NSG flow log overview](/azure/network-watcher/network-watcher-nsg-flow-logging-overview).
+* For guides on how to enable NSG flow logs, see [Enabling NSG flow logs](/azure/network-watcher/network-watcher-nsg-flow-logging-overview#enabling-nsg-flow-logs).
+* For guides on how to read NSG flow logs, see [Working with NSG flow logs](/azure/network-watcher/network-watcher-nsg-flow-logging-overview#working-with-flow-logs).
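The validation steps above can be sketched from a Windows VM behind the NAT gateway. This is a minimal, hypothetical example: `ifconfig.me` stands in for any public IP echo service, and `example.com` for your destination.

```azurepowershell-interactive
# Hypothetical sketch, run from inside a VM behind the NAT gateway.
# Step 1: confirm the egress public IP; ifconfig.me is an example echo service.
$egressIp = (Invoke-WebRequest -Uri 'https://ifconfig.me/ip' -UseBasicParsing).Content
Write-Output "Egress IP: $egressIp"   # should match a NAT gateway public IP

# Step 2: generic TCP connection test against an example destination and port.
Test-NetConnection -ComputerName example.com -Port 443
```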
+
+## Configuration issues with subnets and virtual networks using NAT gateway
+
+### Basic SKU resources cannot exist in the same subnet as NAT gateway
+
+NAT gateway is not compatible with basic resources, such as Basic Load Balancer or Basic Public IP. Basic resources must be placed on a subnet not associated with a NAT Gateway. Basic Load Balancer and Basic Public IP can be upgraded to standard to work with NAT gateway.
+* To upgrade a basic load balancer to standard, see [upgrade from basic public to standard public load balancer](/azure/load-balancer/upgrade-basic-standard).
+* To upgrade a basic public IP to standard, see [upgrade from basic public to standard public IP](/azure/virtual-network/ip-services/public-ip-upgrade-portal).
+
+### NAT gateway cannot be attached to a gateway subnet
+
+NAT gateway cannot be deployed in a gateway subnet. A VPN gateway uses the gateway subnet for site-to-site VPN connections between Azure virtual networks and on-premises networks, or between two Azure virtual networks. See [VPN gateway overview](/azure/vpn-gateway/vpn-gateway-about-vpngateways) to learn more about how gateway subnets are used.
+
+### IPv6 coexistence
+
+[Virtual Network NAT gateway](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT gateway cannot be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. NAT gateway can be deployed on a dual stack subnet, but will still only use IPv4 Public IP addresses for directing outbound traffic. Deploy NAT gateway on a dual stack subnet when you need IPv6 resources to exist in the same subnet as IPv4 resources.
+
+## SNAT exhaustion due to NAT gateway configuration
+
+Common SNAT exhaustion issues with NAT gateway typically have to do with the configurations on the NAT gateway. Common SNAT exhaustion issues include:
+* NAT gateway idle timeout timers being set higher than their default value of 4 minutes.
+* Outbound connectivity on NAT gateway not scaled out enough.
+
+### Idle timeout timers have been changed to a higher value than their default values
+
+NAT gateway resources have a default TCP idle timeout of 4 minutes. If this setting is changed to a higher value, NAT gateway will hold on to flows longer and can cause [unnecessary pressure on SNAT port inventory](nat-gateway-resource.md#timers).
-Always take advantage of connection reuse and connection pooling whenever possible. These patterns will avoid resource exhaustion problems and result in predictable behavior. Primitives for these patterns can be found in many development libraries and frameworks.
+UDP flows (for example DNS lookups) allocate SNAT ports for the duration of the idle timeout. The longer the idle timeout, the higher the pressure on SNAT ports.
-_**Solution:**_ Use appropriate patterns and best practices
+Check the following [NAT gateway metrics](nat-metrics.md) in Azure Monitor to determine if SNAT port exhaustion is happening:
-- NAT gateway resources have a default TCP idle timeout of 4 minutes. If this setting is changed to a higher value, NAT will hold on to flows longer and can cause [unnecessary pressure on SNAT port inventory](nat-gateway-resource.md#timers).
-- Atomic requests (one request per connection) are a poor design choice. Such anti-pattern limits scale, reduces performance, and decreases reliability. Instead, reuse HTTP/S connections to reduce the numbers of connections and associated SNAT ports. The application scale will increase and performance improve due to reduced handshakes, overhead, and cryptographic operation cost when using TLS.
-- DNS can introduce many individual flows at volume when the client is not caching the DNS resolvers result. Use caching.
-- UDP flows (for example DNS lookups) allocate SNAT ports for the duration of the idle timeout. The longer the idle timeout, the higher the pressure on SNAT ports. Use short idle timeout (for example 4 minutes).
-- Use connection pools to shape your connection volume.
-- Never silently abandon a TCP flow and rely on TCP timers to clean up flow. If you don't let TCP explicitly close the connection, state remains allocated at intermediate systems and endpoints and makes SNAT ports unavailable for other connections. This pattern can trigger application failures and SNAT exhaustion.
-- Don't change OS-level TCP close related timer values without expert knowledge of impact. While the TCP stack will recover, your application performance can be negatively impacted when the endpoints of a connection have mismatched expectations. The desire to change timers is usually a sign of an underlying design problem. Review following recommendations.
+*Total SNAT Connection*
+* "Sum" aggregation shows high connection volume.
+* "Failed" connection state shows transient or persistent failures over time.
-SNAT exhaustion can also be amplified with other anti-patterns in the underlying application. Review these additional patterns and best practices to improve the scale and reliability of your service.
+*Dropped Packets*
+* "Sum" aggregation shows packets dropping consistent with high connection volume.
-- Explore impact of reducing [TCP idle timeout](nat-gateway-resource.md#timers) to lower values including default idle timeout of 4 minutes to free up SNAT port inventory earlier.
-- Consider [asynchronous polling patterns](/azure/architecture/patterns/async-request-reply) for long-running operations to free up connection resources for other operations.
-- Long-lived flows (for example reused TCP connections) should use TCP keepalives or application layer keepalives to avoid intermediate systems timing out. Increasing the idle timeout is a last resort and may not resolve the root cause. A long timeout can cause low rate failures when timeout expires and introduce delay and unnecessary failures.
-- Graceful [retry patterns](/azure/architecture/patterns/retry) should be used to avoid aggressive retries/bursts during transient failure or failure recovery.
-Creating a new TCP connection for every HTTP operation (also known as "atomic connections") is an anti-pattern. Atomic connections will prevent your application from scaling well and waste resources. Always pipeline multiple operations into the same connection. Your application will benefit in transaction speed and resource costs. When your application uses transport layer encryption (for example TLS), there's a significant cost associated with the processing of new connections. Review [Azure Cloud Design Patterns](/azure/architecture/patterns/) for additional best practice patterns.
+**Mitigation**
-#### Additional possible mitigations
+Explore the impact of reducing TCP idle timeout to lower values including default idle timeout of 4 minutes to free up SNAT port inventory earlier.
-_**Solution:**_ Scale outbound connectivity as follows:
+Consider [asynchronous polling patterns](/azure/architecture/patterns/async-request-reply) for long-running operations to free up connection resources for other operations.
+
+Long-lived flows (for example reused TCP connections) should use TCP keepalives or application layer keepalives to avoid intermediate systems timing out. Increasing the idle timeout is a last resort and may not resolve the root cause. A long timeout can cause low rate failures when timeout expires and introduce delay and unnecessary failures.
+
+### Outbound connectivity not scaled out enough
+
+NAT gateway provides 64,000 SNAT ports to a subnet's resources for each public IP address attached to it. If outbound connections are dropping because SNAT ports are being exhausted, then NAT gateway may not be scaled out enough to handle the workload. More public IP addresses may need to be added to NAT gateway in order to provide more SNAT ports for outbound connectivity.
+
+The table below describes two common scenarios in which outbound connectivity may not be scaled out enough and how to validate and mitigate these issues:
| Scenario | Evidence | Mitigation |
||||
-| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | "Failed" category for SNAT Connections [metric](nat-metrics.md) in Azure Monitor shows transient or persistent failures over time and high connection volume. | Determine if you can add additional public IP address resources or public IP prefix resources. This addition will allow for up to 16 IP addresses in total to your NAT gateway. This addition will provide more inventory for available SNAT ports (64,000 per IP address) and allow you to scale your scenario further.|
-| You've already given 16 IP addresses and still are experiencing SNAT port exhaustion. | Attempt to add additional IP address fails. Total number of IP addresses from public IP address resources or public IP prefix resources exceeds a total of 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. Reevaluate your design pattern(s) to optimize based on preceding [guidance](#design-patterns). |
+| You're experiencing contention for SNAT ports and SNAT port exhaustion during periods of high usage. | You run the following [metrics](nat-metrics.md) in Azure Monitor: **Total SNAT Connection**: "Sum" aggregation shows high connection volume. "Failed" connection state shows transient or persistent failures over time. **Dropped Packets**: "Sum" aggregation shows packets dropping consistent with high connection volume. | Determine if you can add more public IP addresses or public IP prefixes. This addition will allow for up to 16 IP addresses in total to your NAT gateway. This addition will provide more inventory for available SNAT ports (64,000 per IP address) and allow you to scale your scenario further.|
+| You've already given 16 IP addresses and still are experiencing SNAT port exhaustion. | Attempt to add more IP addresses fails. Total number of IP addresses from public IP address resources or public IP prefix resources exceeds a total of 16. | Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet. |
>[!NOTE]
>It is important to understand why SNAT exhaustion occurs. Make sure you are using the right patterns for scalable and reliable scenarios. Adding more SNAT ports to a scenario without understanding the cause of the demand should be a last resort. If you do not understand why your scenario is applying pressure on SNAT port inventory, adding more SNAT ports to the inventory by adding more IP addresses will only delay the same exhaustion failure as your application scales. You may be masking other inefficiencies and anti-patterns.
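If scaling out is the right fix, attaching another public IP is one option. The sketch below is a hypothetical illustration with example resource names (`myResourceGroup`, `myNatGateway`, `myNatIp2`); verify in your Az module version whether `Set-AzNatGateway -PublicIpAddress` replaces or appends the existing IP list before running it.

```azurepowershell-interactive
# Hypothetical sketch: scale out SNAT inventory by attaching another Standard
# public IP to an existing NAT gateway. All names and the region are examples.
$newIp = New-AzPublicIpAddress -ResourceGroupName myResourceGroup -Name myNatIp2 `
    -Location eastus2 -Sku Standard -AllocationMethod Static
Set-AzNatGateway -ResourceGroupName myResourceGroup -Name myNatGateway -PublicIpAddress $newIp
# Each attached public IP contributes 64,000 SNAT ports, up to 16 IPs in total.
```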
-### ICMP ping is failing
+## Connection issues with NAT gateway and integrated services
-[Virtual Network NAT](nat-overview.md) supports IPv4 UDP and TCP protocols. ICMP isn't supported and expected to fail.
+### Azure App Service regional VNet integration turned off
-_**Solution:**_ Instead, use TCP connection tests (for example "TCP ping") and UDP-specific application layer tests to validate end to end connectivity.
+NAT gateway can be used with Azure app services to allow applications to make outbound calls from a virtual network. To use this integration between Azure app services and NAT gateway, regional virtual network integration must be enabled. See [how regional virtual network integration works](/azure/app-service/overview-vnet-integration#how-regional-virtual-network-integration-works) to learn more.
-The following table can be used a starting point for which tools to use to start tests.
+To use NAT gateway with Azure App services, follow these steps:
+1. Ensure that your application(s) are integrated with a subnet.
+2. Ensure that regional virtual network integration is enabled for the subnet that your apps will use for going outbound by turning on **Route All**.
+3. Create a NAT gateway resource.
+4. Create a new public IP address or attach an existing public IP address in your network to NAT gateway.
+5. Assign NAT gateway to the same subnet being used for VNet integration with your application(s).
-| Operating system | Generic TCP connection test | TCP application layer test | UDP |
-|||||
-| Linux | nc (generic connection test) | curl (TCP application layer test) | application specific |
-| Windows | [PsPing](/sysinternals/downloads/psping) | PowerShell [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest) | application specific |
+To see step-by-step instructions on how to configure NAT gateway with VNet integration, see [Configuring NAT gateway integration](/azure/app-service/networking/nat-gateway-integration#configuring-nat-gateway-integration).
-### Connectivity failures
+A couple of important notes about the NAT gateway and Azure App Services integration:
+* VNet integration does not provide inbound private access to your app from the virtual network.
+* Because of the nature of how VNet integration operates, the traffic from virtual network integration does not show up in Azure Network Watcher or NSG flow logs.
-Connectivity issues with [Virtual Network NAT](nat-overview.md) can be caused by several different issues:
+### Port 25 cannot be used for regional VNet integration with NAT gateway
-* permanent failures due to configuration mistakes.
-* transient or persistent [SNAT exhaustion](#snat-exhaustion) of the NAT gateway,
-* transient failures in the Azure infrastructure,
-* transient failures in the path between Azure and the public Internet destination,
-* transient or persistent failures at the public Internet destination.
+Port 25 is an SMTP port that is used to send email. Azure app services regional VNet integration cannot use port 25 by design. In a scenario where regional VNet integration is enabled for NAT gateway to connect an application to an email SMTP server, traffic will be blocked on port 25 despite NAT gateway working with all other ports for outbound traffic. This block on port 25 cannot be removed.
-Use tools like the following to validation connectivity. [ICMP ping isn't supported](#icmp-ping-is-failing).
+**Workaround:**
+* Set up port forwarding to a Windows VM to route traffic to Port 25.
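One hedged sketch of that workaround uses the built-in Windows `netsh` portproxy on the forwarding VM. The ports and addresses below are placeholders, not values from this article.

```azurepowershell-interactive
# Hypothetical sketch, run on the forwarding Windows VM: relay TCP traffic
# arriving on an alternate port to an SMTP server listening on port 25.
netsh interface portproxy add v4tov4 `
    listenport=2525 listenaddress=0.0.0.0 `
    connectport=25 connectaddress=203.0.113.10
netsh interface portproxy show v4tov4   # verify the forwarding rule
```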
-| Operating system | Generic TCP connection test | TCP application layer test | UDP |
-|||||
-| Linux | nc (generic connection test) | curl (TCP application layer test) | application specific |
-| Windows | [PsPing](/sysinternals/downloads/psping) | PowerShell [Invoke-WebRequest](/powershell/module/microsoft.powershell.utility/invoke-webrequest) | application specific |
+## NAT gateway public IP not being used for outbound traffic
-#### Configuration
+### VMs hold on to prior SNAT IP with active connection after NAT gateway added to a VNet
-Check your configuration:
-1. Does the NAT gateway resource have at least one public IP resource or one public IP prefix resource? You must at least have one IP address associated with the NAT gateway for it to be able to provide outbound connectivity.
-2. Is the virtual network's subnet configured to use the NAT gateway?
-3. Are you using UDR (user-defined route) and are you overriding the destination? NAT gateway resources become the default route (0/0) on configured subnets.
+[Virtual Network NAT gateway](nat-overview.md) supersedes outbound connectivity for a subnet. When transitioning from default SNAT or load balancer outbound SNAT to using NAT gateway, new connections will immediately begin using the IP address(es) associated with the NAT gateway resource. However, if a virtual machine still has an established connection during the switch to NAT gateway, the connection will continue using the old SNAT IP address that was assigned when the connection was established.
-#### SNAT exhaustion
+Test and resolve issues with VMs holding on to old SNAT IP addresses by:
+1. Make sure you are really establishing a new connection and that connections are not being reused due to having already existed in the OS or because the browser was caching the connections in a connection pool. For example, when using curl in PowerShell, make sure to specify the -DisableKeepalive parameter to force a new connection. If you are using a browser, connections may also be pooled.
+2. It is not necessary to reboot a virtual machine in a subnet configured to NAT gateway. However, if a virtual machine is rebooted, the connection state is flushed. When the connection state has been flushed, all connections will begin using the NAT gateway resource's IP address(es). However, this is a side effect of the virtual machine being rebooted and not an indicator that a reboot is required.
-Review section on [SNAT exhaustion](#snat-exhaustion) in this article.
+If you are still having trouble, open a support case for further troubleshooting.
-#### Azure infrastructure
+### UDR supersedes NAT gateway for going outbound
-Azure monitors and operates its infrastructure with great care. Transient failures can occur, there's no guarantee that transmissions are lossless. Use design patterns that allow for SYN retransmissions for TCP applications. Use connection timeouts large enough to permit TCP SYN retransmission to reduce transient impacts caused by a lost SYN packet.
+When NAT gateway is attached to a subnet also associated with a user defined route (UDR) for routing traffic to the internet, the UDR will take precedence over the NAT gateway. The internet traffic will flow from the IP configured for the UDR rather than from the NAT gateway public IP address(es).
-_**Solution:**_
+The order of precedence for internet routing configurations is as follows:
-* Check for [SNAT exhaustion](#snat-exhaustion).
-* The configuration parameter in a TCP stack that controls the SYN retransmission behavior is called RTO ([Retransmission Time-Out](https://tools.ietf.org/html/rfc793)). The RTO value is adjustable but typically 1 second or higher by default with exponential back-off. If your application's connection time-out is too short (for example 1 second), you may see sporadic connection timeouts. Increase the application connection time-out.
-* If you observe longer, unexpected timeouts with default application behaviors, open a support case for further troubleshooting.
-
-We don't recommend artificially reducing the TCP connection timeout or tuning the RTO parameter.
+UDR >> NAT gateway >> default system
-#### Public Internet transit
+Test and resolve issues with a UDR configured to your virtual network by:
+1. [Testing that the NAT gateway public IP](/azure/virtual-network/nat-gateway/tutorial-create-nat-gateway-portal#test-nat-gateway) is used for outbound traffic. If a different IP is being used, it could be because of a UDR. Follow the remaining steps on how to check for and remove UDRs.
+2. Check for UDRs in the virtual network's route table. Refer to [view route tables](/azure/virtual-network/manage-route-table#view-route-tables).
+3. Remove the UDR from the route table by following [create, change, or delete an Azure route table](/azure/virtual-network/manage-route-table#change-a-route-table).
-The chances of transient failures increases with a longer path to the destination and more intermediate systems. It's expected that transient failures can increase in frequency over [Azure infrastructure](#azure-infrastructure).
+Once the UDR is removed from the routing table, the NAT gateway public IP should now take precedence in routing outbound traffic to the internet.
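The check-and-remove steps above might look like the following sketch, where the route table and route names are hypothetical examples.

```azurepowershell-interactive
# Hypothetical sketch: inspect a route table for a 0.0.0.0/0 user-defined route
# and remove it so NAT gateway regains precedence. Names are examples only.
$rt = Get-AzRouteTable -ResourceGroupName myResourceGroup -Name myRouteTable
$rt.Routes | Format-Table Name, AddressPrefix, NextHopType
Remove-AzRouteConfig -RouteTable $rt -Name myDefaultInternetRoute | Set-AzRouteTable
```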
-Follow the same guidance as preceding [Azure infrastructure](#azure-infrastructure) section.
+## Connection failures in the Azure infrastructure
-#### Internet endpoint
+Azure monitors and operates its infrastructure with great care. However, transient failures can still occur; there's no guarantee that transmissions are lossless. Use design patterns that allow for SYN retransmissions for TCP applications. Use connection timeouts large enough to permit TCP SYN retransmission to reduce transient impacts caused by a lost SYN packet.
-The previous sections apply, along with the Internet endpoint that communication is established with. Other factors that can impact connectivity success are:
+**What to check for:**
-* traffic management on destination side, including
-- API rate limiting imposed by the destination side
-- Volumetric DDoS mitigations or transport layer traffic shaping
-* firewall or other components at the destination
-
-Usually packet captures at the source and the destination (if available) are required to determine what is taking place.
-
-_**Solution:**_
+* Check for [SNAT exhaustion](#snat-exhaustion-due-to-nat-gateway-configuration).
+* The configuration parameter in a TCP stack that controls the SYN retransmission behavior is called RTO ([Retransmission Time-Out](https://tools.ietf.org/html/rfc793)). The RTO value is adjustable but typically 1 second or higher by default with exponential back-off. If your application's connection time-out is too short (for example 1 second), you may see sporadic connection timeouts. Increase the application connection time-out.
+* If you observe longer, unexpected timeouts with default application behaviors, open a support case for further troubleshooting.
-* Check for [SNAT exhaustion](#snat-exhaustion).
-* Validate connectivity to an endpoint in the same region or elsewhere for comparison.
-* If you're creating high volume or transaction rate testing, explore if reducing the rate reduces the occurrence of failures.
-* If changing rate impacts the rate of failures, check if API rate limits or other constraints on the destination side might have been reached.
-* If your investigation is inconclusive, open a support case for further troubleshooting.
+We don't recommend artificially reducing the TCP connection timeout or tuning the RTO parameter.
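The interaction between the application connect timeout and the RTO back-off above can be sketched numerically. This is an illustrative model only: the 1-second initial RTO and doubling back-off are the textbook defaults mentioned above, not a guarantee of any particular stack's behavior.

```python
# Illustrative sketch: cumulative time at which each TCP SYN retransmission
# fires, assuming a 1-second initial RTO with exponential back-off.
# Real TCP stacks clamp, randomize, and tune these values.

def syn_retransmit_schedule(initial_rto=1.0, retries=5):
    """Return cumulative seconds elapsed when each SYN retransmission fires."""
    elapsed, rto, schedule = 0.0, initial_rto, []
    for _ in range(retries):
        elapsed += rto          # wait one RTO before retransmitting
        schedule.append(elapsed)
        rto *= 2                # exponential back-off
    return schedule

# With a 1-second application connect timeout, even the first retransmission
# (fired at t=1 s) arrives too late: one lost SYN becomes a connect failure.
print(syn_retransmit_schedule())  # [1.0, 3.0, 7.0, 15.0, 31.0]
```

In this model a connect timeout of roughly 31 seconds tolerates several lost SYNs, which is why the guidance is to raise the application timeout rather than shrink the RTO.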
-#### TCP Resets received
+## Connection failures with public internet transit
-The NAT gateway generates TCP resets on the source VM for traffic that isn't recognized as in progress.
+The chances of transient failures increase with a longer path to the destination and more intermediate systems. Transient failures can be more frequent than over the [Azure infrastructure](#connection-failures-in-the-azure-infrastructure).
-One possible reason is the TCP connection has idle timed out. You can adjust the idle timeout from 4 minutes to up to 120 minutes.
+Follow the same guidance as preceding [Azure infrastructure](#connection-failures-in-the-azure-infrastructure) section.
-TCP Resets aren't generated on the public side of NAT gateway resources. TCP resets on the destination side are generated by the source VM, not the NAT gateway resource.
+## Connection failures at the public internet destination
-_**Solution:**_
+The previous sections apply, along with the internet endpoint that communication is established with. Other factors that can impact connectivity success are:
-* Review [design patterns](#design-patterns) recommendations.
-* Open a support case for further troubleshooting if necessary.
+* Traffic management on the destination side, including:
+- API rate limiting imposed by the destination side.
+- Volumetric DDoS mitigations or transport layer traffic shaping.
+* Firewall or other components at the destination.
-### IPv6 coexistence
+Use NAT gateway [metrics](nat-metrics.md) in Azure Monitor to diagnose connection issues:
+* Look at packet count at the source and the destination (if available) to determine how many connection attempts were made.
+* Look at dropped packets to see how many packets were dropped by NAT gateway.
-[Virtual Network NAT](nat-overview.md) supports IPv4 UDP and TCP protocols. NAT cannot be associated to an IPv6 Public IP address or IPv6 Public IP Prefix. However, NAT can be deployed on a dual stack subnet.
+What else to check for:
+* Check for [SNAT exhaustion](#snat-exhaustion-due-to-nat-gateway-configuration).
+* Validate connectivity to an endpoint in the same region or elsewhere for comparison.
+* If you're running high-volume or high-transaction-rate testing, explore whether reducing the rate reduces the occurrence of failures.
+* If changing rate impacts the rate of failures, check if API rate limits or other constraints on the destination side might have been reached.
-_**Solution:**_ Deploy NAT gateway on a dual stack subnet.
+If your investigation is inconclusive, open a support case for further troubleshooting.
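When checking for SNAT exhaustion, a back-of-the-envelope capacity estimate helps. The sketch below assumes the commonly documented figure of 64,512 SNAT ports per public IP attached to a NAT gateway; verify that number against current Azure documentation before relying on it.

```python
# Rough SNAT capacity estimate for a NAT gateway.
PORTS_PER_PUBLIC_IP = 64_512  # assumed value; confirm in current Azure docs

def snat_ports_available(public_ip_count: int) -> int:
    """Total SNAT ports across all public IPs attached to the gateway."""
    return public_ip_count * PORTS_PER_PUBLIC_IP

def risks_exhaustion(public_ip_count: int, flows_to_same_destination: int) -> bool:
    # SNAT ports are consumed per unique destination endpoint, so many
    # concurrent flows to one destination is the classic exhaustion pattern.
    return flows_to_same_destination > snat_ports_available(public_ip_count)

print(snat_ports_available(2))       # 129024
print(risks_exhaustion(1, 70_000))   # True: one public IP can't cover 70k flows
```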
-### Connection doesn't originate from NAT gateway IP(s)
+## Connection failures due to TCP Resets received
-You configure NAT gateway, IP address(es) to use, and which subnet should use a NAT gateway resource. However, connections from virtual machine instances that existed before the NAT gateway was deployed don't use the IP address(es). They appear to be using IP address(es) not used with the NAT gateway resource.
+The NAT gateway generates TCP resets on the source VM for traffic that isn't recognized as in progress.
-_**Solution:**_
+One possible reason is the TCP connection has idle timed out. You can adjust the idle timeout from 4 minutes to up to 120 minutes.
-[Virtual Network NAT](nat-overview.md) replaces the outbound connectivity for the subnet it is configured on. When transitioning from default SNAT or load balancer outbound SNAT to using NAT gateways, new connections will immediately begin using the IP address(es) associated with the NAT gateway resource. However, if a virtual machine still has an established connection during the switch to NAT gateway resource, the connection will continue using the old SNAT IP address that was assigned when the connection was established. Make sure you are really establishing a new connection rather than reusing a connection that already existed because the OS or the browser was caching the connections in a connection pool. For example, when using _curl_ in PowerShell, make sure to specify the _-DisableKeepalive_ parameter to force a new connection. If you're using a browser, connections may also be pooled.
+TCP Resets aren't generated on the public side of NAT gateway resources. TCP resets on the destination side are generated by the source VM, not the NAT gateway resource.
-It's not necessary to reboot a virtual machine configuring a subnet for a NAT gateway resource. However, if a virtual machine is rebooted, the connection state is flushed. When the connection state has been flushed, all connections will begin using the NAT gateway resource's IP address(es). However, this is a side effect of the virtual machine being rebooted and not an indicator that a reboot is required.
+Keep in mind that a long idle timeout can introduce delay and cause low-rate, unnecessary connection failures when the timeout expires.
-If you are still having trouble, open a support case for further troubleshooting.
+Open a support case for further troubleshooting if necessary.
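If raising the NAT gateway idle timeout isn't desirable (long timeouts have the drawbacks noted above), the usual alternative is for the application to keep long-lived connections active. A minimal sketch using TCP keepalives in Python follows; the `TCP_KEEPIDLE`/`TCP_KEEPINTVL`/`TCP_KEEPCNT` option names are Linux-specific, hence the `hasattr` guards.

```python
import socket

def enable_keepalive(sock, idle=180, interval=30, probes=3):
    """Send keepalive probes well inside the NAT gateway's 4-minute default
    idle timeout so the connection is never considered idle."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    if hasattr(socket, "TCP_KEEPIDLE"):    # Linux-specific option names
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)
    return sock

s = enable_keepalive(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # nonzero when enabled
s.close()
```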
## Next steps
virtual-wan Nat Rules Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/nat-rules-vpn-gateway.md
In order to use NAT, VPN devices need to use any-to-any (wildcard) traffic selec
You can configure and view NAT rules on your VPN gateway settings at any time.
+## <a name="type"></a>NAT type: static & dynamic
+
+NAT on a gateway device translates the source and/or destination IP addresses, based on the NAT policies or rules, to avoid address conflicts. There are different types of NAT translation rules:
+
+* **Static NAT**: Static rules define a fixed address mapping relationship. For a given IP address, it will be mapped to the same address from the target pool. The mappings for static rules are stateless because the mapping is fixed. For example, a NAT rule created to map 10.0.0.0/24 to 192.168.0.0/24 will have a fixed 1-1 mapping. 10.0.0.0 is translated to 192.168.0.0, 10.0.0.1 is translated to 192.168.0.1, and so on.
+
+* **Dynamic NAT**: For dynamic NAT, an IP address can be translated to different target IP addresses and TCP/UDP ports based on availability, or with a different combination of IP address and TCP/UDP port. The latter is also called NAPT, Network Address and Port Translation. Dynamic rules result in stateful translation mappings depending on the traffic flows at any given time. Because of the ever-changing IP/port combinations, flows that use dynamic NAT rules have to be initiated from the **InternalMapping** (pre-NAT) IP range. The dynamic mapping is released once the flow is disconnected or gracefully terminated.
+
+Another consideration is the address pool size for translation. If the target address pool size is the same as the original address pool, use a static NAT rule to define a 1:1 mapping in sequential order. If the target address pool is smaller than the original address pool, use a dynamic NAT rule to accommodate the differences.
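The fixed 1:1 behavior of a static rule can be sketched with Python's standard `ipaddress` module: the host offset inside the internal prefix is preserved in the external prefix. This illustrates the mapping semantics described above, not how the gateway implements it.

```python
import ipaddress

def static_nat(addr: str, internal: str, external: str) -> str:
    """Map addr from the internal prefix to the same offset in the external
    prefix, mirroring the 1:1 static NAT mapping described above."""
    src = ipaddress.ip_network(internal)
    dst = ipaddress.ip_network(external)
    assert src.prefixlen == dst.prefixlen, "static NAT needs equal-size pools"
    offset = int(ipaddress.ip_address(addr)) - int(src.network_address)
    return str(ipaddress.ip_address(int(dst.network_address) + offset))

print(static_nat("10.0.0.0", "10.0.0.0/24", "192.168.0.0/24"))  # 192.168.0.0
print(static_nat("10.0.0.1", "10.0.0.0/24", "192.168.0.0/24"))  # 192.168.0.1
```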
+ > [!NOTE]
> Site-to-site NAT is not supported with Site-to-site VPN connections where policy-based traffic selectors are used.
You can configure and view NAT rules on your VPN gateway settings at any time.
1. On the **Edit NAT Rule** page, you can **Add/Edit/Delete** a NAT rule using the following values: * **Name:** A unique name for your NAT rule.
- * **Type:** Static. Static one-to-one NAT establishes a one-to-one relationship between an internal address and an external address.
+ * **Type:** Static or Dynamic. Static one-to-one NAT establishes a one-to-one relationship between an internal address and an external address while Dynamic NAT assigns an IP and port based on availability.
+ * **IP Configuration ID:** A NAT rule must be configured to a specific VPN Gateway instance. This is applicable to Dynamic NAT only. Static NAT rules are automatically applied to both VPN Gateway instances.
* **Mode:** IngressSnat or EgressSnat. * IngressSnat mode (also known as Ingress Source NAT) is applicable to traffic entering the Azure hub's Site-to-site VPN gateway. * EgressSnat mode (also known as Egress Source NAT) is applicable to traffic leaving the Azure hub's Site-to-site VPN gateway.
You can configure and view NAT rules on your VPN gateway settings at any time.
* **Link Connection:** Connection resource that virtually connects a VPN site to the Azure Virtual WAN Hub's Site-to-site VPN gateway. > [!NOTE]
-> If you want the Site-to-site VPN Gateway to advertise translated (**ExternalMapping**) address prefixes via BGP, click the **Enable BGP Translation** button, due to which on-premises will automatically learn the post-NAT range of Egress Rules and Azure (Virtual WAN Hub, connected Virtual Networks, VPN and ExpressRoute branches) will automatically learn the post-NAT range of Ingress rules.
+> If you want the Site-to-site VPN Gateway to advertise translated (**ExternalMapping**) address prefixes via BGP, click the **Enable BGP Translation** button. On-premises will then automatically learn the post-NAT range of Egress Rules, and Azure (Virtual WAN Hub, connected Virtual Networks, VPN and ExpressRoute branches) will automatically learn the post-NAT range of Ingress Rules. The new post-NAT ranges are shown in the Effective Routes table in a Virtual Hub.
> Please note that the **Enable BGP Translation** setting is applied to all NAT rules on the Virtual WAN Hub Site-to-site VPN Gateway.

## <a name="examples"></a>Example configurations
The following diagram shows the projected end result:
* The Site-to-site VPN Gateway automatically translates the on-premises BGP peer IP address **if** the on-premises BGP peer IP address is contained within the **Internal Mapping** of an **Ingress NAT Rule**. As a result, the VPN site's **Link Connection BGP address** must reflect the NAT-translated address (part of the External Mapping). For instance, if the on-premises BGP IP address is 10.30.0.133 and there is an **Ingress NAT Rule** that translates 10.30.0.0/24 to 127.30.0.0/24, the VPN Site's **Link Connection BGP Address** must be configured to be the translated address (127.30.0.133).-
+* In Dynamic NAT, the on-premises BGP peer IP cannot be part of the pre-NAT address range (**Internal Mapping**) because IP and port translations are not fixed. If you need to translate the on-premises BGP peering IP, create a separate **Static NAT Rule** that translates the BGP Peering IP address only.
+
+ For instance, if the on-premises network has an address space of 10.0.0.0/24 with an on-premises BGP peer IP of 10.0.0.1, and there is an **Ingress Dynamic NAT Rule** to translate 10.0.0.0/24 to 192.168.0.0/32, a separate **Ingress Static NAT Rule** translating 10.0.0.1/32 to 192.168.0.2/32 is required, and the corresponding VPN site's **Link Connection BGP address** must be updated to the NAT-translated address (part of the External Mapping).
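A quick way to check whether the BGP peer needs its own static rule is to test whether the peer IP falls inside the dynamic rule's **InternalMapping**, sketched here with Python's standard `ipaddress` module:

```python
import ipaddress

def needs_static_bgp_rule(bgp_peer_ip: str, dynamic_internal_mapping: str) -> bool:
    """True when the peer IP sits inside the dynamic rule's pre-NAT range,
    which means a separate static NAT rule for the peer IP is required."""
    peer = ipaddress.ip_address(bgp_peer_ip)
    return peer in ipaddress.ip_network(dynamic_internal_mapping)

print(needs_static_bgp_rule("10.0.0.1", "10.0.0.0/24"))  # True: needs a static rule
print(needs_static_bgp_rule("10.1.0.1", "10.0.0.0/24"))  # False
```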
### Ingress SNAT (VPN site with statically configured routes) **Ingress SNAT rules** are applied on packets that are entering Azure through the Virtual WAN Site-to-site VPN gateway. In this scenario, you want to connect two Site-to-site VPN branches to Azure. VPN Site 1 connects via Link A, and VPN Site 2 connects via Link B. Each site has the same address space 10.30.0.0/24.
-In this example, we will NAT VPN site 1 to 127.30.0.0.0/24. However, because the VPN Site is not connected to the Site-to-site VPN Gateway via BGP, the configuration steps are slightly different than the BGP-enabled example.
+In this example, we will NAT VPN site 1 to 172.30.0.0/24. However, because the VPN Site is not connected to the Site-to-site VPN Gateway via BGP, the configuration steps are slightly different than the BGP-enabled example.
:::image type="content" source="./media/nat-rules-vpn-gateway/diagram-static.png" alt-text="Screenshot showing diagram configurations for VPN sites that use static routing.":::
In the preceding examples, an on-premises device wants to reach a resource in a
This section shows checks to verify that your configuration is set up properly.
+#### Validate Dynamic NAT Rules
+
+ * Use Dynamic NAT Rules if the target address pool is smaller than the original address pool.
+ * As IP/port combinations are not fixed in a Dynamic NAT Rule, the on-premises BGP Peer IP cannot be part of the pre-NAT (**InternalMapping**) address range. Create a specific Static NAT Rule that translates the BGP Peering IP address only.
+
+ For example:
+
+ * **On-Premises Address Range:** 10.0.0.0/24
+ * **On-premises BGP IP:** 10.0.0.1
+ * **Ingress Dynamic NAT Rule:** 10.0.0.0/24 -> 192.168.0.1/32
+ * **Ingress Static NAT Rule:** 10.0.0.1 -> 192.168.0.2
+
+ #### Validate DefaultRouteTable, rules, and routes

Branches in Virtual WAN associate to the **DefaultRouteTable**, implying all branch connections learn routes that are populated within the DefaultRouteTable. You will see the NAT rule with the translated prefix in the effective routes of the DefaultRouteTable.
From the previous example:
* **Next Hop Type:** VPN_S2S_Gateway
* **Next Hop:** VPN_S2S_Gateway Resource
+
#### Validate address prefixes

This example applies to resources in Virtual Networks that are associated to the DefaultRouteTable.
virtual-wan Virtual Wan Global Transit Network Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-wan/virtual-wan-global-transit-network-architecture.md
The Remote User-to-branch path lets remote users who are using a point-to-site c
The VNet-to-VNet transit enables VNets to connect to each other in order to interconnect multi-tier applications that are implemented across multiple VNets. Optionally, you can connect VNets to each other through VNet Peering and this may be suitable for some scenarios where transit via the VWAN hub is not necessary.
-## <a name="DefaultRoute"></a>Force Tunneling and Default Route in Azure Virtual WAN
+## <a name="DefaultRoute"></a>Force tunneling and default route
Force Tunneling can be enabled by configuring the enable default route on a VPN, ExpressRoute, or Virtual Network connection in Virtual WAN.
vpn-gateway Tutorial Site To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/vpn-gateway/tutorial-site-to-site-portal.md
Title: 'Tutorial - Connect on-premises network to virtual network: Azure portal'
+ Title: 'Tutorial - Connect an on-premises network and a virtual network: S2S VPN: Azure portal'
description: In this tutorial, learn how to create a site-to-site VPN Gateway IPsec connection from your on-premises network to a VNet.
Last updated 02/02/2022
-# Tutorial: Create a Site-to-Site connection in the Azure portal
+# Tutorial: Create a site-to-site VPN connection in the Azure portal
-Azure VPN gateways provide cross-premises connectivity between customer premises and Azure. This tutorial shows you how to use the Azure portal to create a Site-to-Site VPN gateway connection from your on-premises network to the VNet. You can also create this configuration using [Azure PowerShell](vpn-gateway-create-site-to-site-rm-powershell.md) or [Azure CLI](vpn-gateway-howto-site-to-site-resource-manager-cli.md).
+Azure VPN gateways provide cross-premises connectivity between customer premises and Azure. This tutorial shows you how to use the Azure portal to create a site-to-site VPN gateway connection from your on-premises network to the VNet. You can also create this configuration using [Azure PowerShell](vpn-gateway-create-site-to-site-rm-powershell.md) or [Azure CLI](vpn-gateway-howto-site-to-site-resource-manager-cli.md).
In this tutorial, you learn how to:
In this tutorial, you learn how to:
## <a name="CreatVNet"></a>Create a virtual network
-Create a virtual network (VNet) using the following values:
+In this section, you'll create a virtual network (VNet) using the following values:
* **Resource group:** TestRG1 * **Name:** VNet1
In this step, you create the virtual network gateway for your VNet. Creating a g
### Create the gateway
-Create a VPN gateway using the following values:
+Create a virtual network gateway (VPN gateway) using the following values:
* **Name:** VNet1GW * **Region:** East US
Create a VPN gateway using the following values:
* **Configure BGP:** Disabled [!INCLUDE [Create a vpn gateway](../../includes/vpn-gateway-add-gw-portal-include.md)]+ [!INCLUDE [Configure PIP settings](../../includes/vpn-gateway-add-gw-pip-portal-include.md)] You can see the deployment status on the Overview page for your gateway. A gateway can take up to 45 minutes to fully create and deploy. After the gateway is created, you can view the IP address that has been assigned to it by looking at the virtual network in the portal. The gateway appears as a connected device.
You can see the deployment status on the Overview page for your gateway. A gatew
You can view the gateway public IP address on the **Overview** page for your gateway. To see additional information about the public IP address object, select the name/IP address link next to **Public IP address**.
Create a local network gateway using the following values:
## <a name="VPNDevice"></a>Configure your VPN device
-Site-to-Site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following values:
+Site-to-site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following values:
-* A shared key. This is the same shared key that you specify when creating your Site-to-Site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use.
+* A shared key. This is the same shared key that you specify when creating your site-to-site VPN connection. In our examples, we use a basic shared key. We recommend that you generate a more complex key to use.
* The Public IP address of your virtual network gateway. You can view the public IP address by using the Azure portal, PowerShell, or CLI. To find the Public IP address of your VPN gateway using the Azure portal, navigate to **Virtual network gateways**, then select the name of your gateway. [!INCLUDE [Configure a VPN device](../../includes/vpn-gateway-configure-vpn-device-include.md)]
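The examples above use a basic shared key and recommend generating a more complex one. One way is Python's `secrets` module; the 32-character length and alphanumeric alphabet here are arbitrary illustration choices, so check the current VPN Gateway limits on pre-shared key length and allowed characters.

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits  # illustrative character set

def generate_psk(length: int = 32) -> str:
    """Generate a random pre-shared key using a cryptographically secure RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(len(generate_psk()))  # 32
```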
-## <a name="CreateConnection"></a>Create a VPN connection
+## <a name="CreateConnection"></a>Create VPN connections
-Create the Site-to-Site VPN connection between your virtual network gateway and your on-premises VPN device.
+Create a site-to-site VPN connection between your virtual network gateway and your on-premises VPN device.
Create a connection using the following values:
Create a connection using the following values:
[!INCLUDE [Add a site-to-site connection](../../includes/vpn-gateway-add-site-to-site-connection-portal-include.md)]
+### <a name="addconnect"></a>To add another connection
+
+You can connect to multiple on-premises sites from the same VPN gateway. If you want to configure multiple connections, the address spaces can't overlap between any of the connections.
+
+1. To add an additional connection, navigate to the VPN gateway, then select **Connections** to open the Connections page.
+1. Select **+Add** to add your connection. Adjust the connection type to reflect either VNet-to-VNet (if connecting to another VNet gateway), or Site-to-site.
+1. If you're connecting using Site-to-site and you haven't already created a local network gateway for the site you want to connect to, you can create a new one.
+1. Specify the shared key that you want to use, then select **OK** to create the connection.
+ ## <a name="VerifyConnection"></a>Verify the VPN connection [!INCLUDE [Verify the connection](../../includes/vpn-gateway-verify-connection-portal-include.md)]
Create a connection using the following values:
## Optional steps
-### <a name="addconnect"></a>Add additional connections to the gateway
-
-You can add additional connections if none of the address spaces overlap between connections.
-
-1. To add an additional connection, navigate to the VPN gateway, then select **Connections** to open the Connections page.
-1. Select **+Add** to add your connection. Adjust the connection type to reflect either VNet-to-VNet (if connecting to another VNet gateway), or Site-to-site.
-1. If you're connecting using Site-to-site and you haven't already created a local network gateway for the site you want to connect to, you can create a new one.
-1. Specify the shared key that you want to use, then select **OK** to create the connection.
- ### <a name="resize"></a>Resize a gateway SKU There are specific rules regarding resizing vs. changing a gateway SKU. In this section, we'll resize the SKU. For more information, see [Gateway settings - resizing and changing SKUs](vpn-gateway-about-vpn-gateway-settings.md#resizechange).
There are specific rules regarding resizing vs. changing a gateway SKU. In this
### <a name="reset"></a>Reset a gateway
-Resetting an Azure VPN gateway is helpful if you lose cross-premises VPN connectivity on one or more Site-to-Site VPN tunnels. In this situation, your on-premises VPN devices are all working correctly, but aren't able to establish IPsec tunnels with the Azure VPN gateways.
+Resetting an Azure VPN gateway is helpful if you lose cross-premises VPN connectivity on one or more site-to-site VPN tunnels. In this situation, your on-premises VPN devices are all working correctly, but aren't able to establish IPsec tunnels with the Azure VPN gateways.
[!INCLUDE [reset a gateway](../../includes/vpn-gateway-reset-gw-portal-include.md)] ### <a name="additional"></a>Additional configuration considerations
-S2S configurations can be customized in a variety ways. For more information, see the following articles:
+S2S configurations can be customized in a variety of ways. For more information, see the following articles:
* For information about BGP, see the [BGP Overview](vpn-gateway-bgp-overview.md) and [How to configure BGP](vpn-gateway-bgp-resource-manager-ps.md). * For information about forced tunneling, see [About forced tunneling](vpn-gateway-forced-tunneling-rm.md).