Updates from: 02/18/2022 02:07:05
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
Previously updated : 01/14/2022 Last updated : 02/17/2022
You can also call a REST API technical profile with your business logic, overwri
| Attribute | Required | Description |
| --------- | -------- | ----------- |
-| setting.operatingMode <sup>1</sup>| No | For a sign-in page, this property controls the behavior of the username field, such as input validation and error messages. Expected values: `Username` or `Email`. |
+| setting.operatingMode <sup>1</sup>| No | For a sign-in page, this property controls the behavior of the username field, such as input validation and error messages. Expected values: `Username` or `Email`. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#operating-mode) of this metadata. |
| AllowGenerationOfClaimsWithNullValues | No | Allows generating a claim with a null value. For example, when a user doesn't select a checkbox. |
| ContentDefinitionReferenceId | Yes | The identifier of the [content definition](contentdefinitions.md) associated with this technical profile. |
| EnforceEmailVerification | No | For sign-up or profile edit, enforces email verification. Possible values: `true` (default), or `false`. |
-| setting.retryLimit | No | Controls the number of times a user can try to provide the data that is checked against a validation technical profile. For example, a user tries to sign-up with an account that already exists and keeps trying until the limit reached.
+| setting.retryLimit | No | Controls the number of times a user can try to provide the data that is checked against a validation technical profile. For example, a user tries to sign up with an account that already exists and keeps trying until the limit is reached. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#retry-limit) of this metadata.|
| SignUpTarget <sup>1</sup>| No | The sign-up target exchange identifier. When the user clicks the sign-up button, Azure AD B2C executes the specified exchange identifier. |
-| setting.showCancelButton | No | Displays the cancel button. Possible values: `true` (default), or `false` |
-| setting.showContinueButton | No | Displays the continue button. Possible values: `true` (default), or `false` |
-| setting.showSignupLink <sup>2</sup>| No | Displays the sign-up button. Possible values: `true` (default), or `false` |
-| setting.forgotPasswordLinkLocation <sup>2</sup>| No| Displays the forgot password link. Possible values: `AfterLabel` (default) displays the link directly after the label or after the password input field when there is no label, `AfterInput` displays the link after the password input field, `AfterButtons` displays the link on the bottom of the form after the buttons, or `None` removes the forgot password link.|
-| setting.enableRememberMe <sup>2</sup>| No| Displays the [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) checkbox. Possible values: `true` , or `false` (default). |
-| setting.inputVerificationDelayTimeInMilliseconds <sup>3</sup>| No| Improves user experience, by waiting for the user to stop typing, and then validate the value. Default value 2000 milliseconds. |
+| setting.showCancelButton | No | Displays the cancel button. Possible values: `true` (default), or `false`. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#show-the-cancel-button) of this metadata.|
+| setting.showContinueButton | No | Displays the continue button. Possible values: `true` (default), or `false`. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#show-the-continue-button) of this metadata. |
+| setting.showSignupLink <sup>2</sup>| No | Displays the sign-up button. Possible values: `true` (default), or `false`. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#show-sign-up-link) of this metadata. |
+| setting.forgotPasswordLinkLocation <sup>2</sup>| No| Displays the forgot password link. Possible values: `AfterLabel` (default) displays the link directly after the label or after the password input field when there is no label, `AfterInput` displays the link after the password input field, `AfterButtons` displays the link on the bottom of the form after the buttons, or `None` removes the forgot password link. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#forgot-password-link-location) of this metadata.|
+| setting.enableRememberMe <sup>2</sup>| No| Displays the [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) checkbox. Possible values: `true`, or `false` (default). Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#enable-remember-me-kmsi) of this metadata. |
+| setting.inputVerificationDelayTimeInMilliseconds <sup>3</sup>| No| Improves user experience, by waiting for the user to stop typing, and then validate the value. Default value 2000 milliseconds. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#input-verification-delay-time-in-milliseconds) of this metadata. |
| IncludeClaimResolvingInClaimsHandling | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
| setting.forgotPasswordLinkOverride <sup>4</sup> | No | A password reset claims exchange to be executed. For more information, see [Self-service password reset](add-password-reset-policy.md). |

Notes:
1. Available for content definition [DataUri](contentdefinitions.md#datauri) type of `unifiedssp`, or `unifiedssd`.
1. Available for content definition [DataUri](contentdefinitions.md#datauri) type of `unifiedssp`, or `unifiedssd`. [Page layout version](page-layout.md) 1.1.0 and above.
1. Available for [page layout version](page-layout.md) 1.2.0 and above.
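In a custom policy, these settings are applied as `<Item>` metadata elements on the self-asserted technical profile. A minimal sketch (the technical profile `Id` and the chosen values are illustrative, not taken from this article):

```xml
<TechnicalProfile Id="SelfAsserted-LocalAccountSignin-Email">
  <DisplayName>Local Account Sign-in</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <!-- Hypothetical values shown for illustration -->
    <Item Key="setting.operatingMode">Email</Item>
    <Item Key="setting.retryLimit">3</Item>
    <Item Key="setting.showSignupLink">false</Item>
    <Item Key="setting.enableRememberMe">true</Item>
  </Metadata>
</TechnicalProfile>
```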
active-directory Mobile App Quickstart Portal Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md
+
+ Title: "Quickstart: Add sign in with Microsoft to an Android app | Azure"
+
+description: In this quickstart, learn how Android applications can call an API that requires access tokens issued by the Microsoft identity platform.
+ Last updated : 02/15/2022
+#Customer intent: As an application developer, I want to learn how Android native apps can call protected APIs that require login and access tokens using the Microsoft identity platform.
++
+# Quickstart: Sign in users and call the Microsoft Graph API from an Android app
++
+In this quickstart, you download and run a code sample that demonstrates how an Android application can sign in users and get an access token to call the Microsoft Graph API.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+Applications must be represented by an app object in Azure Active Directory so that the Microsoft identity platform can provide tokens to your application.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Android Studio
+* Android 16+
+
+### Step 1: Configure your application in the Azure portal
+For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+> [!div id="makechanges" class="nextstepaction" class="configure-app-button"]
+> [Make these changes for me]()
+
+> [!div id="appconfigured" class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
+
+### Step 2: Download the project
+
+Run the project using Android Studio.
+> [!div class="nextstepaction"]
+> [Download the code sample](https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip)
++
+### Step 3: Your app is configured and ready to run
+
+We've configured your project with the values of your app's properties, and it's ready to run.
+The sample app starts on the **Single Account Mode** screen. A default scope, **user.read**, is provided, which is used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is also provided by default. You can change both of these if you wish.
+
+![MSAL sample app showing single and multiple account usage](./media/quickstart-v2-android/quickstart-sample-app.png)
+
+Use the app menu to change between single and multiple account modes.
+
+In single account mode, sign in using a work or school account:
+
+1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API at the bottom of the screen.
+2. Once signed in, select **Get graph data silently** to make a call to the Microsoft Graph API without prompting the user for credentials again. You'll see the output from the call to the Microsoft Graph API at the bottom of the screen.
+
+In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
+
+> [!div class="sxs-lookup"]
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+
+## How the sample works
+![Screenshot of the sample app](media/quickstart-v2-android/android-intro.svg)
++
+The code is organized into fragments that show how to write a single and multiple accounts MSAL app. The code files are organized as follows:
+
+| File | Demonstrates |
+|||
+| MainActivity | Manages the UI |
+| MSGraphRequestWrapper | Calls the Microsoft Graph API using the token provided by MSAL |
+| MultipleAccountModeFragment | Initializes a multi-account application, loads a user account, and gets a token to call the Microsoft Graph API |
+| SingleAccountModeFragment | Initializes a single-account application, loads a user account, and gets a token to call the Microsoft Graph API |
+| res/auth_config_multiple_account.json | The multiple account configuration file |
+| res/auth_config_single_account.json | The single account configuration file |
+| Gradle Scripts/build.gradle (Module:app) | The MSAL library dependencies are added here |
+
+We'll now look at these files in more detail and call out the MSAL-specific code in each.
+
+### Adding MSAL to the app
+
+MSAL ([com.microsoft.identity.client](https://javadoc.io/doc/com.microsoft.identity.client/msal)) is the library used to sign in users and request tokens used to access an API protected by the Microsoft identity platform. Gradle 3.0+ installs the library when you add the following to **Gradle Scripts** > **build.gradle (Module: app)** under **Dependencies**:
+
+```java
+dependencies {
+ ...
+ implementation 'com.microsoft.identity.client:msal:2.+'
+ ...
+}
+```
+
+This instructs Gradle to download and build MSAL from Maven Central.
+
+You must also add references to Maven repositories to the **allprojects** > **repositories** portion of the **build.gradle (Module: app)** like so:
+
+```java
+allprojects {
+ repositories {
+ mavenCentral()
+ google()
+ mavenLocal()
+ maven {
+ url 'https://pkgs.dev.azure.com/MicrosoftDeviceSDK/DuoSDK-Public/_packaging/Duo-SDK-Feed/maven/v1'
+ }
+ maven {
+ name "vsts-maven-adal-android"
+ url "https://identitydivision.pkgs.visualstudio.com/_packaging/AndroidADAL/maven/v1"
+ credentials {
+ username System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") : project.findProperty("vstsUsername")
+ password System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") : project.findProperty("vstsMavenAccessToken")
+ }
+ }
+ jcenter()
+ }
+}
+```
+
+### MSAL imports
+
+The imports that are relevant to the MSAL library are `com.microsoft.identity.client.*`. For example, you'll see `import com.microsoft.identity.client.PublicClientApplication;` which is the namespace for the `PublicClientApplication` class, which represents your public client application.
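+
+For reference, a minimal set of such imports (illustrative; the exact list depends on which MSAL classes a given file uses) looks like this:
+
+```java
+// Core MSAL classes used throughout the sample's fragments
+import com.microsoft.identity.client.AuthenticationCallback;
+import com.microsoft.identity.client.IAccount;
+import com.microsoft.identity.client.IAuthenticationResult;
+import com.microsoft.identity.client.IPublicClientApplication;
+import com.microsoft.identity.client.ISingleAccountPublicClientApplication;
+import com.microsoft.identity.client.PublicClientApplication;
+import com.microsoft.identity.client.exception.MsalException;
+```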
+
+### SingleAccountModeFragment.java
+
+This file demonstrates how to create a single account MSAL app and call a Microsoft Graph API.
+
+Single account apps are only used by a single user. For example, you might just have one account that you sign into your mapping app with.
+
+#### Single account MSAL initialization
+
+In `SingleAccountModeFragment.java`, in `onCreateView()`, a single account `PublicClientApplication` is created using the config information stored in the `auth_config_single_account.json` file. This is how you initialize the MSAL library for use in a single-account MSAL app:
+
+```java
+...
+// Creates a PublicClientApplication object with res/raw/auth_config_single_account.json
+PublicClientApplication.createSingleAccountPublicClientApplication(getContext(),
+ R.raw.auth_config_single_account,
+ new IPublicClientApplication.ISingleAccountApplicationCreatedListener() {
+ @Override
+ public void onCreated(ISingleAccountPublicClientApplication application) {
+ /**
+ * This test app assumes that the app is only going to support one account.
+ * This requires "account_mode" : "SINGLE" in the config json file.
+ **/
+ mSingleAccountApp = application;
+ loadAccount();
+ }
+
+ @Override
+ public void onError(MsalException exception) {
+ displayError(exception);
+ }
+ });
+```
+
+#### Sign in a user
+
+In `SingleAccountModeFragment.java`, the code to sign in a user is in `initializeUI()`, in the `signInButton` click handler.
+
+Call `signIn()` before trying to acquire tokens. `signIn()` behaves as though `acquireToken()` is called, resulting in an interactive prompt for the user to sign in.
+
+Signing in a user is an asynchronous operation. A callback is passed that calls the Microsoft Graph API and updates the UI once the user signs in:
+
+```java
+mSingleAccountApp.signIn(getActivity(), null, getScopes(), getAuthInteractiveCallback());
+```
+
+#### Sign out a user
+
+In `SingleAccountModeFragment.java`, the code to sign out a user is in `initializeUI()`, in the `signOutButton` click handler. Signing a user out is an asynchronous operation. Signing the user out also clears the token cache for that account. A callback is created to update the UI once the user account is signed out:
+
+```java
+mSingleAccountApp.signOut(new ISingleAccountPublicClientApplication.SignOutCallback() {
+ @Override
+ public void onSignOut() {
+ updateUI(null);
+ performOperationOnSignOut();
+ }
+
+ @Override
+ public void onError(@NonNull MsalException exception) {
+ displayError(exception);
+ }
+});
+```
+
+#### Get a token interactively or silently
+
+To present the fewest number of prompts to the user, you'll typically get a token silently. Then, if there's an error, attempt to get the token interactively. The first time the app calls `signIn()`, it effectively acts as a call to `acquireToken()`, which will prompt the user for credentials.
+
+Some situations when the user may be prompted to select their account, enter their credentials, or consent to the permissions your app has requested are:
+
+* The first time the user signs in to the application
+* If a user resets their password, they'll need to enter their credentials
+* If consent is revoked
+* If your app explicitly requires consent
+* When your application is requesting access to a resource for the first time
+* When MFA or other Conditional Access policies are required
+
+The code to get a token interactively, that is, with UI that involves the user, is in `SingleAccountModeFragment.java`, in `initializeUI()`, in the `callGraphApiInteractiveButton` click handler:
+
+```java
+/**
+ * If acquireTokenSilent() returns an error that requires an interaction (MsalUiRequiredException),
+ * invoke acquireToken() to have the user resolve the interrupt interactively.
+ *
+ * Some example scenarios are
+ * - password change
+ * - the resource you're acquiring a token for has a stricter set of requirements than your Single Sign-On refresh token.
+ * - you're introducing a new scope that the user hasn't yet consented to.
+ **/
+mSingleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+```
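+
+These calls pass `getAuthInteractiveCallback()`. A minimal sketch of such a callback (the helper names `callGraphAPI`, `updateUI`, and `displayError` come from the sample; the body here is illustrative rather than verbatim):
+
+```java
+private AuthenticationCallback getAuthInteractiveCallback() {
+    return new AuthenticationCallback() {
+        @Override
+        public void onSuccess(IAuthenticationResult authenticationResult) {
+            // A token was acquired: call Microsoft Graph and refresh the UI.
+            callGraphAPI(authenticationResult);
+            updateUI(authenticationResult.getAccount());
+        }
+
+        @Override
+        public void onError(MsalException exception) {
+            displayError(exception);
+        }
+
+        @Override
+        public void onCancel() {
+            // The user dismissed the sign-in prompt; nothing to do.
+        }
+    };
+}
+```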
+
+If the user has already signed in, `acquireTokenSilentAsync()` allows apps to request tokens silently as shown in `initializeUI()`, in the `callGraphApiSilentButton` click handler:
+
+```java
+/**
+ * Once you've signed the user in,
+ * you can perform acquireTokenSilent to obtain resources without interrupting the user.
+ **/
+ mSingleAccountApp.acquireTokenSilentAsync(getScopes(), AUTHORITY, getAuthSilentCallback());
+```
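+
+The silent call passes `getAuthSilentCallback()`. A sketch of the silent-then-interactive fallback described above (illustrative, not verbatim from the sample): when MSAL reports `MsalUiRequiredException`, fall back to `acquireToken()`:
+
+```java
+private SilentAuthenticationCallback getAuthSilentCallback() {
+    return new SilentAuthenticationCallback() {
+        @Override
+        public void onSuccess(IAuthenticationResult authenticationResult) {
+            callGraphAPI(authenticationResult);
+        }
+
+        @Override
+        public void onError(MsalException exception) {
+            if (exception instanceof MsalUiRequiredException) {
+                // Interaction is required (for example, after a password change):
+                // fall back to the interactive flow.
+                mSingleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+            } else {
+                displayError(exception);
+            }
+        }
+    };
+}
+```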
+
+#### Load an account
+
+The code to load an account is in `SingleAccountModeFragment.java` in `loadAccount()`. Loading the user's account is an asynchronous operation, so callbacks to handle when the account loads, changes, or an error occurs are passed to MSAL. The following code also handles `onAccountChanged()`, which occurs when an account is removed, the user changes to another account, and so on.
+
+```java
+private void loadAccount() {
+ ...
+
+ mSingleAccountApp.getCurrentAccountAsync(new ISingleAccountPublicClientApplication.CurrentAccountCallback() {
+ @Override
+ public void onAccountLoaded(@Nullable IAccount activeAccount) {
+ // You can use the account data to update your UI or your app database.
+ updateUI(activeAccount);
+ }
+
+ @Override
+ public void onAccountChanged(@Nullable IAccount priorAccount, @Nullable IAccount currentAccount) {
+ if (currentAccount == null) {
+ // Perform a cleanup task as the signed-in account changed.
+ performOperationOnSignOut();
+ }
+ }
+
+ @Override
+ public void onError(@NonNull MsalException exception) {
+ displayError(exception);
+ }
+ });
+}
+```
+
+#### Call Microsoft Graph
+
+When a user is signed in, the call to Microsoft Graph is made via an HTTP request by `callGraphAPI()`, which is defined in `SingleAccountModeFragment.java`. This function is a wrapper that simplifies the sample by handling tasks such as getting the access token from the `authenticationResult`, packaging the call to `MSGraphRequestWrapper`, and displaying the results of the call.
+
+```java
+private void callGraphAPI(final IAuthenticationResult authenticationResult) {
+ MSGraphRequestWrapper.callGraphAPIUsingVolley(
+ getContext(),
+ graphResourceTextView.getText().toString(),
+ authenticationResult.getAccessToken(),
+ new Response.Listener<JSONObject>() {
+ @Override
+ public void onResponse(JSONObject response) {
+ /* Successfully called graph, process data and send to UI */
+ ...
+ }
+ },
+ new Response.ErrorListener() {
+ @Override
+ public void onErrorResponse(VolleyError error) {
+ ...
+ }
+ });
+}
+```
+
+### auth_config_single_account.json
+
+This is the configuration file for an MSAL app that uses a single account.
+
+See [Understand the Android MSAL configuration file](msal-configuration.md) for an explanation of these fields.
+
+Note the presence of `"account_mode" : "SINGLE"`, which configures this app to use a single account.
+
+`"client_id"` is preconfigured to use an app object registration that Microsoft maintains.
+`"redirect_uri"` is preconfigured to use the signing key provided with the code sample.
+
+```json
+{
+ "client_id" : "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
+ "authorization_user_agent" : "DEFAULT",
+ "redirect_uri" : "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
+ "account_mode" : "SINGLE",
+ "broker_redirect_uri_registered": true,
+ "authorities" : [
+ {
+ "type": "AAD",
+ "audience": {
+ "type": "AzureADandPersonalMicrosoftAccount",
+ "tenant_id": "common"
+ }
+ }
+ ]
+}
+```
+
+### MultipleAccountModeFragment.java
+
+This file demonstrates how to create a multiple account MSAL app and call a Microsoft Graph API.
+
+An example of a multiple account app is a mail app that allows you to work with multiple user accounts such as a work account and a personal account.
+
+#### Multiple account MSAL initialization
+
+In the `MultipleAccountModeFragment.java` file, in `onCreateView()`, a multiple account app object (`IMultipleAccountPublicClientApplication`) is created using the config information stored in the `auth_config_multiple_account.json` file:
+
+```java
+// Creates a PublicClientApplication object with res/raw/auth_config_multiple_account.json
+PublicClientApplication.createMultipleAccountPublicClientApplication(getContext(),
+ R.raw.auth_config_multiple_account,
+ new IPublicClientApplication.IMultipleAccountApplicationCreatedListener() {
+ @Override
+ public void onCreated(IMultipleAccountPublicClientApplication application) {
+ mMultipleAccountApp = application;
+ loadAccounts();
+ }
+
+ @Override
+ public void onError(MsalException exception) {
+ ...
+ }
+ });
+```
+
+The created `MultipleAccountPublicClientApplication` object is stored in a class member variable so that it can be used to interact with the MSAL library to acquire tokens and load and remove the user account.
+
+#### Load an account
+
+Multiple account apps usually call `getAccounts()` to select the account to use for MSAL operations. The code to load accounts is in the `MultipleAccountModeFragment.java` file, in `loadAccounts()`. Loading the user's accounts is an asynchronous operation, so a callback handles the cases where the accounts load, change, or an error occurs.
+
+```java
+/**
+ * Load currently signed-in accounts, if there's any.
+ **/
+private void loadAccounts() {
+ if (mMultipleAccountApp == null) {
+ return;
+ }
+
+ mMultipleAccountApp.getAccounts(new IPublicClientApplication.LoadAccountsCallback() {
+ @Override
+ public void onTaskCompleted(final List<IAccount> result) {
+ // You can use the account data to update your UI or your app database.
+ accountList = result;
+ updateUI(accountList);
+ }
+
+ @Override
+ public void onError(MsalException exception) {
+ displayError(exception);
+ }
+ });
+}
+```
+
+#### Get a token interactively or silently
+
+Some situations when the user may be prompted to select their account, enter their credentials, or consent to the permissions your app has requested are:
+
+* The first time users sign in to the application
+* If a user resets their password, they'll need to enter their credentials
+* If consent is revoked
+* If your app explicitly requires consent
+* When your application is requesting access to a resource for the first time
+* When MFA or other Conditional Access policies are required
+
+Multiple account apps should typically acquire tokens interactively, that is with UI that involves the user, with a call to `acquireToken()`. The code to get a token interactively is in the `MultipleAccountModeFragment.java` file in `initializeUI()`, in the `callGraphApiInteractiveButton` click handler:
+
+```java
+/**
+ * Acquire token interactively. It will also create an account object for the silent call as a result (to be obtained by getAccount()).
+ *
+ * If acquireTokenSilent() returns an error that requires an interaction,
+ * invoke acquireToken() to have the user resolve the interrupt interactively.
+ *
+ * Some example scenarios are
+ * - password change
+ * - the resource you're acquiring a token for has a stricter set of requirements than your SSO refresh token.
+ * - you're introducing a new scope that the user hasn't yet consented to.
+ **/
+mMultipleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+```
+
+Apps shouldn't require the user to sign in every time they request a token. If the user has already signed in, `acquireTokenSilentAsync()` allows apps to request tokens without prompting the user, as shown in the `MultipleAccountModeFragment.java` file, in `initializeUI()`, in the `callGraphApiSilentButton` click handler:
+
+```java
+/**
+ * Performs acquireToken without interrupting the user.
+ *
+ * This requires an account object of the account you're obtaining a token for.
+ * (can be obtained via getAccount()).
+ */
+mMultipleAccountApp.acquireTokenSilentAsync(getScopes(),
+ accountList.get(accountListSpinner.getSelectedItemPosition()),
+ AUTHORITY,
+ getAuthSilentCallback());
+```
+
+#### Remove an account
+
+The code to remove an account, and any cached tokens for the account, is in the `MultipleAccountModeFragment.java` file in `initializeUI()` in the handler for the remove account button. Before you can remove an account, you need an account object, which you obtain from MSAL methods like `getAccounts()` and `acquireToken()`. Because removing an account is an asynchronous operation, the `onRemoved` callback is supplied to update the UI.
+
+```java
+/**
+ * Removes the selected account and cached tokens from this app (or device, if the device is in shared mode).
+ **/
+mMultipleAccountApp.removeAccount(accountList.get(accountListSpinner.getSelectedItemPosition()),
+ new IMultipleAccountPublicClientApplication.RemoveAccountCallback() {
+ @Override
+ public void onRemoved() {
+ ...
+ /* Reload account asynchronously to get the up-to-date list. */
+ loadAccounts();
+ }
+
+ @Override
+ public void onError(@NonNull MsalException exception) {
+ displayError(exception);
+ }
+ });
+```
+
+### auth_config_multiple_account.json
+
+This is the configuration file for an MSAL app that uses multiple accounts.
+
+See [Understand the Android MSAL configuration file](msal-configuration.md) for an explanation of the various fields.
+
+Unlike the [auth_config_single_account.json](#auth_config_single_accountjson) configuration file, this config file has `"account_mode" : "MULTIPLE"` instead of `"account_mode" : "SINGLE"` because this is a multiple account app.
+
+`"client_id"` is preconfigured to use an app object registration that Microsoft maintains.
+`"redirect_uri"` is preconfigured to use the signing key provided with the code sample.
+
+```json
+{
+ "client_id" : "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
+ "authorization_user_agent" : "DEFAULT",
+ "redirect_uri" : "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
+ "account_mode" : "MULTIPLE",
+ "broker_redirect_uri_registered": true,
+ "authorities" : [
+ {
+ "type": "AAD",
+ "audience": {
+ "type": "AzureADandPersonalMicrosoftAccount",
+ "tenant_id": "common"
+ }
+ }
+ ]
+}
+```
++
+## Next steps
+
+Move on to the Android tutorial in which you build an Android app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Sign in users and call the Microsoft Graph from an Android application](tutorial-v2-android.md)
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
+
+ Title: "Quickstart: Add sign in with Microsoft to an iOS or macOS app | Azure"
+
+description: In this quickstart, learn how an iOS or macOS app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
+ Last updated : 02/15/2022
+#Customer intent: As an application developer, I want to learn how to sign in users and call Microsoft Graph from my iOS or macOS application.
++
+# Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app
+
+In this quickstart, you download and run a code sample that demonstrates how a native iOS or macOS application can sign in users and get an access token to call the Microsoft Graph API.
+
+The quickstart applies to both iOS and macOS apps. Some steps are needed only for iOS apps and will be indicated as such.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Xcode 10+
+* iOS 10+
+* macOS 10.12+
+
+## How the sample works
+
+![Shows how the sample app generated by this quickstart works](media/quickstart-v2-ios/ios-intro.svg)
+
+#### Step 1: Configure your application
+For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+> [!div id="makechanges" class="nextstepaction" class="configure-app-button"]
+> [Make this change for me]()
+
+> [!div id="appconfigured" class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes
+
+#### Step 2: Download the sample project
+> [!div class="nextstepaction"]
+> [Download the code sample for iOS]()
+
+> [!div class="nextstepaction"]
+> [Download the code sample for macOS]()
+
+#### Step 3: Install dependencies
+
+1. Extract the zip file.
+2. In a terminal window, navigate to the folder with the downloaded code sample and run `pod install` to install the latest MSAL library.
+
+#### Step 4: Your app is configured and ready to run
+We've configured your project with the values of your app's properties, and it's ready to run.
+> [!NOTE]
+> `Enter_the_Supported_Account_Info_Here`
+
+1. If you're building an app for [Azure AD national clouds](/graph/deployments#app-registration-and-token-service-root-endpoints), replace the lines starting with `let kGraphEndpoint` and `let kAuthority` with the correct endpoints. For global access, use the default values:
+
+ ```swift
+ let kGraphEndpoint = "https://graph.microsoft.com/"
+ let kAuthority = "https://login.microsoftonline.com/common"
+ ```
+
+1. Other endpoints are documented [here](/graph/deployments#app-registration-and-token-service-root-endpoints). For example, to run the quickstart with Azure AD Germany, use the following:
+
+ ```swift
+ let kGraphEndpoint = "https://graph.microsoft.de/"
+ let kAuthority = "https://login.microsoftonline.de/common"
+ ```
+
+3. Open the project settings. In the **Identity** section, enter the **Bundle Identifier** that you entered into the portal.
+4. Right-click **Info.plist** and select **Open As** > **Source Code**.
+5. Under the dict root node, replace `Enter_the_Bundle_Id_Here` with the ***Bundle Id*** that you used in the portal. Notice the `msauth.` prefix in the string.
+
+ ```xml
+ <key>CFBundleURLTypes</key>
+ <array>
+ <dict>
+ <key>CFBundleURLSchemes</key>
+ <array>
+ <string>msauth.Enter_the_Bundle_Id_Here</string>
+ </array>
+ </dict>
+ </array>
+ ```
+
+6. Build and run the app!
+
+## More Information
+
+Read these sections to learn more about this quickstart.
+
+### Get MSAL
+
+MSAL ([MSAL.framework](https://github.com/AzureAD/microsoft-authentication-library-for-objc)) is the library used to sign in users and request tokens used to access an API protected by the Microsoft identity platform. You can add MSAL to your application using the following process:
+
+```
+$ vi Podfile
+```
+
+Add the following to this podfile (with your project's target):
+
+```
+use_frameworks!
+
+target 'MSALiOS' do
+ pod 'MSAL'
+end
+```
+
+Run the CocoaPods installation command:
+
+`pod install`
+
+### Initialize MSAL
+
+You can add the reference for MSAL by adding the following code:
+
+```swift
+import MSAL
+```
+
+Then, initialize MSAL using the following code:
+
+```swift
+let authority = try MSALAADAuthority(url: URL(string: kAuthority)!)
+
+let msalConfiguration = MSALPublicClientApplicationConfig(clientId: kClientID, redirectUri: nil, authority: authority)
+self.applicationContext = try MSALPublicClientApplication(configuration: msalConfiguration)
+```
+
+> |Where: | Description |
+> |||
+> | `clientId` | The Application ID from the application registered in *portal.azure.com* |
+> | `authority` | The Microsoft identity platform authority. In most cases, this will be `https://login.microsoftonline.com/common` |
+> | `redirectUri` | The redirect URI of the application. You can pass `nil` to use the default value, or your custom redirect URI. |
+
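+The initialization snippet above references configuration constants such as `kClientID`, `kAuthority`, and `kScopes`. As a hypothetical illustration (the exact values come from your own app registration in the portal), they might be defined like this:
+
+```swift
+// Hypothetical configuration values; replace them with the details of
+// your own app registration in the Azure portal.
+let kClientID = "Enter_the_Application_Id_Here"              // Application (client) ID
+let kGraphEndpoint = "https://graph.microsoft.com/"          // Microsoft Graph endpoint
+let kAuthority = "https://login.microsoftonline.com/common"  // Microsoft identity platform authority
+let kScopes: [String] = ["user.read"]                        // Scopes requested at sign-in
+```
+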
+### For iOS only, additional app requirements
+
+Your app must also have the following in your `AppDelegate`. This lets the MSAL SDK handle the token response from the authentication broker app when you authenticate.
+
+```swift
+func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey : Any] = [:]) -> Bool {
+
+ return MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: options[UIApplication.OpenURLOptionsKey.sourceApplication] as? String)
+}
+```
+
+> [!NOTE]
+> On iOS 13+, if you adopt `UISceneDelegate` instead of `UIApplicationDelegate`, place this code into the `scene:openURLContexts:` callback instead (see [Apple's documentation](https://developer.apple.com/documentation/uikit/uiscenedelegate/3238059-scene?language=objc)).
+> If you support both `UISceneDelegate` and `UIApplicationDelegate` for compatibility with older iOS versions, the MSAL callback needs to be placed in both.
+
+```swift
+func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
+
+ guard let urlContext = URLContexts.first else {
+ return
+ }
+
+ let url = urlContext.url
+ let sourceApp = urlContext.options.sourceApplication
+
+ MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: sourceApp)
+}
+```
+
+Finally, your app must have an `LSApplicationQueriesSchemes` entry in your ***Info.plist*** alongside the `CFBundleURLTypes`. The sample comes with this included.
+
+ ```xml
+ <key>LSApplicationQueriesSchemes</key>
+ <array>
+ <string>msauthv2</string>
+ <string>msauthv3</string>
+ </array>
+ ```
+
+### Sign in users & request tokens
+
+MSAL has two methods used to acquire tokens: `acquireToken` and `acquireTokenSilent`.
+
+#### acquireToken: Get a token interactively
+
+Some situations require users to interact with the Microsoft identity platform. In these cases, the end user may be required to select their account, enter their credentials, or consent to your app's permissions. For example,
+
+* The first time users sign in to the application
+* If a user resets their password, they'll need to enter their credentials
+* When your application is requesting access to a resource for the first time
+* When MFA or other Conditional Access policies are required
+
+```swift
+let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
+self.applicationContext!.acquireToken(with: parameters) { (result, error) in /* Add your handling logic */}
+```
+
+> |Where:| Description |
+> |||
+> | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph, or `[ "<Application ID URL>/scope" ]` for custom web APIs, for example `api://<Application ID>/access_as_user`) |
+
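+The comment `/* Add your handling logic */` in the snippet above is where your app consumes the result. A minimal sketch of such a completion handler, assuming the `parameters` value shown earlier and that you simply log the outcome, might look like:
+
+```swift
+self.applicationContext!.acquireToken(with: parameters) { (result, error) in
+    if let error = error {
+        // Interactive sign-in failed (for example, the user canceled).
+        print("Could not acquire token: \(error)")
+        return
+    }
+    guard let result = result else {
+        print("Could not acquire token: no result returned")
+        return
+    }
+    // The access token can now be attached as a Bearer header when
+    // calling Microsoft Graph or your own web API.
+    print("Access token is \(result.accessToken)")
+    print("Signed-in account: \(result.account.username ?? "unknown")")
+}
+```
+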
+#### acquireTokenSilent: Get an access token silently
+
+Apps shouldn't require their users to sign in every time they request a token. If the user has already signed in, this method allows apps to request tokens silently.
+
+```swift
+self.applicationContext!.getCurrentAccount(with: nil) { (currentAccount, previousAccount, error) in
+
+ guard let account = currentAccount else {
+ return
+ }
+
+ let silentParams = MSALSilentTokenParameters(scopes: self.kScopes, account: account)
+ self.applicationContext!.acquireTokenSilent(with: silentParams) { (result, error) in /* Add your handling logic */}
+}
+```
+
+> |Where: | Description |
+> |||
+> | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph, or `[ "<Application ID URL>/scope" ]` for custom web APIs, for example `api://<Application ID>/access_as_user`) |
+> | `account` | The account a token is being requested for. This quickstart is about a single-account application. If you want to build a multi-account app, you'll need to define logic to identify which account to use for token requests, by using `accountsFromDeviceForParameters:completionBlock:` and passing the correct `accountIdentifier` |
+
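+A common pattern, sketched below under the assumption that your view controller holds the `applicationContext` and `webViewParamaters` shown earlier and that `account` comes from `getCurrentAccount`, is to try `acquireTokenSilent` first and fall back to interactive sign-in when MSAL reports that user interaction is required:
+
+```swift
+let silentParams = MSALSilentTokenParameters(scopes: self.kScopes, account: account)
+self.applicationContext!.acquireTokenSilent(with: silentParams) { (result, error) in
+    if let error = error as NSError? {
+        // MSAL signals expired or revoked sessions with .interactionRequired;
+        // recover by falling back to the interactive flow.
+        if error.domain == MSALErrorDomain,
+           error.code == MSALError.interactionRequired.rawValue {
+            let interactiveParams = MSALInteractiveTokenParameters(
+                scopes: self.kScopes, webviewParameters: self.webViewParamaters!)
+            self.applicationContext!.acquireToken(with: interactiveParams) { (result, error) in
+                /* Add your handling logic */
+            }
+        } else {
+            print("Could not acquire token silently: \(error)")
+        }
+        return
+    }
+    // Token was served from the cache or refreshed without user interaction.
+    print("Access token is \(result!.accessToken)")
+}
+```
+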
+## Next steps
+
+Move on to the step-by-step tutorial in which you build an iOS or macOS app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md)
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 01/31/2022 Last updated : 02/16/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on January 31st, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on February 16th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/> | Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| COMMON AREA PHONE | MCOCAP | 295a8eb0-f78d-45c7-8b5b-1eed5ed02dff | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | | Common Area Phone for GCC | MCOCAP_GOV | b1511558-69bd-4e1b-8270-59ca96dba0f3 | MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | Microsoft 365 Phone System for Government (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Skype for Business Online (Plan 2) for Government (a31ef4a2-f787-435e-8335-e47eb0cafc94) | | Common Data Service Database Capacity | CDS_DB_CAPACITY | e612d426-6bc3-4181-9658-91aa906b0ac0 | CDS_DB_CAPACITY (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Database Capacity (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Common Data Service Database Capacity for Government | CDS_DB_CAPACITY_GOV | eddf428b-da0e-4115-accf-b29eb0b83965 | CDS_DB_CAPACITY_GOV (1ddffef6-4f69-455e-89c7-d5d72105f915)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8) | Common Data Service for Apps Database Capacity for Government (1ddffef6-4f69-455e-89c7-d5d72105f915)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)|
| Common Data Service Log Capacity | CDS_LOG_CAPACITY | 448b063f-9cc6-42fc-a0e6-40e08724a395 | CDS_LOG_CAPACITY (dc48f5c5-e87d-43d6-b884-7ac4a59e7ee9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Log Capacity (dc48f5c5-e87d-43d6-b884-7ac4a59e7ee9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | COMMUNICATIONS CREDITS | MCOPSTNC | 47794cd0-f0e5-45c5-9033-2eb6b5fc84e0 | MCOPSTNC (505e180f-f7e0-4b65-91d4-00d670bbd18c) | COMMUNICATIONS CREDITS (505e180f-f7e0-4b65-91d4-00d670bbd18c) | | Dynamics 365 - Additional Database Storage (Qualified Offer) | CRMSTORAGE | 328dc228-00bc-48c6-8b09-1fbc8bc3435d | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRMSTORAGE (77866113-0f3e-4e6e-9666-b1e25c6f99b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Storage Add-On (77866113-0f3e-4e6e-9666-b1e25c6f99b0) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| EXCHANGE ONLINE ESSENTIALS (ExO P1 BASED) | EXCHANGEESSENTIALS | 7fc0182e-d107-4556-8329-7caaa511197b | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c) | EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)| | EXCHANGE ONLINE ESSENTIALS | EXCHANGE_S_ESSENTIALS | e8f81a67-bd96-4074-b108-cf193eb9433b | EXCHANGE_S_ESSENTIALS (1126bef5-da20-4f07-b45e-ad25d2581aa8)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c) | EXCHANGE ESSENTIALS (1126bef5-da20-4f07-b45e-ad25d2581aa8)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c) | | EXCHANGE ONLINE KIOSK | EXCHANGEDESKLESS | 80b2d799-d2ba-4d2a-8842-fb0d0f3a4b82 | EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113) | EXCHANGE ONLINE KIOSK (4a82b400-a79f-41a4-b4e2-e94f5787b113) |
+| Exchange Online (Plan 1) for GCC | EXCHANGESTANDARD_GOV | f37d5ebf-4bf1-4aa2-8fa3-50c51059e983 | EXCHANGE_S_STANDARD_GOV (e9b4930a-925f-45e2-ac2a-3f7788ca6fdd)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117) | Exchange Online (Plan 1) for Government (e9b4930a-925f-45e2-ac2a-3f7788ca6fdd)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117) |
| EXCHANGE ONLINE POP | EXCHANGETELCO | cb0a98a8-11bc-494c-83d9-c1b1ac65327e | EXCHANGE_B_STANDARD (90927877-dcff-4af6-b346-2332c0b15bb7) | EXCHANGE ONLINE POP (90927877-dcff-4af6-b346-2332c0b15bb7) |
+| Exchange Online Protection | EOP_ENTERPRISE | 45a2423b-e884-448d-a831-d9e139c52d2f | EOP_ENTERPRISE (326e2b78-9d27-42c9-8509-46c827743a17) | Exchange Online Protection (326e2b78-9d27-42c9-8509-46c827743a17) |
| INTUNE | INTUNE_A | 061f9ace-7d42-4136-88ac-31dc755f143f | INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Microsoft Dynamics AX7 User Trial | AX7_USER_TRIAL | fcecd1f9-a91e-488d-a918-a96cdb6ce2b0 | ERP_TRIAL_INSTANCE (e2f705fd-2468-4090-8c58-fad6e6b1e724)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Operations Trial Environment (e2f705fd-2468-4090-8c58-fad6e6b1e724)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Microsoft Azure Multi-Factor Authentication | MFA_STANDALONE | cb2020b1-d8f6-41c0-9acd-8ff3d6d7831b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT 365 APPS FOR BUSINESS | O365_BUSINESS | cdd28e44-67e3-425e-be4c-737fab2899d3 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | MICROSOFT 365 APPS FOR BUSINESS | SMB_BUSINESS | b214fe43-f5a3-4703-beeb-fa97188220fc | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | MICROSOFT 365 APPS FOR ENTERPRISE | OFFICESUBSCRIPTION | c2273bd0-dff7-4215-9ef5-2c7bcfb06425 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY 
(a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
+| Microsoft 365 Apps for Faculty | OFFICESUBSCRIPTION_FACULTY | 12b8c807-2e20-48fc-b453-542b6ee9d171 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OneDrive for Business (Plan 1) (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91) |
| MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOC | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) | | MICROSOFT 365 BUSINESS BASIC | O365_BUSINESS_ESSENTIALS | 3b555118-da6a-4418-894f-7df1e2096870 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 
(57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | MICROSOFT 365 BUSINESS BASIC | SMB_BUSINESS_ESSENTIALS | dab7782a-93b1-4074-8bb1-0e61318bea0b | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | | MICROSOFT 365 BUSINESS STANDARD | O365_BUSINESS_PREMIUM | f245ecc8-75af-4f8e-b61f-27d8114de5f3 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD 
(9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)| To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | MICROSOFT 365 BUSINESS STANDARD - PREPAID LEGACY | SMB_BUSINESS_PREMIUM | ac5cef5d-921b-4f97-9ef3-c99076e5470f | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless 
(8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | | MICROSOFT 365 BUSINESS PREMIUM | SPB | cbdc14ab-d96c-4c30-b9f4-6ada7cdc1d46 | AAD_SMB 
(de377cbc-0019-4ec2-b77c-3f223947e102)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINBIZ (8e229017-d77b-43d5-9305-903395523b99)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>SKYPE FOR BUSINESS 
ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINDOWS 10 BUSINESS (8e229017-d77b-43d5-9305-903395523b99)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Microsoft 365 Business Voice | BUSINESS_VOICE_MED2 | a6051f20-9cbc-47d2-930d-419183bf6cf1 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft 365 Business Voice (US) | BUSINESS_VOICE_MED2_TELCO | 08d7bce8-6e16-490e-89db-1d508e5e9609 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | | Microsoft 365 Business Voice (without calling plan) | BUSINESS_VOICE_DIRECTROUTING | d52db95a-5ecb-46b6-beb0-190ab5cda4a8 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | | Microsoft 365 Business Voice (without Calling Plan) for US | BUSINESS_VOICE_DIRECTROUTING_MED | 8330dae3-d349-44f7-9cad-1b23c64baabe | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOV | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) | | Microsoft 365 E5 Suite features | M365_E5_SUITE_COMPONENTS | 99cc8282-2f74-4954-83b7-c6a9a1999067 | Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) | Information Protection and Governance Analytics ΓÇô Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-based classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) | | Microsoft 365 F1 | M365_F1_COMM | 50f60901-3181-4b75-8a2c-4c8e4c1d5a72 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 
(57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/> RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Microsoft 365 F3 GCC | M365_F1_GOV | 2a914830-d700-444a-b73c-e3f31980d833 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_F1_GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>CDS_O365_F1_GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>EXCHANGE_S_DESKLESS_GOV (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>FORMS_GOV_F1 (bfd4133a-bbf3-4212-972b-60412137c428)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K_GOV (d65648f1-9504-46e4-8611-2658763f28b8)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708- 6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_S1_GOV (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>FLOW_O365_S1_GOV (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SHAREPOINTDESKLESS_GOV (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>MCOIMP_GOV (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service - O365 F1 GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>Common Data Service for Teams_F1 GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>Exchange Online (Kiosk) for Government (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>Forms for Government (Plan F1) (bfd4133a-bbf3-4212-972b-60412137c428)<br/>Microsoft Azure Multi-Factor 
Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (F1) (d65648f1-9504-46e4-8611-2658763f28b8)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 F3 for Government (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>Power Automate for Office 365 F3 for Government (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SharePoint KioskG (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>Skype for Business Online (Plan 1) for Government (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
| MICROSOFT 365 G3 GCC | M365_G3_GOV | e823ca47-49c4-46b3-b38d-ca11d5abe3d2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>CONTENT_EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>CONTENTEXPLORER_STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>AZURE RIGHTS MANAGEMENT PREMIUM FOR GOVERNMENT (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) 
(24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
| MICROSOFT 365 PHONE SYSTEM | MCOEV | e43b5b99-8dfb-405f-9987-dc307f34bcbd | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| MICROSOFT 365 PHONE SYSTEM FOR DOD | MCOEV_DOD | d01d9287-694b-44f3-bcc5-ada78c8d953e | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Defender for Endpoint Server | MDATP_Server | 509e8ab6-0274-4cda-bcbd-bd164fd562c4 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) |
| MICROSOFT DYNAMICS CRM ONLINE BASIC | CRMPLAN2 | 906af65a-2970-46d5-9b58-4e9aa50f0657 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>CRMPLAN2 (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS CRM ONLINE BASIC (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| Microsoft Defender for Identity | ATA | 98defdf7-f6c1-44f5-a1f6-943b6764e7a5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_FOR_AATP (61d18b02-6889-479f-8f36-56e6e0fe5792) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>SecOps Investigation for MDI (61d18b02-6889-479f-8f36-56e6e0fe5792) |
+| Microsoft Defender for Office 365 (Plan 1) GCC | ATP_ENTERPRISE_GOV | d0d1ca43-b81a-4f51-81e5-a5b1ad7bb005 | ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516) | Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516) |
| Microsoft Defender for Office 365 (Plan 2) GCC | THREAT_INTELLIGENCE_GOV | 56a59ffb-9df1-421b-9e61-8b568583474d | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>Microsoft Defender for Office 365 (Plan 2) for Government (900018f1-0cdb-4ecb-94d4-90281760fdc6) |
| MICROSOFT DYNAMICS CRM ONLINE | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| MS IMAGINE ACADEMY | IT_ACADEMY_AD | ba9a34de-4489-469d-879c-0f0f145321cd | IT_ACADEMY_AD (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | MS IMAGINE ACADEMY (d736def0-1fde-43f0-a5be-e3f8b2de6e41) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Teams Trial | MS_TEAMS_IW | 74fbf1bb-47c6-4796-9623-77dc7371723b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Teams (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Sway 
(a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) |
+| Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) |
| Multi-Geo Capabilities in Office 365 | OFFICE365_MULTIGEO | 84951599-62b7-46f3-9c9d-30551b2ad607 | EXCHANGEONLINE_MULTIGEO (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SHAREPOINTONLINE_MULTIGEO (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>TEAMSMULTIGEO (41eda15d-6b52-453b-906f-bc4a5b25a26b) | Exchange Online Multi-Geo (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SharePoint Multi-Geo (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>Teams Multi-Geo (41eda15d-6b52-453b-906f-bc4a5b25a26b) |
-| Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
+| Nonprofit Portal | NONPROFIT_PORTAL | aa2695c9-8d59-4800-9dc8-12e01f1735af | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>NONPROFIT_PORTAL (7dbc2d88-20e2-4eb6-b065-4510b38d6eb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Nonprofit Portal (7dbc2d88-20e2-4eb6-b065-4510b38d6eb2)|
+| Office 365 A1 for faculty | STANDARDWOFFPACK_FACULTY | 94763226-9b3c-4e75-a931-5c89701abe66 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) 
(9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Office 365 A1 Plus for faculty | STANDARDWOFFPACK_IW_FACULTY | 78e66a63-337a-4a9a-8959-41c6654dfb56 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) 
(9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Office 365 A1 for students | STANDARDWOFFPACK_STUDENT | 314c4481-f395-4525-be8b-2ec4bb1e9d91 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers 
(c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/> Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Office 365 A1 Plus for students | STANDARDWOFFPACK_IW_STUDENT | e82ae690-a2d5-4d76-8d30-7c6e01e6022e | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/> DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers 
(c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| Office 365 A3 for faculty | ENTERPRISEPACKPLUS_FACULTY | e578b273-6db4-4691-bba0-8d691f4da603 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/> YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU 
(1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) 
(c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 A3 for students | ENTERPRISEPACKPLUS_STUDENT | 98b6e773-24d4-4c0d-a968-6e787a1f8204 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 
(c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for 
Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 A5 for faculty| ENTERPRISEPREMIUM_FACULTY | a4585165-0533-458a-97e3-c400570268c4 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 
(6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System 
(4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU 
(63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Office 365 E5 | ENTERPRISEPREMIUM | c7df2760-2c81-4ef7-b578-5b5392b571df | DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT 
(65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard 
(2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams 
(57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) | | OFFICE 365 E5 WITHOUT AUDIO CONFERENCING | ENTERPRISEPREMIUM_NOPSTNCONF | 26d45bd9-adf1-46cd-a9e1-51e9a5524128 | ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE 
(9f431833-0334-42de-a7dc-70aa40db46db)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | OFFICE 365 CLOUD APP SECURITY (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>POWER BI PRO (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>OFFICE 365 ADVANCED EDISCOVERY (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>MICROSOFT FORMS (PLAN E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 
E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>OFFICE 365 ADVANCED THREAT PROTECTION (PLAN 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | | OFFICE 365 F3 | DESKLESSPACK | 4b585984-651b-448a-9e53-3b10f069cf7f | DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>CDS_O365_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>KAIZALA_O365_P1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>POWER_VIRTUAL_AGENTS_O365_F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>PROJECT_O365_F3 (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service - O365 F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>Common Data Service for Teams_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>Exchange 
Online Kiosk (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan F1) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>Microsoft Kaizala Pro Plan 1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 F3 (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 F3 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>Power Automate for Office 365 F3 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>Power Virtual Agents for Office 365 F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>Project for Office (Plan F) (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Office 365 G1 GCC | STANDARDPACK_GOV | 3f4babde-90ec-47c6-995d-d223749065d1 | DYN365_CDS_O365_P1_GCC (8eb5e9bc-783f-4425-921a-c65f45dd72c6)<br/>CDS_O365_P1_GCC (959e5dec-6522-4d44-8349-132c27c3795a)<br/>EXCHANGE_S_STANDARD_GOV (e9b4930a-925f-45e2-ac2a-3f7788ca6fdd)<br/>FORMS_GOV_E1 (f4cba850-4f34-4fd2-a341-0fddfdce1e8f)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_E1_GOV (15267263-5986-449d-ac5c-124f3b49b2d6)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_P1_GOV (c42aa49a-f357-45d5-9972-bc29df885fee)<br/>FLOW_O365_P1_GOV (ad6c8870-6356-474c-901c-64d7da8cea48)<br/>SharePoint Plan 1G (f9c43823-deb4-46a8-aa65-8b551f0c4f8a)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d) | Common Data Service - O365 P1 GCC (8eb5e9bc-783f-4425-921a-c65f45dd72c6)<br/>Common Data Service for Teams_P1 GCC (959e5dec-6522-4d44-8349-132c27c3795a)<br/>Exchange Online (Plan 1) for Government (e9b4930a-925f-45e2-ac2a-3f7788ca6fdd)<br/>Forms for Government (Plan E1) (f4cba850-4f34-4fd2-a341-0fddfdce1e8f)<br/>Insights by MyAnalytics for Government (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (E1) (15267263-5986-449d-ac5c-124f3b49b2d6)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government 
(8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 for Government (c42aa49a-f357-45d5-9972-bc29df885fee)<br/>Power Automate for Office 365 for Government (ad6c8870-6356-474c-901c-64d7da8cea48)<br/>SharePoint Plan 1G (f9c43823-deb4-46a8-aa65-8b551f0c4f8a)<br/>Skype for Business Online (Plan 2) for Government (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d) |
| OFFICE 365 G3 GCC | ENTERPRISEPACK_GOV | 535a3a29-c5f0-42fe-8215-d3b9e1f38c4a | RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS 
FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) | | Office 365 G5 GCC | ENTERPRISEPREMIUM_GOV | 8900a2c0-edba-4079-bdf3-b276e293b6a8 | RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P3_GCC (a7d3fb37-b6df-4085-b509-50810d991a39)<br/>CDS_O365_P3_GCC (bce5e5ca-c2fd-4d53-8ee2-58dfffed4c10)<br/>LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E5 (843da3a8-d2cc-4e7a-9e90-dc46019f964c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1)<br/>MTP 
(bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>EXCHANGE_ANALYTICS_GOV (208120d1-9adb-4daf-8c22-816bd5d237e7)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>STREAM_O365_E5_GOV (92c2089d-9a53-49fe-b1a6-9e6bdf959547)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS_GOV (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P3_GOV (0eacfc38-458a-40d3-9eab-9671258f1a3e)<br/>FLOW_O365_P3_GOV (8055d84a-c172-42eb-b997-6c2ae4628246)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P3_GCC (a7d3fb37-b6df-4085-b509-50810d991a39)<br/>CDS_O365_P3_GCC (bce5e5ca-c2fd-4d53-8ee2-58dfffed4c10)<br/>LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E5 (843da3a8-d2cc-4e7a-9e90-dc46019f964c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 
(efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>EXCHANGE_ANALYTICS_GOV (208120d1-9adb-4daf-8c22-816bd5d237e7)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>STREAM_O365_E5_GOV (92c2089d-9a53-49fe-b1a6-9e6bdf959547)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS_GOV (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P3_GOV (0eacfc38-458a-40d3-9eab-9671258f1a3e)<br/>FLOW_O365_P3_GOV (8055d84a-c172-42eb-b997-6c2ae4628246)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
+| Office 365 Advanced Compliance for GCC | EQUIVIO_ANALYTICS_GOV | 1a585bba-1ce3-416e-b1d6-9c482b52fcf6 | LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS_GOV (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Customer Lockbox for Government (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics - Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics - Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 Advanced eDiscovery for Government (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f) |
| OFFICE 365 MIDSIZE BUSINESS | MIDSIZEPACK | 04a7fb0d-32e0-4241-b4f5-3f7618cd1162 | EXCHANGE_S_STANDARD_MIDMARKET (fc52cc4b-ed7d-472d-bbe7-b081c23ecc56)<br/>MCOSTANDARD_MIDMARKET (b2669e95-76ef-4e7e-a367-002f60a39f3e)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTENTERPRISE_MIDMARKET (6b5b6a67-fc72-4a1f-a2b5-beecf05de761)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | EXCHANGE ONLINE PLAN 1(fc52cc4b-ed7d-472d-bbe7-b081c23ecc56)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR MIDSIZE(b2669e95-76ef-4e7e-a367-002f60a39f3e)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINT PLAN 1 (6b5b6a67-fc72-4a1f-a2b5-beecf05de761)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | | OFFICE 365 SMALL BUSINESS | LITEPACK | bd09678e-b83c-4d3f-aaba-3dad4abd128b | EXCHANGE_L_STANDARD (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>MCOLITE (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | EXCHANGE ONLINE (P1) (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN P1) (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | OFFICE 365 SMALL BUSINESS PREMIUM | LITEPACK_P2 | fc14ec4a-4169-49a4-a51e-2c852931814b | EXCHANGE_L_STANDARD (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>MCOLITE (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>OFFICE_PRO_PLUS_SUBSCRIPTION_SMBIZ (8ca59559-e2ca-470b-b7dd-afd8c0dee963)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | EXCHANGE ONLINE (P1) (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN P1) 
(70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>OFFICE 365 SMALL BUSINESS SUBSCRIPTION (8ca59559-e2ca-470b-b7dd-afd8c0dee963)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| PowerApps per app baseline access | POWERAPPS_PER_APP_IW | bf666882-9c9b-4b2e-aa2f-4789b0a52ba2 | CDS_PER_APP_IWTRIAL (94a669d1-84d5-4e54-8462-53b0ae2c8be5)<br/>Flow_Per_APP_IWTRIAL (dd14867e-8d31-4779-a595-304405f5ad39)<br/>POWERAPPS_PER_APP_IWTRIAL (35122886-cef5-44a3-ab36-97134eabd9ba) | CDS Per app baseline access (94a669d1-84d5-4e54-8462-53b0ae2c8be5)<br/>Flow per app baseline access (dd14867e-8d31-4779-a595-304405f5ad39)<br/>PowerApps per app baseline access (35122886-cef5-44a3-ab36-97134eabd9ba) | | Power Apps per app plan | POWERAPPS_PER_APP | a8ad7d2b-b8cf-49d6-b25a-69094a0be206 | CDS_PER_APP (9f2f00ad-21ae-4ceb-994b-d8bc7be90999)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_PER_APP (b4f657ff-d83e-4053-909d-baa2b595ec97)<br/>Flow_Per_APP (c539fa36-a64e-479a-82e1-e40ff2aa83ee) | CDS PowerApps per app plan (9f2f00ad-21ae-4ceb-994b-d8bc7be90999)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps per App Plan (b4f657ff-d83e-4053-909d-baa2b595ec97)<br/>Power Automate for Power Apps per App Plan (c539fa36-a64e-479a-82e1-e40ff2aa83ee) | | Power Apps per user plan | POWERAPPS_PER_USER | b30411f5-fea1-4a59-9ad9-3db7c7ead579 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_PER_USER (ea2cf03b-ac60-46ae-9c1d-eeaeb63cec86)<br/>Flow_PowerApps_PerUser (dc789ed8-0170-4b65-a415-eb77d5bb350a) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps per User Plan (ea2cf03b-ac60-46ae-9c1d-eeaeb63cec86)<br/>Power Automate for Power Apps per User Plan (dc789ed8-0170-4b65-a415-eb77d5bb350a) |
+| Power Apps per user plan for Government | POWERAPPS_PER_USER_GCC | 8e4c6baa-f2ff-4884-9c38-93785d0d7ba1 | CDSAICAPACITY_PERUSER (91f50f7b-2204-4803-acac-5cf5668b8b39)<br/>CDSAICAPACITY_PERUSER_NEW (74d93933-6f22-436e-9441-66d205435abb)<br/>DYN365_CDS_P2_GOV (37396c73-2203-48e6-8be1-d882dae53275)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PER_USER_GCC (8f55b472-f8bf-40a9-be30-e29919d4ddfe)<br/>Flow_PowerApps_PerUser_GCC (8e3eb3bd-bc99-4221-81b8-8b8bc882e128) | AI Builder capacity Per User add-on (91f50f7b-2204-4803-acac-5cf5668b8b39)<br/>AI Builder capacity Per User add-on (74d93933-6f22-436e-9441-66d205435abb)<br/>Common Data Service for Government (37396c73-2203-48e6-8be1-d882dae53275)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps per User Plan for Government (8f55b472-f8bf-40a9-be30-e29919d4ddfe)<br/>Power Automate for Power Apps per User Plan for GCC (8e3eb3bd-bc99-4221-81b8-8b8bc882e128) |
+| Power Apps Plan 1 for Government | POWERAPPS_P1_GOV | eca22b68-b31f-4e9c-a20c-4d40287bc5dd | DYN365_CDS_P1_GOV (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>FLOW_P1_GOV (774da41c-a8b3-47c1-8322-b9c1ab68be9f)<br/>POWERAPPS_P1_GOV (5ce719f1-169f-4021-8a64-7d24dcaec15f) | Common Data Service for Government (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Automate (Plan 1) for Government (774da41c-a8b3-47c1-8322-b9c1ab68be9f)<br/>PowerApps Plan 1 for Government (5ce719f1-169f-4021-8a64-7d24dcaec15f) |
+| Power Apps Portals login capacity add-on Tier 2 (10 unit min) for Government | POWERAPPS_PORTALS_LOGIN_T2_GCC | 26c903d5-d385-4cb1-b650-8d81a643b3c4 | CDS_POWERAPPS_PORTALS_LOGIN_GCC (0f7b9a29-7990-44ff-9d05-a76be778f410)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PORTALS_LOGIN_GCC (bea6aef1-f52d-4cce-ae09-bed96c4b1811) | Common Data Service Power Apps Portals Login Capacity for GCC (0f7b9a29-7990-44ff-9d05-a76be778f410)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps Portals Login Capacity Add-On for Government (bea6aef1-f52d-4cce-ae09-bed96c4b1811) |
+| Power Apps Portals page view capacity add-on for Government | POWERAPPS_PORTALS_PAGEVIEW_GCC | 15a64d3e-5b99-4c4b-ae8f-aa6da264bfe7 | CDS_POWERAPPS_PORTALS_PAGEVIEW_GCC (352257a9-db78-4217-a29d-8b8d4705b014)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PORTALS_PAGEVIEW_GCC (483d5646-7724-46ac-ad71-c78b7f099d8d) | CDS PowerApps Portals page view capacity add-on for GCC (352257a9-db78-4217-a29d-8b8d4705b014)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps Portals Page View Capacity Add-On for Government (483d5646-7724-46ac-ad71-c78b7f099d8d) |
| Power Automate per flow plan | FLOW_BUSINESS_PROCESS | b3a42176-0a8c-4c3f-ba4e-f2b37fe5be6b | CDS_Flow_Business_Process (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_BUSINESS_PROCESS (7e017b61-a6e0-4bdc-861a-932846591f6e) | Common data service for Flow per business process plan (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per business process plan (7e017b61-a6e0-4bdc-861a-932846591f6e) | | Power Automate per user plan | FLOW_PER_USER | 4a51bf65-409c-4a91-b845-1121b571cc9d | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_PER_USER (c5002c70-f725-4367-b409-f0eff4fee6c0) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per user plan (c5002c70-f725-4367-b409-f0eff4fee6c0) | | Power Automate per user plan dept | FLOW_PER_USER_DEPT | d80a4c5d-8f05-4b64-9926-6574b9e6aee4 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> FLOW_PER_USER (c5002c70-f725-4367-b409-f0eff4fee6c0) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per user plan (c5002c70-f725-4367-b409-f0eff4fee6c0) |
+| Power Automate per user plan for Government | FLOW_PER_USER_GCC | c8803586-c136-479a-8ff3-f5f32d23a68e | DYN365_CDS_P2_GOV (37396c73-2203-48e6-8be1-d882dae53275)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>FLOW_PER_USER_GCC (769b8bee-2779-4c5a-9456-6f4f8629fd41) | Common Data Service for Government (37396c73-2203-48e6-8be1-d882dae53275)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Automate per User Plan for Government (769b8bee-2779-4c5a-9456-6f4f8629fd41) |
| Power Automate per user with attended RPA plan | POWERAUTOMATE_ATTENDED_RPA | eda1941c-3c4f-4995-b5eb-e85a42175ab9 | CDS_ATTENDED_RPA (3da2fd4c-1bee-4b61-a17f-94c31e5cab93)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_AUTOMATE_ATTENDED_RPA (375cd0ad-c407-49fd-866a-0bff4f8a9a4d) | Common Data Service Attended RPA (3da2fd4c-1bee-4b61-a17f-94c31e5cab93)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate RPA Attended (375cd0ad-c407-49fd-866a-0bff4f8a9a4d) |
+| Power Automate Plan 1 for Government (Qualified Offer) | FLOW_P1_GOV | 2b3b0c87-36af-4d15-8124-04a691cc2546 | DYN365_CDS_P1_GOV (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>FLOW_P1_GOV (774da41c-a8b3-47c1-8322-b9c1ab68be9f) | Common Data Service for Government (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Automate (Plan 1) for Government (774da41c-a8b3-47c1-8322-b9c1ab68be9f) |
| Power Automate unattended RPA add-on | POWERAUTOMATE_UNATTENDED_RPA | 3539d28c-6e35-4a30-b3a9-cd43d5d3e0e2 |CDS_UNATTENDED_RPA (b475952f-128a-4a44-b82a-0b98a45ca7fb)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_AUTOMATE_UNATTENDED_RPA (0d373a98-a27a-426f-8993-f9a425ae99c5) | Common Data Service Unattended RPA (b475952f-128a-4a44-b82a-0b98a45ca7fb)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate Unattended RPA add-on (0d373a98-a27a-426f-8993-f9a425ae99c5) | | Power BI | POWER_BI_INDIVIDUAL_USER | e2767865-c3c9-4f09-9f99-6eee6eef861a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SQL_IS_SSIM (fc0a60aa-feee-4746-a0e3-aecfe81a38dd)<br/>BI_AZURE_P1 (2125cfd7-2110-4567-83c4-c1cd5275163d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Power BI Information Services Plan 1 (fc0a60aa-feee-4746-a0e3-aecfe81a38dd)<br/>Microsoft Power BI Reporting and Analytics Plan 1 (2125cfd7-2110-4567-83c4-c1cd5275163d) | | Power BI (free) | POWER_BI_STANDARD | a403ebcc-fae0-4ca2-8c8c-7a907fd6c235 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P0 (2049e525-b859-401b-b2a0-e0a31c4b1fe4) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI (free) (2049e525-b859-401b-b2a0-e0a31c4b1fe4) |
| Power BI Pro | POWER_BI_PRO | f8a1db68-be16-40ed-86d5-cb42ce701560 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro CE | POWER_BI_PRO_CE | 420af87e-8177-4146-a780-3786adaffbca | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro Dept | POWER_BI_PRO_DEPT | 3a6a908c-09c5-406a-8170-8ebb63c42882 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
+| Power BI Pro for GCC | POWERBI_PRO_GOV | f0612879-44ea-47fb-baf0-3d76d9235576 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)</br>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76) |
| Power Virtual Agent | VIRTUAL_AGENT_BASE | e4e55366-9635-46f4-a907-fc8c3b5ec81f | CDS_VIRTUAL_AGENT_BASE (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>FLOW_VIRTUAL_AGENT_BASE (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>VIRTUAL_AGENT_BASE (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | Common Data Service for Virtual Agent Base (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>Power Automate for Virtual Agent (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>Virtual Agent Base (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | | Power Virtual Agents Viral Trial | CCIBOTS_PRIVPREV_VIRAL | 606b54a9-78d8-4298-ad8b-df6ef4481c80 | DYN365_CDS_CCI_BOTS (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>FLOW_CCI_BOTS (5d798708-6473-48ad-9776-3acc301c40af) | Common Data Service for CCI Bots (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Flow for CCI Bots (5d798708-6473-48ad-9776-3acc301c40af) | | PROJECT FOR OFFICE 365 | PROJECTCLIENT | a10d5e58-74da-4312-95c8-76be4e5b75a0 | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | | Project Online Essentials | PROJECTESSENTIALS | 776df282-9fc0-4862-99e2-70e561b9909e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) 
(5dbe027f-2339-4123-9542-606e4d348a72)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
+| Project Online Essentials for GCC | PROJECTESSENTIALS_GOV | ca1a159a-f09e-42b8-bb82-cb6420f54c8e | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>PROJECT_ESSENTIALS_GOV (fdcb7064-f45c-46fa-b056-7e0e9fdf4bf3)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Project Online Essentials for Government (fdcb7064-f45c-46fa-b056-7e0e9fdf4bf3)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692) |
| PROJECT ONLINE PREMIUM | PROJECTPREMIUM | 09015f9f-377f-4538-bbb5-f75ceb09358a | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | PROJECT ONLINE PREMIUM WITHOUT PROJECT CLIENT | PROJECTONLINE_PLAN_1 | 2db84718-652c-47a7-860c-f10d8abbdae3 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | PROJECT ONLINE WITH PROJECT FOR OFFICE 365 | PROJECTONLINE_PLAN_2 | f82a60b8-1ee3-4cfb-a4fe-1c6a53c2656c | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) 
(5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
| SKYPE FOR BUSINESS PSTN DOMESTIC AND INTERNATIONAL CALLING | MCOPSTN2 | d3b4fe1f-9992-4930-8acb-ca6ec609365e | MCOPSTN2 (5a10155d-f5c1-411a-a8ec-e99aae125390) | DOMESTIC AND INTERNATIONAL CALLING PLAN (5a10155d-f5c1-411a-a8ec-e99aae125390) | | SKYPE FOR BUSINESS PSTN DOMESTIC CALLING | MCOPSTN1 | 0dab259f-bf13-4952-b7f8-7db8f131b28d | MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8) | DOMESTIC CALLING PLAN (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8) | | SKYPE FOR BUSINESS PSTN DOMESTIC CALLING (120 Minutes)| MCOPSTN5 | 54a152dc-90de-4996-93d2-bc47e670fc06 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | DOMESTIC CALLING PLAN (54a152dc-90de-4996-93d2-bc47e670fc06) |
+| Skype for Business PSTN Usage Calling Plan | MCOPSTNPP | 06b48c5f-01d9-4b18-9015-03b52040f51a | MCOPSTN3 (6b340437-d6f9-4dc5-8cc2-99163f7f83d6) | MCOPSTN3 (6b340437-d6f9-4dc5-8cc2-99163f7f83d6) |
+| Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| TELSTRA CALLING FOR O365 | MCOPSTNEAU2 | de3312e1-c7b0-46e6-a7c3-a515ff90bc86 | MCOPSTNEAU (7861360b-dc3b-4eba-a3fc-0d323a035746) | AUSTRALIA CALLING PLAN (7861360b-dc3b-4eba-a3fc-0d323a035746) | | Universal Print | UNIVERSAL_PRINT | 9f3d9c1d-25a5-4aaa-8e59-23a1e6450a67 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9) | | Visio Plan 1 | VISIO_PLAN1_DEPT | ca7f3140-d88c-455b-9a1c-7f0679e31a76 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OneDrive for business Basic (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>Visio web app (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) |
| VISIO ONLINE PLAN 2 | VISIOCLIENT | c5928f49-12ba-48f7-ada3-0d743a3601d5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE FOR BUSINESS BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO DESKTOP APP (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIO WEB APP (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | VISIO PLAN 2 FOR GCC | VISIOCLIENT_GOV | 4ae99959-6b0f-43b0-b1ce-68146001bdba | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE_BASIC_GOV (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO_CLIENT_SUBSCRIPTION_GOV (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIOONLINE_GOV (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE FOR BUSINESS BASIC FOR GOVERNMENT (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO DESKTOP APP FOR Government (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIO WEB APP FOR GOVERNMENT (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | |Viva Topics | TOPIC_EXPERIENCES | 4016f256-b063-4864-816e-d818aad600c9 | GRAPH_CONNECTORS_SEARCH_INDEX_TOPICEXP (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>CORTEX (c815c93d-0759-4bb8-b857-bc921a71be83) | Graph Connectors Search with Index (Viva Topics) (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>Viva Topics (c815c93d-0759-4bb8-b857-bc921a71be83) |
+| Windows 10/11 Enterprise E5 (Original) | WIN_ENT_E5 | 1e7e1070-8ccb-4aca-b470-d7cb538cb07e | DATAVERSE_FOR_POWERAUTOMATE_DESKTOP (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>POWERAUTOMATE_DESKTOP_FOR_WIN (2d589a15-b171-4e61-9b5f-31d15eeb2872)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Dataverse for PAD (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>PAD for Windows (2d589a15-b171-4e61-9b5f-31d15eeb2872)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/> Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) |
| Windows 10 Enterprise A3 for faculty | WIN10_ENT_A3_FAC | 8efbe2f6-106e-442f-97d4-a59aa6037e06 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | | Windows 10 Enterprise A3 for students | WIN10_ENT_A3_STU | d4ef921e-840b-4b48-9a90-ab6698bc7b31 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | | WINDOWS 10 ENTERPRISE E3 | WIN10_PRO_ENT_SUB | cb10e6cd-9da4-4992-867b-67546b1db821 | WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111) | WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111) |
| Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (Preview) | CPC_LVL_2 | 461cb62c-6db7-41aa-bf3c-ce78236cdb9e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_2 (3efff3fe-528a-4fc5-b1ba-845802cc764f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (3efff3fe-528a-4fc5-b1ba-845802cc764f) | | Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (Preview) | CPC_LVL_3 | bbb4bf6e-3e12-4343-84a1-54d160c00f40 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_256GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) | | WINDOWS STORE FOR BUSINESS | WINDOWS_STORE | 6470687e-a428-4b7a-bef2-8a291ad947c9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS STORE SERVICE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) |
-| Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) |
+| Windows Store for Business EDU Faculty | WSFB_EDU_FACULTY | c7e9d9e6-1981-4bf3-bb50-a5bdfaa06fb2 | Windows Store for Business EDU Store_faculty (aaa2cd24-5519-450f-a1a0-160750710ca1) | Windows Store for Business EDU Store_faculty (aaa2cd24-5519-450f-a1a0-160750710ca1) |
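The SKU and service plan GUIDs in the tables above can be cross-checked against what is actually subscribed in your own tenant. A minimal sketch, assuming the Microsoft Graph PowerShell SDK is installed (`Get-MgSubscribedSku` ships in the Microsoft.Graph.Identity.DirectoryManagement module) and that you hold a role allowed to read organization data:

```powershell
# Connect with a delegated scope that can read subscription data.
Connect-MgGraph -Scopes "Organization.Read.All"

# Print each subscribed SKU (string ID plus GUID) and the service
# plans it contains, mirroring the table layout in this article.
Get-MgSubscribedSku | ForEach-Object {
    "{0} ({1})" -f $_.SkuPartNumber, $_.SkuId
    $_.ServicePlans | ForEach-Object {
        "    {0} ({1})" -f $_.ServicePlanName, $_.ServicePlanId
    }
}
```

The GUIDs returned for your tenant should match the product and service plan identifiers listed above; only SKUs your organization has purchased will appear.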
## Service plans that cannot be assigned at the same time
active-directory B2b Quickstart Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-invite-powershell.md
Title: 'Quickstart: Add a guest user with PowerShell - Azure AD'
-description: In this quickstart, you learn how to use PowerShell to send an invitation to an external Azure AD B2B collaboration user.
+description: In this quickstart, you learn how to use PowerShell to send an invitation to an external Azure AD B2B collaboration user. You'll use the Microsoft Graph Identity Sign-ins and the Microsoft Graph Users PowerShell modules.
- Previously updated : 08/28/2018 Last updated : 02/16/2022
# Quickstart: Add a guest user with PowerShell
-There are many ways you can invite external partners to your apps and services with Azure Active Directory B2B collaboration. In the previous quickstart, you saw how to add guest users directly in the Azure Active Directory admin portal. You can also use PowerShell to add guest users, either one at a time or in bulk. In this quickstart, you'll use the New-AzureADMSInvitation command to add one guest user to your Azure tenant.
+There are many ways you can invite external partners to your apps and services with Azure Active Directory B2B collaboration. In the previous quickstart, you saw how to add guest users directly in the Azure Active Directory admin portal. You can also use PowerShell to add guest users, either one at a time or in bulk. In this quickstart, you'll use the New-MgInvitation command to add one guest user to your Azure tenant.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prerequisites

### PowerShell Module
-Install the [Azure AD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [Azure AD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+Install the [Microsoft Graph Identity Sign-ins module](/powershell/module/microsoft.graph.identity.signins/?view=graph-powershell-beta) (Microsoft.Graph.Identity.SignIns) and the [Microsoft Graph Users module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta) (Microsoft.Graph.Users).
### Get a test email account
You need a test email account that you can send the invitation to. The account m
Run the following command to connect to the tenant domain:

```powershell
-Connect-AzureAD -TenantDomain "<Tenant_Domain_Name>"
+Connect-MgGraph -Scopes "User.ReadWrite.All"
```
-For example, `Connect-AzureAD -TenantDomain "contoso.onmicrosoft.com"`.
When prompted, enter your credentials.

## Send an invitation
-1. To send an invitation to your test email account, run the following PowerShell command (replace **"Sanda"** and **sanda\@fabrikam.com** with your test email account name and email address):
+1. To send an invitation to your test email account, run the following PowerShell command (replace **"John Doe"** and **john\@contoso.com** with your test email account name and email address):
```powershell
- New-AzureADMSInvitation -InvitedUserDisplayName "Sanda" -InvitedUserEmailAddress sanda@fabrikam.com -InviteRedirectURL https://myapps.microsoft.com -SendInvitationMessage $true
+ New-MgInvitation -InvitedUserDisplayName "John Doe" -InvitedUserEmailAddress John@contoso.com -InviteRedirectUrl "https://myapplications.microsoft.com" -SendInvitationMessage:$true
```
-2. The command sends an invitation to the email address specified. Check the output, which should look similar to the following:
+1. The command sends an invitation to the email address specified. Check the output, which should look similar to the following example:
- ![PowerShell output showing pending user acceptance](media/quickstart-invite-powershell/powershell-azureadmsinvitation-result.png)
+ ![PowerShell output of the invitation command](media/quickstart-invite-powershell/powershell-mginvitation-result.png)
## Verify the user exists in the directory
-1. To verify that the invited user was added to Azure AD, run the following command:
+1. To verify that the invited user was added to Azure AD, run the following command (replace **john\@contoso.com** with your invited email):
```powershell
- Get-AzureADUser -Filter "UserType eq 'Guest'"
+ Get-MgUser -Filter "Mail eq 'John@contoso.com'"
```
-3. Check the output to make sure the user you invited is listed, with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example, *sanda_fabrikam.com#EXT#\@contoso.onmicrosoft.com*, where contoso.onmicrosoft.com is the organization from which you sent the invitations.
+1. Check the output to make sure the user you invited is listed, with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example, *john_contoso.com#EXT#\@fabrikam.onmicrosoft.com*, where fabrikam.onmicrosoft.com is the organization from which you sent the invitations.
- ![PowerShell output showing guest user added](media/quickstart-invite-powershell/powershell-guest-user-added.png)
+ ![PowerShell output showing guest user added](media/quickstart-invite-powershell/powershell-mginvitation-guest-user-add.png)
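As an aside, the guest UPN pattern described in the step above can be sketched in a few lines of Python (the helper name is ours and purely illustrative, not part of any module):

```python
def guest_upn(invited_email: str, tenant_domain: str) -> str:
    """Build the UPN Azure AD assigns to an invited guest:
    the '@' in the email becomes '_', then '#EXT#@<tenant domain>' is appended."""
    return invited_email.replace("@", "_") + "#EXT#@" + tenant_domain

print(guest_upn("john@contoso.com", "fabrikam.onmicrosoft.com"))
# john_contoso.com#EXT#@fabrikam.onmicrosoft.com
```

This is handy when you need to address the guest object by UPN later, for example during cleanup.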
## Clean up resources
When no longer needed, you can delete the test user account in the directory. Run the following command:

```powershell
Remove-AzureADUser -ObjectId "<UPN>"
```
-For example: `Remove-AzureADUser -ObjectId "sanda_fabrikam.com#EXT#@contoso.onmicrosoft.com"`
+For example: `Remove-AzureADUser -ObjectId "john_contoso.com#EXT#@fabrikam.onmicrosoft.com"`
## Next steps
active-directory Tutorial Bulk Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md
Title: Tutorial for bulk inviting B2B collaboration users - Azure AD
-description: In this tutorial, you learn how to use PowerShell and a CSV file to send bulk invitations to external Azure AD B2B collaboration users.
+description: In this tutorial, you learn how to use PowerShell and a CSV file to send bulk invitations to external Azure AD B2B collaboration users. You'll use the Microsoft.Graph.Users PowerShell module.
Previously updated : 03/17/2021 Last updated : 02/16/2022 - # Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
# Tutorial: Bulk invite Azure AD B2B collaboration users
-If you use Azure Active Directory (Azure AD) B2B collaboration to work with external partners, you can invite multiple guest users to your organization at the same time. In this tutorial, you learn how to use the Azure portal to send bulk invitations to external users. Specifically, you do the following:
+If you use Azure Active Directory (Azure AD) B2B collaboration to work with external partners, you can invite multiple guest users to your organization at the same time. In this tutorial, you learn how to use the Azure portal to send bulk invitations to external users. Specifically, you'll follow these steps:
> [!div class="checklist"]
> * Use **Bulk invite users** to prepare a comma-separated value (.csv) file with the user information and invitation preferences
The rows in a downloaded CSV template are as follows:
- **Version number**: The first row containing the version number must be included in the upload CSV.
- **Column headings**: The format of the column headings is &lt;*Item name*&gt; [PropertyName] &lt;*Required or blank*&gt;. For example, `Email address to invite [inviteeEmail] Required`. Some older versions of the template might have slight variations.
-- **Examples row**: We have included in the template a row of examples of values for each column. You must remove the examples row and replace it with your own entries.
+- **Examples row**: We've included in the template a row of examples of values for each column. You must remove the examples row and replace it with your own entries.
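As a rough illustration of the template layout described above, the following Python sketch writes a minimal upload file. The version-row text and the second column heading are assumptions for illustration only; always copy them from the template you actually download:

```python
import csv
import io

# Illustrative sketch only: the exact version row and column headings must
# match the downloaded template. The first column heading below is the one
# quoted in this article; the version row and second heading are assumed.
rows = [
    ["version:v1.0"],                                     # version row (assumed text)
    ["Email address to invite [inviteeEmail] Required",   # column headings
     "Redirection url [inviteRedirectUrl] Required"],
    ["lstokes@fabrikam.com", "https://myapplications.microsoft.com"],
]

buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```

Note that the examples row from the template is replaced here with a real entry, as the guidance requires.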
### Additional guidance

- The first two rows of the upload template must not be removed or modified, or the upload can't be processed.
- The required columns are listed first.
-- We don't recommend adding new columns to the template. Any additional columns you add are ignored and not processed.
+- We don't recommend adding new columns to the template. Any columns you add are ignored and not processed.
- We recommend that you download the latest version of the CSV template as often as possible.

## Prerequisites
You need two or more test email accounts that you can send the invitations to. T
7. On the **Bulk invite users** page, under **Upload your csv file**, browse to the file. When you select the file, validation of the .csv file starts.
8. When the file contents are validated, you'll see **File uploaded successfully**. If there are errors, you must fix them before you can submit the job.
9. When your file passes validation, select **Submit** to start the Azure bulk operation that adds the invitations.
-10. To view the job status, select **Click here to view the status of each operation**. Or, you can select **Bulk operation results** in the **Activity** section. For details about each line item within the the bulk operation, select the values under the **# Success**, **# Failure**, or **Total Requests** columns. If failures occurred, the reasons for failure will be listed.
+10. To view the job status, select **Click here to view the status of each operation**. Or, you can select **Bulk operation results** in the **Activity** section. For details about each line item within the bulk operation, select the values under the **# Success**, **# Failure**, or **Total Requests** columns. If failures occurred, the reasons for failure will be listed.
![Example of bulk operation results](media/tutorial-bulk-invite/bulk-operation-results.png)
Check to see that the guest users you added exist in the directory, either in the Azure portal or by using PowerShell.
### View guest users with PowerShell
+To view guest users with PowerShell, you'll need the [Microsoft.Graph.Users PowerShell Module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta). Then sign in using the `Connect-MgGraph` command with an admin account to consent to the required scopes:
+```powershell
+Connect-MgGraph -Scopes "User.Read.All"
+```
Run the following command:

```powershell
- Get-AzureADUser -Filter "UserType eq 'Guest'"
+ Get-MgUser -Filter "UserType eq 'Guest'"
```

You should see the users that you invited listed, with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example, *lstokes_fabrikam.com#EXT#\@contoso.onmicrosoft.com*, where contoso.onmicrosoft.com is the organization from which you sent the invitations.

## Clean up resources
-When no longer needed, you can delete the test user accounts in the directory in the Azure portal on the Users page by selecting the checkbox next to the guest user and then selecting **Delete**.
+When no longer needed, you can delete the test user accounts in the directory in the Azure portal on the Users page by selecting the checkbox next to the guest user and then selecting **Delete**.
Or you can run the following PowerShell command to delete a user account:

```powershell
- Remove-AzureADUser -ObjectId "<UPN>"
+ Remove-MgUser -UserId "<UPN>"
```
-For example: `Remove-AzureADUser -ObjectId "lstokes_fabrikam.com#EXT#@contoso.onmicrosoft.com"`
+For example: `Remove-MgUser -UserId "lstokes_fabrikam.com#EXT#@contoso.onmicrosoft.com"`
## Next steps
active-directory Add Users Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-users-azure-active-directory.md
Previously updated : 05/04/2021 Last updated : 02/16/2022
Add new users or delete existing users from your Azure Active Directory (Azure AD) organization. To add or delete users you must be a User administrator or Global administrator.
+
## Add a new user

You can create a new user using the Azure Active Directory portal.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For more information about how to better secure your organization by using autom
In January 2022, we've added the following 47 new applications in our App gallery with Federation support
-[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://www.healthnote.com/), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Active and Thriving - Perth Airport](../saas-apps/active-and-thriving-perth-airport-tutorial.md), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and 
ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), [DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
+[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://auth.healthnote.works/oauth), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), 
[DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
You can also find the documentation of all the applications from: https://aka.ms/AppsTutorial,
active-directory Deploy Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/deploy-access-reviews.md
The administrative role required to create, manage, or read an access review dep
| Group or application| Global administrator <p>User administrator<p>Identity Governance administrator<p>Privileged Role administrator (only does reviews for Azure AD role-assignable groups)<p>Group owner ([if enabled by an admin]( create-access-review.md#allow-group-owners-to-create-and-manage-access-reviews-of-their-groups-preview))| Global administrator<p>Global reader<p>User administrator<p>Identity Governance administrator<p>Privileged Role administrator<p>Security reader<p>Group owner ([if enabled by an admin]( create-access-review.md#allow-group-owners-to-create-and-manage-access-reviews-of-their-groups-preview)) | |Azure AD roles| Global administrator <p>Privileged Role administrator| Global administrator<p>Global reader<p>User administrator<p>Privileged Role administrator<p> <p>Security reader | | Azure resource roles| User Access Administrator (for the resource)<p>Resource owner| User Access Administrator (for the resource)<p>Resource owner<p>Reader (for the resource) |
-| Access package| Global administrator<p>User administrator<p>Identity Governance administrator| Global administrator<p>Global reader<p>User administrator<p>Identity Governance administrator<p> <p>Security reader |
+| Access package| Global administrator<p>User administrator<p>Identity Governance administrator<p>Catalog owner (for the access package)<p>Access package manager (for the access package)| Global administrator<p>Global reader<p>User administrator<p>Identity Governance administrator<p>Catalog owner (for the access package)<p>Access package manager (for the access package)<p>Security reader |
For more information, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Title: Configure group claims for applications with Azure Active Directory | Microsoft Docs
-description: Information on how to configure group claims for use with Azure AD.
+ Title: Configure group claims for applications by using Azure Active Directory | Microsoft Docs
+description: Get information on how to configure group claims for use with Azure AD.
documentationcenter: ''
-# Configure group claims for applications with Azure Active Directory
+# Configure group claims for applications by using Azure Active Directory
-Azure Active Directory can provide a users group membership information in tokens for use within applications. Two main patterns are supported:
+Azure Active Directory (Azure AD) can provide a user's group membership information in tokens for use within applications. This feature supports two main patterns:
-- Groups identified by their Azure Active Directory object identifier (OID) attribute
-- Groups identified by sAMAccountName or GroupSID attributes for Active Directory (AD) synchronized groups and users
+- Groups identified by their Azure AD object identifier (OID) attribute
+- Groups identified by the `sAMAccountName` or `GroupSID` attribute for Active Directory-synchronized groups and users
-> [!IMPORTANT]
-> There are a number of caveats to note for this functionality:
->
-> - Support for use of sAMAccountName and security identifier (SID) attributes synced from on-premises is designed to enable moving existing applications from AD FS and other identity providers. Groups managed in Azure AD do not contain the attributes necessary to emit these claims.
-> - In larger organizations the number of groups a user is a member of may exceed the limit that Azure Active Directory will add to a token. 150 groups for a SAML token, and 200 for a JWT. This can lead to unpredictable results. If your users have large numbers of group memberships, we recommend using the option to restrict the groups emitted in claims to the relevant groups for the application. If for any reason assigning groups to your applications is not possible, we also provide the option of configuring a [group filter](#group-filtering) which can also reduce the number of groups emitted in the claim.
-> - Group claims have a 5-group limit if the token is issued through the implicit flow. Tokens requested via the implicit flow will only have a "hasgroups":true claim if the user is in more than 5 groups.
-> - For new application development, or in cases where the application can be configured for it, and where nested group support isn't required, we recommend that in-app authorization is based on application roles rather than groups. This limits the amount of information that needs to go into the token, is more secure, and separates user assignment from app configuration.
+## Important caveats for this functionality
+
+- Support for use of `sAMAccountName` and security identifier (SID) attributes synced from on-premises is designed to enable moving existing applications from Active Directory Federation Services (AD FS) and other identity providers. Groups managed in Azure AD don't contain the attributes necessary to emit these claims.
+- In larger organizations, the number of groups where a user is a member might exceed the limit that Azure AD will add to a token. Those limits are 150 groups for a SAML token and 200 for a JSON Web Token (JWT). Exceeding a limit can lead to unpredictable results.
+
+ If your users have large numbers of group memberships, we recommend using the option to restrict the groups emitted in claims to the relevant groups for the application. If assigning groups to your applications is not possible, you can configure a [group filter](#group-filtering) to reduce the number of groups emitted in the claim.
+- Group claims have a five-group limit if the token is issued through the implicit flow. Tokens requested via the implicit flow will have a `"hasgroups":true` claim only if the user is in more than five groups.
+- We recommend basing in-app authorization on application roles rather than groups when:
+
+ - You're developing a new application, or an existing application can be configured for it.
+ - Support for nested groups isn't required.
+
+ Using application roles limits the amount of information that needs to go into the token, is more secure, and separates user assignment from app configuration.
## Group claims for applications migrating from AD FS and other identity providers
-Many applications configured to authenticate with AD FS rely on group membership information in the form of Windows AD group attributes. These attributes are the group sAMAccountName, which may be qualified by-domain name, or the Windows Group Security Identifier (GroupSID). When the application is federated with AD FS, AD FS uses the TokenGroups function to retrieve the group memberships for the user.
+Many applications that are configured to authenticate with AD FS rely on group membership information in the form of Windows Server Active Directory group attributes. These attributes are the group `sAMAccountName`, which might be qualified by domain name, or the Windows group security identifier (`GroupSID`). When the application is federated with AD FS, AD FS uses the `TokenGroups` function to retrieve the group memberships for the user.
-An app that has been moved from AD FS needs claims in the same format. Group and role claims may be emitted from Azure Active Directory containing the domain qualified sAMAccountName or the GroupSID synced from Active Directory rather than the group's Azure Active Directory objectID.
+An app that has been moved from AD FS needs claims in the same format. Group and role claims emitted from Azure AD might contain the domain-qualified `sAMAccountName` attribute or the `GroupSID` attribute synced from Active Directory, rather than the group's Azure AD `objectID` attribute.
The supported formats for group claims are:

-- **Azure Active Directory Group ObjectId** (Available for all groups)
-- **sAMAccountName** (Available for groups synchronized from Active Directory)
-- **NetbiosDomain\sAMAccountName** (Available for groups synchronized from Active Directory)
-- **DNSDomainName\sAMAccountName** (Available for groups synchronized from Active Directory)
-- **On Premises Group Security Identifier** (Available for groups synchronized from Active Directory)
+- **Azure AD group ObjectId**: Available for all groups.
+- **sAMAccountName**: Available for groups synchronized from Active Directory.
+- **NetbiosDomain\sAMAccountName**: Available for groups synchronized from Active Directory.
+- **DNSDomainName\sAMAccountName**: Available for groups synchronized from Active Directory.
+- **On-premises group security identifier**: Available for groups synchronized from Active Directory.
> [!NOTE]
-> sAMAccountName and On Premises Group SID attributes are only available on Group objects synced from Active Directory. They aren't available on groups created in Azure Active Directory or Office365. Applications configured in Azure Active Directory to get synced on-premises group attributes get them for synced groups only.
+> `sAMAccountName` and on-premises `GroupSID` attributes are available only on group objects synced from Active Directory. They aren't available on groups created in Azure AD or Office 365. Applications configured in Azure AD to get synced on-premises group attributes get them for synced groups only.
## Options for applications to consume group information
-Applications can call the MS Graph groups endpoint to obtain group information for the authenticated user. This call ensures that all the groups a user is a member of are available even when there are a large number of groups involved. Group enumeration is then independent of token size limitations.
+Applications can call the Microsoft Graph group's endpoint to obtain group information for the authenticated user. This call ensures that all the groups where a user is a member are available, even when a large number of groups is involved. Group enumeration is then independent of limitations on token size.
+
+However, if an existing application expects to consume group information via claims, you can configure Azure AD with various claim formats. Consider the following options:
-However, if an existing application expects to consume group information via claims, Azure Active Directory can be configured with a number of different claims formats. Consider the following options:
+- When you're using group membership for in-application authorization, it's preferable to use the group `ObjectID` attribute. The group `ObjectID` attribute is immutable and unique in Azure AD. It's available for all groups.
+- If you're using the on-premises group `sAMAccountName` attribute for authorization, use domain-qualified names. It reduces the chance of names clashing. `sAMAccountName` might be unique within an Active Directory domain, but if more than one Active Directory domain is synchronized with an Azure AD tenant, there's a possibility for more than one group to have the same name.
+- Consider using [application roles](../../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md) to provide a layer of indirection between the group membership and the application. The application then makes internal authorization decisions based on role claims in the token.
+- If the application is configured to get group attributes that are synced from Active Directory and a group doesn't contain those attributes, it won't be included in the claims.
+- Group claims in tokens include nested groups, except when you're using the option to restrict the group claims to groups that are assigned to the application.
-- When using group membership for in-application authorization purposes it is preferable to use the Group ObjectID. The Group ObjectID is immutable and unique in Azure Active Directory and available for all groups.
-- If using the on-premises group sAMAccountName for authorization, use domain qualified names; there's less chance of names clashing. sAMAccountName may be unique within an Active Directory domain, but if more than one Active Directory domain is synchronized with an Azure Active Directory tenant there is a possibility for more than one group to have the same name.
-- Consider using [Application Roles](../../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md) to provide a layer of indirection between the group membership and the application. The application then makes internal authorization decisions based on role claims in the token.
-- If the application is configured to get group attributes that are synced from Active Directory and a Group doesn't contain those attributes, it won't be included in the claims.
-- Group claims in tokens include nested groups except when using the option to restrict the group claims to groups assigned to the application. If a user is a member of GroupB and GroupB is a member of GroupA, then the group claims for the user will contain both GroupA and GroupB. When an organization's users have large numbers of group memberships, the number of groups listed in the token can grow the token size. Azure Active Directory limits the number of groups it will emit in a token to 150 for SAML assertions, and 200 for JWT. If a user is a member of a larger number of groups, the groups are omitted and a link to the Graph endpoint to obtain group information is included instead.
+ If a user is a member of GroupB, and GroupB is a member of GroupA, then the group claims for the user will contain both GroupA and GroupB. When an organization's users have large numbers of group memberships, the number of groups listed in the token can grow the token size. Azure AD limits the number of groups that it will emit in a token to 150 for SAML assertions and 200 for JWT. If a user is a member of a larger number of groups, the groups are omitted. A link to the Microsoft Graph endpoint to obtain group information is included instead.
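The emit-or-overage rule described above can be sketched as follows. This is an illustration only, not Azure AD's implementation; the helper is ours, and the `_claim_names` shape loosely follows the JWT groups-overage pattern:

```python
# Per-token-type group limits described in this article.
LIMITS = {"saml": 150, "jwt": 200}

def groups_claim(group_ids, token_type):
    """Return a groups claim, or an overage indicator when the limit is exceeded.

    Illustrative sketch: when the user's group count exceeds the limit for the
    token type, the group list is omitted and the token instead signals that
    the caller must query Microsoft Graph for the full membership.
    """
    limit = LIMITS[token_type]
    if len(group_ids) > limit:
        # Overage case: no inline group list; a Graph lookup is required.
        return {"_claim_names": {"groups": "src1"}}
    return {"groups": list(group_ids)}

print(groups_claim(["g1", "g2"], "jwt"))
# {'groups': ['g1', 'g2']}
```

Keeping group counts under these limits (for example, by emitting only groups assigned to the application) avoids the overage path entirely.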
-## Prerequisites for using Group attributes synchronized from Active Directory
+## Prerequisites for using group attributes synchronized from Active Directory
-Group membership claims can be emitted in tokens for any group if you use the ObjectId format. To use group claims in formats other than the group ObjectId, the groups must be synchronized from Active Directory using Azure AD Connect.
+Group membership claims can be emitted in tokens for any group if you use the `ObjectId` format. To use group claims in formats other than group `ObjectId`, the groups must be synchronized from Active Directory via Azure AD Connect.
-There are two steps to configuring Azure Active Directory to emit group names for Active Directory Groups.
+To configure Azure AD to emit group names for Active Directory groups:
1. **Synchronize group names from Active Directory**
-Before Azure Active Directory can emit the group names or on premises group SID in group or role claims, the required attributes need to be synchronized from Active Directory. You must be running Azure AD Connect version 1.2.70 or later. Earlier versions of Azure AD Connect than 1.2.70 will synchronize the group objects from Active Directory, but will not include the required group name attributes. Upgrade to the current version.
-2. **Configure the application registration in Azure Active Directory to include group claims in tokens**
-Group claims can be configured in the Enterprise Applications section of the portal, or using the Application Manifest in the Application Registrations section. To configure group claims in the application manifest see "Configuring the Azure Active Directory Application Registration for group attributes" below.
+ Before Azure AD can emit the group names or on-premises group SID in group or role claims, you need to synchronize the required attributes from Active Directory. You must be running Azure AD Connect version 1.2.70 or later. Earlier versions of Azure AD Connect than 1.2.70 will synchronize the group objects from Active Directory, but they won't include the required group name attributes.
+
+2. **Configure the application registration in Azure AD to include group claims in tokens**
+
+ You can configure group claims in the **Enterprise Applications** section of the portal, or by using the application manifest in the **Application Registrations** section. To configure group claims in the application manifest, see [Configure the Azure AD application registration for group attributes](#configure-the-azure-ad-application-registration-for-group-attributes) later in this article.
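For example, in the application manifest, group claims are driven by the `groupMembershipClaims` property. The fragment below is shown in isolation, with all other manifest properties omitted:

```json
{
  "groupMembershipClaims": "SecurityGroup"
}
```

Supported values for this property include `None`, `SecurityGroup`, and `All`.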
## Add group claims to tokens for SAML applications using SSO configuration
-To configure Group Claims for a Gallery or Non-Gallery SAML application, open **Enterprise Applications**, click on the application in the list, select **Single Sign On configuration**, and then select **User Attributes & Claims**.
+To configure group claims for a gallery or non-gallery SAML application via single sign-on (SSO):
+
+1. Open **Enterprise Applications**, select the application in the list, select **Single Sign On configuration**, and then select **User Attributes & Claims**.
-Click on **Add a group claim**
+1. Select **Add a group claim**.
-![Screenshot that shows the "User Attributes & Claims" page with "Add a group claim" selected.](media/how-to-connect-fed-group-claims/group-claims-ui-1.png)
+ ![Screenshot that shows the page for user attributes and claims, with the button for adding a group claim selected.](media/how-to-connect-fed-group-claims/group-claims-ui-1.png)
-Use the radio buttons to select which groups should be included in the token
+1. Use the options to select which groups should be included in the token.
-![Screenshot that shows the "Group Claims" window with "Security groups" selected.](media/how-to-connect-fed-group-claims/group-claims-ui-2.png)
+ ![Screenshot that shows the Group Claims window with group options.](media/how-to-connect-fed-group-claims/group-claims-ui-2.png)
-| Selection | Description |
-|-|-|
-| **All groups** | Emits security groups and distribution lists and roles. |
-| **Security groups** | Emits security groups the user is a member of in the groups claim |
-| **Directory roles** | If the user is assigned directory roles, they are emitted as a 'wids' claim (groups claim won't be emitted) |
-| **Groups assigned to the application** | Emits only the groups that are explicitly assigned to the application and the user is a member of |
+ | Selection | Description |
+ |-|-|
 | **All groups** | Emits security groups, distribution lists, and roles. |
 | **Security groups** | Emits security groups that the user is a member of in the groups claim. |
 | **Directory roles** | If the user is assigned directory roles, they're emitted as a `wids` claim. (The groups claim won't be emitted.) |
+ | **Groups assigned to the application** | Emits only the groups that are explicitly assigned to the application and that the user is a member of. |
-For example, to emit all the Security Groups the user is a member of, select Security Groups
+ - For example, to emit all the security groups that the user is a member of, select **Security groups**.
-![Screenshot that shows the "Group Claims" window with "Security groups" selected and the "Source attribute" drop-down menu open.](media/how-to-connect-fed-group-claims/group-claims-ui-3.png)
+ ![Screenshot that shows the Group Claims window, with the option for security groups selected.](media/how-to-connect-fed-group-claims/group-claims-ui-3.png)
-To emit groups using Active Directory attributes synced from Active Directory instead of Azure AD objectIDs select the required format from the drop-down. Only groups synchronized from Active Directory will be included in the claims.
+ To emit groups by using Active Directory attributes synced from Active Directory instead of Azure AD `objectID` attributes, select the required format from the **Source attribute** drop-down list. Only groups synchronized from Active Directory will be included in the claims.
-![Screenshot that shows the "Source attribute" drop-down menu open.](media/how-to-connect-fed-group-claims/group-claims-ui-4.png)
+ ![Screenshot that shows the drop-down menu for the source attribute.](media/how-to-connect-fed-group-claims/group-claims-ui-4.png)
-To emit only groups assigned to the application, select **Groups Assigned to the application**
+ - To emit only groups assigned to the application, select **Groups assigned to the application**.
-![Screenshot that shows the "Group Claims" window with "Groups assigned to the application" selected.](media/how-to-connect-fed-group-claims/group-claims-ui-4-1.png)
+ ![Screenshot that shows the Group Claims window, with the option for groups assigned to the application selected.](media/how-to-connect-fed-group-claims/group-claims-ui-4-1.png)
-Groups assigned to the application will be included in the token. Other groups the user is a member of will be omitted. With this option nested groups are not included and the user must be a direct member of the group assigned to the application.
+ Groups assigned to the application will be included in the token. Other groups that the user is a member of will be omitted. With this option, nested groups are not included and the user must be a direct member of the group assigned to the application.
-To change the groups assigned to the application, select the application from the **Enterprise Applications** list and then click **Users and Groups** from the application's left-hand navigation menu.
+ To change the groups assigned to the application, select the application from the **Enterprise Applications** list. Then select **Users and Groups** from the application's left menu.
-See the document [Assign a user or group to an enterprise app](../../active-directory/manage-apps/assign-user-or-group-access-portal.md) for details of managing group assignment to applications.
+ For more information about managing group assignment to applications, see [Assign a user or group to an enterprise app](../../active-directory/manage-apps/assign-user-or-group-access-portal.md).
-### Advanced options
+### Set advanced options
#### Customize group claim name
-The way group claims are emitted can be modified by the settings under Advanced options
+You can modify the way that group claims are emitted by using the settings under **Advanced options**.
-Customize the name of the group claim: If selected, a different claim type can be specified for group claims. Enter the claim type in the Name field and the optional namespace for the claim in the namespace field.
+If you select **Customize the name of the group claim**, you can specify a different claim type for group claims. Enter the claim type in the **Name** box and the optional namespace for the claim in the **Namespace** box.
-![Screenshot that shows the "Advanced options" section with "Customize the name of the group claim" selected and "Name" and "Namespace" values entered.](media/how-to-connect-fed-group-claims/group-claims-ui-5.png)
+![Screenshot that shows advanced options, with the option of customizing the name of the group claim selected and the name and namespace values entered.](media/how-to-connect-fed-group-claims/group-claims-ui-5.png)
-Some applications require the group membership information to appear in the 'role' claim. You can optionally emit the user's groups as roles by checking the 'Emit groups a role claims' box.
+Some applications require the group membership information to appear in the role claim. You can optionally emit the user's groups as roles by selecting the **Emit groups as role claims** checkbox.
-![Screenshot that shows the "Advanced options" section with "Customize the name of the group claim" and "Emit groups as role claims" selected.](media/how-to-connect-fed-group-claims/group-claims-ui-6.png)
+![Screenshot that shows advanced options, with the checkboxes selected for customizing the name of the group claim and emitting groups as role claims.](media/how-to-connect-fed-group-claims/group-claims-ui-6.png)
> [!NOTE]
-> If the option to emit group data as roles is used, only groups will appear in the role claim. Any Application Roles the user is assigned will not appear in the role claim.
+> If you use the option to emit group data as roles, only groups will appear in the role claim. Any application roles that the user is assigned to won't appear in the role claim.
#### Group filtering
-Group filtering allows for fine grain control of the list of groups that is included as part of the group claim. When a filter is configured, only groups that match the filter will be included in the groups claim sent to that application. The filter will be applied against all groups regardless of the group hierarchy.
-Filters can be configured to be applied to the group's display name or SAMAccountName and the following filtering operations are supported:
+Group filtering allows for fine control of the list of groups that's included as part of the group claim. When a filter is configured, only groups that match the filter will be included in the groups claim that's sent to that application. The filter will be applied against all groups regardless of the group hierarchy.
+You can configure filters to be applied to the group's display name or `SAMAccountName` attribute. The following filtering operations are supported:
- ![Screenshot of filtering](media/how-to-connect-fed-group-claims/group-filter-1.png)
+ - **Prefix**: Matches the start of the selected attribute.
+ - **Suffix**: Matches the end of the selected attribute.
+ - **Contains**: Matches any location in the selected attribute.
+
+ ![Screenshot that shows filtering options.](media/how-to-connect-fed-group-claims/group-filter-1.png)
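The three filter operations can be sketched as simple string checks. The following Python sketch illustrates only the matching semantics under the assumption that the filter compares plain attribute values; the group names are hypothetical, and Azure AD applies the actual filter server-side when it builds the token:

```python
# Illustration of the three group-filter operations: prefix, suffix, contains.
# This mimics the matching semantics only; it is not the service's implementation.

def group_matches(attribute_value: str, operation: str, match_value: str) -> bool:
    if operation == "prefix":
        return attribute_value.startswith(match_value)
    if operation == "suffix":
        return attribute_value.endswith(match_value)
    if operation == "contains":
        return match_value in attribute_value
    raise ValueError(f"unknown operation: {operation}")

# Hypothetical group display names.
groups = ["HR-Admins", "Finance-Admins", "HR-Readers"]

# Only groups whose display name starts with "HR-" would be emitted.
print([g for g in groups if group_matches(g, "prefix", "HR-")])
# → ['HR-Admins', 'HR-Readers']
```

With a prefix filter of `HR-`, only the two `HR-` groups would reach the claim, regardless of where they sit in the group hierarchy.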
#### Group transformation
-Some applications might require the groups in a different format to how they are represented in Azure AD. To support this, you can apply a transformation to each group that will be emitted in the group claim. This is achieved by allowing the configuration of a regex and a replacement value on custom group claims.
+Some applications might require the groups in a different format from how they're represented in Azure AD. To support this requirement, you can apply a transformation to each group that will be emitted in the group claim. You do this by configuring a regular expression (regex) and a replacement value on custom group claims.
- ![Screenshot of group transformation](media/how-to-connect-fed-group-claims/group-transform-1.png)\
+![Screenshot of group transformation, with regex information added.](media/how-to-connect-fed-group-claims/group-transform-1.png)
+- **Regex pattern**: Use a regex to parse text strings according to the pattern that you set in this box. If the regex pattern that you outline evaluates to `true`, the regex replacement pattern will run.
+- **Regex replacement pattern**: Outline in regex notation how you want to replace your string if the regex pattern that you outlined evaluates to `true`. Use capture groups to match subexpressions in this replacement regex.
-For more information about regex replace and capture groups, see [The Regular Expression Engine - The Captured Group](/dotnet/standard/base-types/the-regular-expression-object-model?WT.mc_id=Portal-fx#the-captured-group).
+For more information about regex replace and capture groups, see [The Regular Expression Object Model: The Captured Group](/dotnet/standard/base-types/the-regular-expression-object-model?WT.mc_id=Portal-fx#the-captured-group).
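As an illustration of how a replacement pattern with capture groups rewrites each emitted group name, here's a Python sketch. The service itself uses the .NET regex engine (where the replacement syntax is `$1` rather than `\1`), and the domain and group names below are hypothetical:

```python
import re

# Hypothetical on-premises group names in domain\name format.
groups = [r"CONTOSO\HR-Admins", r"CONTOSO\Finance-Admins"]

# Regex pattern: match the domain prefix and capture the rest of the name.
pattern = r"^CONTOSO\\(.+)$"

# Replacement pattern: keep only the first capture group, dropping the
# domain prefix. (The portal's .NET syntax would write this as "$1".)
replacement = r"\1"

transformed = [re.sub(pattern, replacement, g) for g in groups]
print(transformed)  # → ['HR-Admins', 'Finance-Admins']
```

Any group that doesn't match the pattern is left unchanged by `re.sub` here, whereas the service omits non-matching groups from the claim, which is why the transform can double as a filter.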
>[!NOTE]
-> As per the Azure AD documentation a restricted claim cannot be modified using policy. The data source cannot be changed, and no transformation is applied when generating these claims. The "Groups" claim is still a restricted claim, hence you need to customize the groups by changing the name, if you select a restricted name for the name of your custom group claim then the claim will be ignored at runtime.
+> As described in the Azure AD documentation, you can't modify a restricted claim by using a policy. The data source can't be changed, and no transformation is applied when you're generating these claims. The group claim is still a restricted claim, so you need to customize the groups by changing the name. If you select a restricted name for the name of your custom group claim, the claim will be ignored at runtime.
>
>The regex transform feature can also be used as a filter since any groups that don't match the regex pattern will not be emitted in the resulting claim.
+> You can also use the regex transform feature as a filter, because any groups that don't match the regex pattern will not be emitted in the resulting claim.
-### Edit the group claims configuration
+### Edit the group claim configuration
-Once a group claim configuration has been added to the User Attributes & Claims configuration, the option to add a group claim will be greyed out. To change the group claim configuration click on the group claim in the **Additional claims** list.
+After you add a group claim configuration to the **User Attributes & Claims** configuration, the option to add a group claim will be unavailable. To change the group claim configuration, select the group claim in the **Additional claims** list.
-![claims UI](media/how-to-connect-fed-group-claims/group-claims-ui-7.png)
+![Screenshot of the area for user attributes and claims, with the name of a group claim highlighted.](media/how-to-connect-fed-group-claims/group-claims-ui-7.png)
-## Configure the Azure AD Application Registration for group attributes
+## Configure the Azure AD application registration for group attributes
-Group claims can also be configured in the [Optional Claims](../../active-directory/develop/active-directory-optional-claims.md) section of the [Application Manifest](../../active-directory/develop/reference-app-manifest.md).
+You can also configure group claims in the [optional claims](../../active-directory/develop/active-directory-optional-claims.md) section of the [application manifest](../../active-directory/develop/reference-app-manifest.md).
-1. In the portal ->Azure Active Directory -> Application Registrations->Select Application->Manifest
+1. In the portal, select **Azure Active Directory** > **Application Registrations** > **Select Application** > **Manifest**.
-2. Enable group membership claims by changing the groupMembershipClaim
+2. Enable group membership claims by changing `groupMembershipClaims`.
-Valid values are:
+ Valid values are:
-| Selection | Description |
-|-|-|
-| **"All"** | Emits security groups, distribution lists and roles |
-| **"SecurityGroup"** | Emits security groups the user is a member of in the groups claim |
-| **"DirectoryRole"** | If the user is assigned directory roles, they are emitted as a 'wids' claim (groups claim won't be emitted) |
-| **"ApplicationGroup"** | Emits only the groups that are explicitly assigned to the application and the user is a member of |
-| **"None"** | No Groups are returned.(Its not case-sensitive so none works as well and it can be set directly in the application manifest.) |
+ | Selection | Description |
+ |-|-|
+ | `All` | Emits security groups, distribution lists, and roles. |
+ | `SecurityGroup` | Emits security groups that the user is a member of in the group claim. |
+ | `DirectoryRole` | If the user is assigned directory roles, they're emitted as a `wids` claim. (A group claim won't be emitted.) |
+ | `ApplicationGroup` | Emits only the groups that are explicitly assigned to the application and that the user is a member of. |
+ | `None` | No groups are returned. (It's not case-sensitive, so `none` also works. It can be set directly in the application manifest.) |
For example:
   ```json
   "groupMembershipClaims": "SecurityGroup"
   ```
- By default Group ObjectIDs will be emitted in the group claim value. To modify the claim value to contain on premises group attributes, or to change the claim type to role, use OptionalClaims configuration as follows:
+ By default, group `ObjectID` attributes will be emitted in the group claim value. To modify the claim value to contain on-premises group attributes, or to change the claim type to a role, use the `optionalClaims` configuration described in the next step.
-3. Set group name configuration optional claims.
+3. Set optional claims for group name configuration.
- If you want the groups in the token to contain the on premises AD group attributes, specify which token type optional claim should be applied to in the optional claims section. Multiple token types can be listed:
+ If you want the groups in the token to contain the on-premises Active Directory group attributes, specify which token-type optional claim should be applied in the `optionalClaims` section. You can list multiple token types:
- - idToken for the OIDC ID token
- - accessToken for the OAuth/OIDC access token
- - Saml2Token for SAML tokens.
+ - `idToken` for the OIDC ID token
+ - `accessToken` for the OAuth/OIDC access token
+ - `Saml2Token` for SAML tokens
> [!NOTE]
- > The Saml2Token type applies to both SAML1.1 and SAML2.0 format tokens
+ > The `Saml2Token` type applies to tokens in both SAML1.1 and SAML2.0 format.
- For each relevant token type, modify the groups claim to use the OptionalClaims section in the manifest. The OptionalClaims schema is as follows:
+ For each relevant token type, modify the group claim to use the `optionalClaims` section in the manifest. The `optionalClaims` schema is as follows:
   ```json
   {
       "name": "groups",
       "source": null,
       "essential": false,
       "additionalProperties": []
   }
   ```
- | Optional Claims Schema | Value |
+ | Optional claims schema | Value |
|-|-|
- | **name:** | Must be "groups" |
- | **source:** | Not used. Omit or specify null |
- | **essential:** | Not used. Omit or specify false |
- | **additionalProperties:** | List of additional properties. Valid options are "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name", "emit_as_roles" |
+ | `name` | Must be `"groups"`. |
+ | `source` | Not used. Omit or specify `null`. |
+ | `essential` | Not used. Omit or specify `false`. |
+ | `additionalProperties` | List of additional properties. Valid options are `"sam_account_name"`, `"dns_domain_and_sam_account_name"`, `"netbios_domain_and_sam_account_name"`, and `"emit_as_roles"`. |
- In additionalProperties only one of "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name" are required. If more than one is present, the first is used and any others ignored.
+ In `additionalProperties`, only one of `"sam_account_name"`, `"dns_domain_and_sam_account_name"`, or `"netbios_domain_and_sam_account_name"` is required. If more than one is present, the first is used and any others are ignored.
- Some applications require group information about the user in the role claim. To change the claim type to from a group claim to a role claim, add "emit_as_roles" to additional properties. The group values will be emitted in the role claim.
+ Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add `"emit_as_roles"` to additional properties. The group values will be emitted in the role claim.
> [!NOTE]
- > If "emit_as_roles" is used any Application Roles configured that the user is assigned will not appear in the role claim
+ > If you use `"emit_as_roles"`, any configured application roles that the user is assigned to will not appear in the role claim.
### Examples
-Emit groups as group names in OAuth access tokens in dnsDomainName\SAMAccountName format
+Emit groups as group names in OAuth access tokens in `DNSDomainName\sAMAccountName` format:
```json
"optionalClaims": {
    "accessToken": [
        {
            "name": "groups",
            "additionalProperties": ["dns_domain_and_sam_account_name"]
        }
    ]
}
```
-To emit group names to be returned in netbiosDomain\samAccountName format as the roles claim in SAML and OIDC ID Tokens:
+Emit group names to be returned in `NetbiosDomain\sAMAccountName` format as the role claim in SAML and OIDC ID tokens:
```json
"optionalClaims": {
    "saml2Token": [
        {
            "name": "groups",
            "additionalProperties": ["netbios_domain_and_sam_account_name", "emit_as_roles"]
        }
    ],
    "idToken": [
        {
            "name": "groups",
            "additionalProperties": ["netbios_domain_and_sam_account_name", "emit_as_roles"]
        }
    ]
}
```
## Next steps
-- [Add authorization using groups & groups claims to an ASP.NET Core web app (Code sample)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md)
+- [Add authorization using groups & group claims to an ASP.NET Core web app (code sample)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md)
- [Assign a user or group to an enterprise app](../../active-directory/manage-apps/assign-user-or-group-access-portal.md)
- [Configure role claims](../../active-directory/develop/active-directory-enterprise-app-role-management.md)
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
Previously updated : 01/24/2022 Last updated : 02/17/2022
# Remediate risks and unblock users
-After completing your [investigation](howto-identity-protection-investigate-risk.md), you need to take action to remediate the risk or unblock users. Organizations can enable automated remediation using their [risk policies](howto-identity-protection-configure-risk-policies.md). Organizations should try to close all risk detections that they are presented in a time period your organization is comfortable with. Microsoft recommends closing events quickly, because time matters when working with risk.
+After completing your [investigation](howto-identity-protection-investigate-risk.md), you need to take action to remediate the risk or unblock users. Organizations can enable automated remediation using their [risk policies](howto-identity-protection-configure-risk-policies.md). Organizations should try to close all risk detections that they're presented with, in a time period your organization is comfortable with. Microsoft recommends closing events quickly, because time matters when working with risk.
## Remediation
Administrators have the following options to remediate:
1. If the account is confirmed compromised:
   1. Select the event or user in the **Risky sign-ins** or **Risky users** reports and choose **Confirm compromised**.
- 1. If a risk policy or a Conditional Access policy was not triggered at part of the risk detection, and the risk was not [self-remediated](#self-remediation-with-risk-policy), then:
+ 1. If a risk policy or a Conditional Access policy wasn't triggered as part of the risk detection, and the risk wasn't [self-remediated](#self-remediation-with-risk-policy), then:
   1. [Request a password reset](#manual-password-reset).
   1. Block the user if you suspect the attacker can reset the password or do multi-factor authentication for the user.
   1. Revoke refresh tokens.
Some detections may not raise risk to the level where a user self-remediation wo
### Manual password reset
-If requiring a password reset using a user risk policy is not an option, administrators can close all risk detections for a user with a manual password reset.
+If requiring a password reset using a user risk policy isn't an option, administrators can close all risk detections for a user with a manual password reset.
Administrators are given two options when resetting a password for their users:

- **Generate a temporary password** - By generating a temporary password, you can immediately bring an identity back into a safe state. This method requires contacting the affected users because they need to know what the temporary password is. Because the password is temporary, the user is prompted to change the password to something new during the next sign-in.
-- **Require the user to reset password** - Requiring the users to reset passwords enables self-recovery without contacting help desk or an administrator. This method only applies to users that are registered for Azure AD MFA and SSPR. For users that have not been registered, this option is not available.
+- **Require the user to reset password** - Requiring the users to reset passwords enables self-recovery without contacting help desk or an administrator. This method only applies to users that are registered for Azure AD MFA and SSPR. For users that haven't been registered, this option isn't available.
### Dismiss user risk
-If a password reset is not an option for you, because for example the user has been deleted, you can choose to dismiss user risk detections.
+If a password reset isn't an option for you, you can choose to dismiss user risk detections.
-When you click **Dismiss user risk**, all events are closed and the affected user is no longer at risk. However, because this method does not have an impact on the existing password, it does not bring the related identity back into a safe state.
+When you select **Dismiss user risk**, all events are closed and the affected user is no longer at risk. However, because this method doesn't have an impact on the existing password, it doesn't bring the related identity back into a safe state.
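Dismissal can also be scripted through the Microsoft Graph identity protection API, which exposes a `riskyUsers/dismiss` action that accepts a list of user object IDs. The sketch below only builds the request; the user ID is a hypothetical placeholder, and acquiring a bearer token and sending the POST are left out:

```python
import json

# Sketch: build the Microsoft Graph request that dismisses user risk.
# The ID below is a placeholder; authentication and the actual HTTP call
# are intentionally omitted.

GRAPH_ENDPOINT = "https://graph.microsoft.com/v1.0/identityProtection/riskyUsers/dismiss"

def build_dismiss_request(user_ids: list[str]) -> tuple[str, str]:
    """Return the URL and JSON body for a dismiss-user-risk POST."""
    body = json.dumps({"userIds": user_ids})
    return GRAPH_ENDPOINT, body

url, body = build_dismiss_request(["00000000-0000-0000-0000-000000000001"])
print(url)
print(body)
```

Like the portal's **Dismiss user risk** button, this closes the detections without changing the password, so it doesn't bring the identity back into a safe state by itself.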
### Close individual risk detections manually
-You can close individual risk detections manually. By closing risk detections manually, you can lower the user risk level. Typically, risk detections are closed manually in response to a related investigation. For example, when talking to a user reveals that an active risk detection is not required anymore.
+You can close individual risk detections manually. By closing risk detections manually, you can lower the user risk level. Typically, risk detections are closed manually in response to a related investigation. For example, when talking to a user reveals that an active risk detection isn't required anymore.
When closing risk detections manually, you can choose to take any of the following actions to change the status of a risk detection:
- Confirm sign-in safe
- Confirm sign-in compromised
+#### Deleted users
+
+It isn't possible for administrators to dismiss risk for users who have been deleted from the directory. To remove deleted users, open a Microsoft support case.
## Unblocking users

An administrator may choose to block a sign-in based on their risk policy or investigations. A block may occur based on either sign-in or user risk.
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
editor: Last updated 01/20/2022
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure VM image builder | [Configure Azure Image Builder Service permissions using Azure CLI](../../virtual-machines/linux/image-builder-permissions-cli.md#using-managed-identity-for-azure-storage-access) |
| Azure Virtual Machine Scale Sets | [Configure managed identities on virtual machine scale set - Azure CLI](qs-configure-cli-windows-vmss.md) |
| Azure Virtual Machines | [Secure and use policies on virtual machines in Azure](../../virtual-machines/windows/security-policy.md#managed-identities-for-azure-resources) |
+| Azure Web PubSub Service | [Managed identities for Azure Web PubSub Service](../../azure-web-pubsub/howto-use-managed-identity.md) |
## Next steps
-- [Managed identities overview](Overview.md)
+- [Managed identities overview](Overview.md)
active-directory Active And Thriving Perth Airport Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/active-and-thriving-perth-airport-tutorial.md
- Title: 'Tutorial: Azure AD SSO integration with Active and Thriving - Perth Airport'
-description: Learn how to configure single sign-on between Azure Active Directory and Active and Thriving - Perth Airport.
-------- Previously updated : 12/20/2021----
-# Tutorial: Azure AD SSO integration with Active and Thriving - Perth Airport
-
-In this tutorial, you'll learn how to integrate Active and Thriving - Perth Airport with Azure Active Directory (Azure AD). When you integrate Active and Thriving - Perth Airport with Azure AD, you can:
-
-* Control in Azure AD who has access to Active and Thriving - Perth Airport.
-* Enable your users to be automatically signed-in to Active and Thriving - Perth Airport with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Active and Thriving - Perth Airport single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
-
-* Active and Thriving - Perth Airport supports **SP and IDP** initiated SSO.
-
-> [!NOTE]
-> The identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-
-## Add Active and Thriving - Perth Airport from the gallery
-
-To configure the integration of Active and Thriving - Perth Airport into Azure AD, you need to add Active and Thriving - Perth Airport from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **Active and Thriving - Perth Airport** in the search box.
-1. Select **Active and Thriving - Perth Airport** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD SSO for Active and Thriving - Perth Airport
-
-Configure and test Azure AD SSO with Active and Thriving - Perth Airport using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Active and Thriving - Perth Airport.
-
-To configure and test Azure AD SSO with Active and Thriving - Perth Airport, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Active and Thriving - Perth Airport SSO](#configure-active-and-thrivingperth-airport-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Active and Thriving - Perth Airport test user](#create-active-and-thrivingperth-airport-test-user)** - to have a counterpart of B.Simon in Active and Thriving - Perth Airport that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **Active and Thriving - Perth Airport** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. In the **Basic SAML Configuration** section, you don't have to perform any steps because the app is already pre-integrated with Azure.
-
-1. In the **Basic SAML Configuration** section, perform the following steps if you wish to configure the application in SP-initiated mode:
-
- a. In the **Identifier** text box, type the URL:
- `https://sso-perthairport.activeandthriving.com.au/saml2/aad/metadata`
-
- b. In the **Reply URL** text box, type the URL:
- `https://sso-perthairport.activeandthriving.com.au/saml2/aad/login`
-
- c. In the **Sign-on URL** text box, type the URL:
- `https://sso-perthairport.activeandthriving.com.au/saml2/aad/login`
-
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
-
- ![The Certificate download link](common/certificatebase64.png)
-
-1. On the **Set up Active and Thriving - Perth Airport** section, copy the appropriate URL(s) based on your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Active and Thriving - Perth Airport.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Active and Thriving - Perth Airport**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Active and Thriving - Perth Airport SSO
-
-To configure single sign-on on the **Active and Thriving - Perth Airport** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Active and Thriving - Perth Airport support team](mailto:hello@activeandthriving.com.au). They configure this setting so that the SAML SSO connection is set properly on both sides.
-
-### Create Active and Thriving - Perth Airport test user
-
-In this section, you create a user called Britta Simon in Active and Thriving - Perth Airport. Work with [Active and Thriving - Perth Airport support team](mailto:hello@activeandthriving.com.au) to add the users in the Active and Thriving - Perth Airport platform. Users must be created and activated before you use single sign-on.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with the following options.
-
-#### SP initiated:
-
-* Click on **Test this application** in Azure portal. This will redirect to Active and Thriving - Perth Airport Sign on URL where you can initiate the login flow.
-
-* Go to Active and Thriving - Perth Airport Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
-
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Active and Thriving - Perth Airport for which you set up the SSO.
-
-You can also use Microsoft My Apps to test the application in any mode. When you click the Active and Thriving - Perth Airport tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you are automatically signed in to the Active and Thriving - Perth Airport instance for which you set up SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-
-## Next steps
-
-Once you configure Active and Thriving - Perth Airport you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Fence Mobile Remotemanager Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fence-mobile-remotemanager-sso-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with FENCE-Mobile RemoteManager SSO'
+description: Learn how to configure single sign-on between Azure Active Directory and FENCE-Mobile RemoteManager SSO.
++++++++ Last updated : 02/01/2022++++
+# Tutorial: Azure AD SSO integration with FENCE-Mobile RemoteManager SSO
+
+In this tutorial, you'll learn how to integrate FENCE-Mobile RemoteManager SSO with Azure Active Directory (Azure AD). When you integrate FENCE-Mobile RemoteManager SSO with Azure AD, you can:
+
+* Control in Azure AD who has access to FENCE-Mobile RemoteManager SSO.
+* Enable your users to be automatically signed-in to FENCE-Mobile RemoteManager SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* FENCE-Mobile RemoteManager SSO single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* FENCE-Mobile RemoteManager SSO supports **SP** initiated SSO.
+
+## Adding FENCE-Mobile RemoteManager SSO from the gallery
+
+To configure the integration of FENCE-Mobile RemoteManager SSO into Azure AD, you need to add FENCE-Mobile RemoteManager SSO from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **FENCE-Mobile RemoteManager SSO** in the search box.
+1. Select **FENCE-Mobile RemoteManager SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
++
+## Configure and test Azure AD SSO for FENCE-Mobile RemoteManager SSO
+
+Configure and test Azure AD SSO with FENCE-Mobile RemoteManager SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in FENCE-Mobile RemoteManager SSO.
+
+To configure and test Azure AD SSO with FENCE-Mobile RemoteManager SSO, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure FENCE-Mobile RemoteManager SSO](#configure-fence-mobile-remotemanager-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create FENCE-Mobile RemoteManager SSO test user](#create-fence-mobile-remotemanager-sso-test-user)** - to have a counterpart of B.Simon in FENCE-Mobile RemoteManager SSO that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **FENCE-Mobile RemoteManager SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `api://www.fence-mrm.bsc.fujitsu.com/<TID>/<GUID>`
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | Reply URL |
+ | |
+ | `https://www.fence-mrm.bsc.fujitsu.com/SConsole/SSOServlet?tid=<TID>` |
+ | `https://ctl.fence-mrm.bsc.fujitsu.com/SControl/SSOServlet?tid=<TID>` |
+ | `https://www.fence-mrm.bsc.fujitsu.com/IMDMLogin/SSOServlet?tid=<TID>` |
+ |
+
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://www.fence-mrm.bsc.fujitsu.com/SConsole/login.jsf?tid=<TID>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [FENCE-Mobile RemoteManager SSO Client support team](mailto:fj-FMRM_Dev_Azure@dl.jp.fujitsu.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up FENCE-Mobile RemoteManager SSO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
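The URL patterns above are parameterized by the tenant ID (`<TID>`) that the support team issues. As a minimal sketch, using a made-up tenant ID (the real value comes from Fujitsu, not this tutorial), you can fill in and sanity-check the SP-initiated patterns like this:

```python
from urllib.parse import parse_qs, urlparse

# Hypothetical tenant ID -- the real value comes from the
# FENCE-Mobile RemoteManager support team.
TID = "contoso123"

sign_on_url = f"https://www.fence-mrm.bsc.fujitsu.com/SConsole/login.jsf?tid={TID}"
reply_url = f"https://www.fence-mrm.bsc.fujitsu.com/SConsole/SSOServlet?tid={TID}"

# Both URLs must carry the tenant ID as the `tid` query parameter.
for url in (sign_on_url, reply_url):
    parsed = urlparse(url)
    assert parsed.scheme == "https"
    assert parse_qs(parsed.query)["tid"] == [TID]
```

Note that the Identifier pattern, `api://www.fence-mrm.bsc.fujitsu.com/<TID>/<GUID>`, additionally needs a `<GUID>`, which also comes from the support team.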
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to FENCE-Mobile RemoteManager SSO.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **FENCE-Mobile RemoteManager SSO**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure FENCE-Mobile RemoteManager SSO
+
+To configure single sign-on on the **FENCE-Mobile RemoteManager SSO** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [FENCE-Mobile RemoteManager SSO support team](mailto:fj-FMRM_Dev_Azure@dl.jp.fujitsu.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create FENCE-Mobile RemoteManager SSO test user
+
+In this section, you create a user called Britta Simon in FENCE-Mobile RemoteManager SSO. Work with [FENCE-Mobile RemoteManager SSO support team](mailto:fj-FMRM_Dev_Azure@dl.jp.fujitsu.com) to add the users in the FENCE-Mobile RemoteManager SSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in Azure portal. This will redirect to FENCE-Mobile RemoteManager SSO Sign-on URL where you can initiate the login flow.
+
+* Go to FENCE-Mobile RemoteManager SSO Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the FENCE-Mobile RemoteManager SSO tile in My Apps, you are redirected to the FENCE-Mobile RemoteManager SSO Sign-on URL. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure FENCE-Mobile RemoteManager SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
++
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Gong for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Gong.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: 6c8285d3-4f35-4325-9adb-d1a44668a03a
+++
+ms.devlang: na
+ Last updated : 02/09/2022+++
+# Tutorial: Configure Gong for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Gong and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Gong](https://www.gong.io/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Gong.
+> * Remove users in Gong when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Gong.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Gong with **Technical Administrator** privileges.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Gong](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Gong to support provisioning with Azure AD
+
+1. Go to your company settings page > **PEOPLE** area > **Team Member Provisioning**.
+1. Select **Azure AD** as the provisioning source.
+1. To assign data capture, workspace, and permission settings to Azure AD groups:
+ 1. In the **Assign settings** area, click **ADD ASSIGNMENT**.
+ 1. Give the assignment a name.
+ 1. In the **Azure AD groups** area, select the Azure AD group you want to define the settings for.
+ 1. In the **Data capture** area, select the home workspace and the data capture settings for people that belong to this group.
+ 1. In the **Workspaces and permissions** area, set the permissions profile for other workspaces in your org.
+ 1. In the **Update settings** area, define how settings can be managed for this assignment:
+ * Select **Manual editing** to manage data capture and permission settings for users in this assignment in Gong.
+    After you create the assignment, any changes you make to group settings in Azure AD are not pushed to Gong; however, you can edit the group settings manually in Gong.
+ * (Recommended) Select **Automatic updates** to give Azure AD governance over data capture and permission settings in Gong.
+ Define data capture and permission settings in Gong only when creating an assignment. Thereafter, other changes will only be applied to users in groups with this assignment when pushed from Azure AD.
+ 1. Click **ADD ASSIGNMENT**.
+1. For orgs that don't have assignments (step 3), select the permission profile to apply to automatically provisioned users.
+
+ [More information on permission profiles](https://help.gong.io/hc/en-us/articles/360028568911#UUID-34baef91-0aba-1295-4032-ff49102cb182).
+
+1. In the **Manager's provisioning settings** area:
+ 1. Select **Notify direct managers with recorded teams when a new team member is imported** to keep your team managers in the loop.
+ 1. Select **Managers can turn data capture on or off for their team** to give your team managers some autonomy.
+
+ > [!TIP]
+ > For more information, see "What are Manager's provisioning settings" in the [FAQ for team member provisioning](https://help.gong.io/hc/en-us/articles/360042352912#UUID-0d3df83a-44d1-11b9-ddf5-3ec649c2f594) article.
+1. Click **Update** to save your settings.
+
+> [!NOTE]
+> If you later change the provisioning source from Azure AD and then want to return to Azure AD provisioning, you will need to re-authenticate to Azure AD.
+
+## Step 3. Add Gong from the Azure AD application gallery
+
+Add Gong from the Azure AD application gallery to start managing provisioning to Gong. If you have previously set up Gong for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Gong, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Gong
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Gong based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Gong in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Gong**.
+
+ ![The Gong link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, click **Authorize** and make sure that you enter your Gong account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Gong. If the connection fails, ensure your Gong account has Admin permissions and try again.
+
+ ![Token](media/gong-provisioning-tutorial/gong-authorize.png)
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Gong**.
+
+1. Review the user attributes that are synchronized from Azure AD to Gong in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Gong for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Gong API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by Gong|
+ |||||
+ |userName|String|&check;|&check;
+ |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String||
+ |active|Boolean||
+ |title|String||
+ |emails[type eq "work"].value|String||
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |phoneNumbers[type eq "work"].value|String||
+ |externalId|String||
+ |locale|String||
+ |timezone|String||
+ |urn:ietf:params:scim:schemas:extension:Gong:2.0:User:stateOrProvince|String||
+ |urn:ietf:params:scim:schemas:extension:Gong:2.0:User:country|String||
+
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Gong, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Gong by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
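To make the attribute table above concrete, here is an illustrative SCIM 2.0 user resource shaped like those mappings. This is a sketch with invented values; the exact wire format is produced by the Azure AD provisioning service, not by this snippet.

```python
import json

# Invented example user, shaped like the attribute-mapping table:
# userName is the matching attribute, and the Gong extension schema
# carries stateOrProvince and country.
user = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User",
        "urn:ietf:params:scim:schemas:extension:Gong:2.0:User",
    ],
    "userName": "B.Simon@contoso.com",  # matching attribute, supported for filtering
    "active": True,
    "title": "Recruiter",
    "name": {"givenName": "B", "familyName": "Simon"},
    "emails": [{"type": "work", "value": "B.Simon@contoso.com"}],
    "urn:ietf:params:scim:schemas:extension:Gong:2.0:User": {
        "stateOrProvince": "WA",
        "country": "US",
    },
}

# The attributes the table marks as required by Gong must be present.
assert "userName" in user and "name" in user
payload = json.dumps(user, indent=2)
```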
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
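Besides the portal blades, provisioning events are also exposed through Microsoft Graph (`auditLogs/provisioning`). As a sketch, the snippet below only builds such a request without sending it; the access token is a placeholder, and acquiring a real one with the `AuditLog.Read.All` permission (for example via MSAL) is out of scope here.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Placeholder token -- obtain a real one with AuditLog.Read.All before sending.
ACCESS_TOKEN = "<token-with-AuditLog.Read.All>"

# Ask for the newest 25 provisioning events.
query = urlencode({"$top": "25"})
req = Request(
    f"https://graph.microsoft.com/v1.0/auditLogs/provisioning?{query}",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)

# Nothing is sent here; with a real token you would pass `req` to
# urllib.request.urlopen(req) and read the `value` array of events.
```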
++
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Hiretual Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hiretual-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with Hiretual-SSO'
-description: Learn how to configure single sign-on between Azure Active Directory and Hiretual-SSO.
+ Title: 'Tutorial: Azure AD SSO integration with hireEZ-SSO'
+description: Learn how to configure single sign-on between Azure Active Directory and hireEZ-SSO.
Previously updated : 09/29/2021 Last updated : 02/14/2022
-# Tutorial: Azure AD SSO integration with Hiretual-SSO
+# Tutorial: Azure AD SSO integration with hireEZ-SSO
-In this tutorial, you'll learn how to integrate Hiretual-SSO with Azure Active Directory (Azure AD). When you integrate Hiretual-SSO with Azure AD, you can:
+In this tutorial, you'll learn how to integrate hireEZ-SSO with Azure Active Directory (Azure AD). When you integrate hireEZ-SSO with Azure AD, you can:
-* Control in Azure AD who has access to Hiretual-SSO.
-* Enable your users to be automatically signed-in to Hiretual-SSO with their Azure AD accounts.
+* Control in Azure AD who has access to hireEZ-SSO.
+* Enable your users to be automatically signed-in to hireEZ-SSO with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Hiretual-SSO with Azure Active D
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Hiretual-SSO single sign-on (SSO) enabled subscription.
+* hireEZ-SSO single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Hiretual-SSO supports **SP and IDP** initiated SSO.
+* hireEZ-SSO supports **SP and IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Add Hiretual-SSO from the gallery
+## Add hireEZ-SSO from the gallery
-To configure the integration of Hiretual-SSO into Azure AD, you need to add Hiretual-SSO from the gallery to your list of managed SaaS apps.
+To configure the integration of hireEZ-SSO into Azure AD, you need to add hireEZ-SSO from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
-1. In the **Add from the gallery** section, type **Hiretual-SSO** in the search box.
-1. Select **Hiretual-SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **hireEZ-SSO** in the search box.
+1. Select **hireEZ-SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Hiretual-SSO
+## Configure and test Azure AD SSO for hireEZ-SSO
-Configure and test Azure AD SSO with Hiretual-SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Hiretual-SSO.
+Configure and test Azure AD SSO with hireEZ-SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in hireEZ-SSO.
-To configure and test Azure AD SSO with Hiretual-SSO, perform the following steps:
+To configure and test Azure AD SSO with hireEZ-SSO, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Hiretual-SSO](#configure-hiretual-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Hiretual-SSO test user](#create-hiretual-sso-test-user)** - to have a counterpart of B.Simon in Hiretual-SSO that is linked to the Azure AD representation of user.
+1. **[Configure hireEZ-SSO](#configure-hireez-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create hireEZ-SSO test user](#create-hireez-sso-test-user)** - to have a counterpart of B.Simon in hireEZ-SSO that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Hiretual-SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **hireEZ-SSO** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:
- a. In the **Reply URL** text box, type a URL using the following pattern:
- `https://api.hiretual.com/v1/users/saml/login/<teamId>`
-
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type the URL:
- `https://app.hiretual.com/`
+ a. In the **Identifier** text box, type the URL:
+ `https://app.hireez.com/`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://api.hireez.com/v1/users/saml/login/<teamId>`
> [!NOTE]
- > This value is not real. Update this value with the actual Reply URL. Contact [Hiretual-SSO Client support team](mailto:support@hiretual.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Reply URL value is not real. Update this value with the actual Reply URL. Contact [hireEZ-SSO Client support team](mailto:support@hiretual.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. Hiretual-SSO application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. Click the **Properties** tab on the left menu bar, copy the value of **User access URL**, and save it on your computer.
- ![image](common/default-attributes.png)
-
-1. In addition to above, Hiretual-SSO application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
-
- | Name | Source Attribute |
- | - | |
- | firstName | user.givenname |
- | title | user.jobtitle |
- | lastName | user.surname |
+ ![Screenshot shows the User access URL.](./media/hiretual-tutorial/access-url.png "SSO Configuration")
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Hiretual-SSO.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to hireEZ-SSO.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Hiretual-SSO**.
+1. In the applications list, select **hireEZ-SSO**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Hiretual-SSO
+## Configure hireEZ-SSO
-1. Log in to your Hiretual-SSO company site as an administrator.
+1. Log in to your hireEZ-SSO company site as an administrator.
1. Go to **Security & Compliance** > **Single Sign-On**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Copy **X509 Certificate** from the metadata file and paste the content in the **Certificate** textbox.
- 1. Fill the required attributes manually according to your requirement and click **Save**.
- 1. Enable **Single Sign-On Connection Status** button.
- 1. Test your Single Sign-On integration first and then enable **Admin SP-Initiated Single Sign-On** button.
- > [!NOTE]
- > If your Single Sign-On configuration has any errors or you have trouble to login to Hiretual-SSO Web App/Extension after you connected Admin SP-Initiated Single Sign-On, please contact [Hiretual-SSO support team](mailto:support@hiretual.com).
+ > If your Single Sign-On configuration has any errors or you have trouble logging in to the hireEZ-SSO Web App/Extension after you connect Admin SP-Initiated Single Sign-On, please contact the [hireEZ-SSO support team](mailto:support@hiretual.com).
-### Create Hiretual-SSO test user
+### Create hireEZ-SSO test user
-In this section, you create a user called Britta Simon in Hiretual-SSO. Work with [Hiretual-SSO support team](mailto:support@hiretual.com) to add the users in the Hiretual-SSO platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in hireEZ-SSO. Work with [hireEZ-SSO support team](mailto:support@hiretual.com) to add the users in the hireEZ-SSO platform. Users must be created and activated before you use single sign-on.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Hiretual-SSO Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in the Azure portal. This will redirect to the hireEZ-SSO Sign-on URL where you can initiate the login flow.
-* Go to Hiretual-SSO Sign-on URL directly and initiate the login flow from there.
+* Go to hireEZ-SSO Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Hiretual-SSO for which you set up the SSO.
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the hireEZ-SSO for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Hiretual-SSO tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Hiretual-SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the hireEZ-SSO tile in My Apps, if configured in SP mode you are redirected to the application sign-on page to initiate the login flow, and if configured in IDP mode, you are automatically signed in to the hireEZ-SSO for which you set up the SSO. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Hiretual-SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure hireEZ-SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Prodpad Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prodpad-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add ProdPad from the Azure AD application gallery
-Add ProdPad from the Azure AD application gallery to start managing provisioning to ProdPad. If you have previously setup ProdPad for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add ProdPad from the Azure AD application gallery to start managing provisioning to ProdPad. If you have previously set up [ProdPad for SSO](prodpad-tutorial.md), you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
Once you've configured provisioning, use the following resources to monitor your
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+## Troubleshooting Tips
+Reach out to [ProdPad support team](mailto:help@prodpad.com) in case of any issues.
+
## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Thrive Lxp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thrive-lxp-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Thrive LXP to support provisioning with Azure AD
-Reach out to your Thrive LXP contact to generate your **Tenant url** and **Secret Token**. These values will be entered in the Tenant URL and Secret Token field in the Provisioning tab of your Thrive LXP application in the Azure portal.
+Reach out to your [Thrive LXP Client support team](mailto:support@thrivelearning.com) to generate your **Tenant url** and **Secret Token**. These values will be entered in the Tenant URL and Secret Token field in the Provisioning tab of your Thrive LXP application in the Azure portal.
## Step 3. Add Thrive LXP from the Azure AD application gallery
active-directory Zendesk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-provisioning-tutorial.md
For information on how to read the Azure AD provisioning logs, see [Reporting on
* When a custom role is assigned to a user or group, the Azure AD automatic user provisioning service also assigns the default role **Agent**. Only Agents can be assigned a custom role. For more information, see the [Zendesk API documentation](https://developer.zendesk.com/rest_api/docs/support/users#json-format-for-agent-or-admin-requests).
+* Import of all roles will fail if any of the custom roles is named either "agent" or "end-user". To avoid this, ensure that none of the imported roles have these display names.
+ ## Additional resources * [Manage user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
aks Reduce Latency Ppg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/reduce-latency-ppg.md
Proximity placement groups are a node pool concept and associated with each indi
The following example uses the [az group create][az-group-create] command to create a resource group named *myResourceGroup* in the *centralus* region. An AKS cluster named *myAKSCluster* is then created using the [az aks create][az-aks-create] command.
-Accelerated networking greatly improves networking performance of virtual machines. Ideally, use proximity placement groups in conjunction with accelerated networking. By default, AKS uses accelerated networking on [supported virtual machine instances](../virtual-network/create-vm-accelerated-networking-cli.md?toc=/azure/virtual-machines/linux/toc.json#limitations-and-constraints), which include most Azure virtual machine with two or more vCPUs.
+Accelerated networking greatly improves networking performance of virtual machines. Ideally, use proximity placement groups in conjunction with accelerated networking. By default, AKS uses accelerated networking on [supported virtual machine instances](../virtual-network/accelerated-networking-overview.md?toc=/azure/virtual-machines/linux/toc.json#limitations-and-constraints), which include most Azure virtual machines with two or more vCPUs.
Create a new AKS cluster with a proximity placement group associated to the first system node pool:
az group delete --name myResourceGroup --yes --no-wait
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
[az-aks-create]: /cli/azure/aks#az_aks_create
[az-group-create]: /cli/azure/group#az_group_create
-[az-group-delete]: /cli/azure/group#az_group_delete
+[az-group-delete]: /cli/azure/group#az_group_delete
aks Use Azure Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-dedicated-hosts.md
+
+ Title: Use Azure Dedicated Hosts in Azure Kubernetes Service (AKS) (Preview)
+description: Learn how to create an Azure Dedicated Hosts Group and associate it with Azure Kubernetes Service (AKS)
+ Last updated : 02/11/2021
+# Add Azure Dedicated Host to an Azure Kubernetes Service (AKS) cluster
+
+Azure Dedicated Host is a service that provides physical servers - able to host one or more virtual machines - dedicated to one Azure subscription. Dedicated hosts are the same physical servers used in our data centers, provided as a resource. You can provision dedicated hosts within a region, availability zone, and fault domain. Then, you can place VMs directly into your provisioned hosts, in whatever configuration best meets your needs.
+
+Using Azure Dedicated Hosts for nodes with your AKS cluster has the following benefits:
+
+* Hardware isolation at the physical server level. No other VMs will be placed on your hosts. Dedicated hosts are deployed in the same data centers and share the same network and underlying storage infrastructure as other, non-isolated hosts.
+* Control over maintenance events initiated by the Azure platform. While the majority of maintenance events have little to no impact on your virtual machines, there are some sensitive workloads where each second of pause can have an impact. With dedicated hosts, you can opt in to a maintenance window to reduce the impact to your service.
++
+## Before you begin
+
+* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+* [Azure CLI installed](/cli/azure/install-azure-cli).
+
+### Install the `aks-preview` Azure CLI extension
+
+You also need the *aks-preview* Azure CLI extension version 0.5.54 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `DedicatedHostGroupPreview` preview feature
+
+To use the feature, you must also enable the `DedicatedHostGroupPreview` feature flag on your subscription.
+
+Register the `DedicatedHostGroupPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "DedicatedHostGroupPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/DedicatedHostGroupPreview')].{Name:name,State:properties.state}"
+```
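Registration can take a few minutes. The helper below is a sketch (not part of the official docs) that polls `az feature show` until the state reads *Registered*; it assumes the Azure CLI is logged in to the target subscription.

```shell
# Sketch: poll the feature registration state until it reads "Registered".
# Assumes `az` is authenticated against the target subscription.
wait_for_feature() {
  local state=""
  while [ "$state" != "Registered" ]; do
    state=$(az feature show \
      --namespace "Microsoft.ContainerService" \
      --name "DedicatedHostGroupPreview" \
      --query properties.state -o tsv)
    # Back off before polling again if not yet registered.
    [ "$state" = "Registered" ] || sleep 30
  done
  echo "$state"
}
```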
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+## Limitations
+
+The following limitations apply when you integrate Azure Dedicated Host with Azure Kubernetes Service:
+* An existing agent pool can't be converted from non-ADH to ADH, or from ADH to non-ADH.
+* Updating an agent pool from host group A to host group B isn't supported.
+
+## Add a Dedicated Host Group to an AKS cluster
+
+A host group is a resource that represents a collection of dedicated hosts. You create a host group in a region and an availability zone, and add hosts to it. When planning for high availability, there are additional options. You can use one or both of the following options with your dedicated hosts:
+
+* Span across multiple availability zones. In this case, you are required to have a host group in each of the zones you wish to use.
+* Span across multiple fault domains, which are mapped to physical racks.
+
+In either case, you need to provide the fault domain count for your host group. If you don't want to span fault domains in your group, use a fault domain count of 1.
+
+You can also decide to use both availability zones and fault domains.
+
+Not all host SKUs are available in all regions and availability zones. You can list host availability, and any offer restrictions, before you start provisioning dedicated hosts:
+
+```azurecli-interactive
+az vm list-skus -l eastus2 -r hostGroups/hosts -o table
+```
+
+## Add Dedicated Hosts to the Host Group
+
+Now create a dedicated host in the host group. In addition to a name for the host, you are required to provide the SKU for the host. Host SKU captures the supported VM series as well as the hardware generation for your dedicated host.
+
+For more information about the host SKUs and pricing, see [Azure Dedicated Host pricing](https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/).
+
+Use `az vm host create` to create a host. If you set a fault domain count for your host group, you will be asked to specify the fault domain for your host.
+
+In this example, we will use [az vm host group create](/cli/azure/vm/host/group#az_vm_host_group_create?view=azure-cli-latest&preserve-view=true) to create a host group using both availability zones and fault domains.
+
+```azurecli-interactive
+az vm host group create \
+--name myHostGroup \
+-g myDHResourceGroup \
+-z 1 \
+--platform-fault-domain-count 2
+```
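With the host group in place, a dedicated host can be added with `az vm host create`. The following is a sketch only: the host name, SKU (`DSv3-Type1`), and fault domain are example values you should replace with your own.

```azurecli-interactive
az vm host create \
--host-group myHostGroup \
--name myHost \
-g myDHResourceGroup \
--sku DSv3-Type1 \
--platform-fault-domain 0
```

The `--platform-fault-domain` argument is only required when the host group was created with a fault domain count greater than 1.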
+
+## Create an AKS cluster using the Host Group
+Create an AKS cluster, and add the Host Group you just configured.
+
+```azurecli-interactive
+az aks create -g MyResourceGroup -n MyManagedCluster \
+--location westus2 \
+--kubernetes-version 1.20.13 \
+--nodepool-name agentpool1 \
+--node-count 1 \
+--host-group-id <id> \
+--node-vm-size Standard_D2s_v3 \
+--enable-managed-identity \
+--assign-identity <id>
+```
+
+## Add a Dedicated Host Nodepool to an existing AKS cluster
+Add a Host Group to an already existing AKS cluster.
+
+```azurecli-interactive
+az aks nodepool add --cluster-name MyManagedCluster --name agentpool3 --resource-group MyResourceGroup --node-count 1 --host-group-id <id> --node-vm-size Standard_D2s_v3
+```
+
+## Remove a Dedicated Host Nodepool from an AKS cluster
+
+```azurecli-interactive
+az aks nodepool delete --cluster-name MyManagedCluster --name agentpool3 --resource-group MyResourceGroup
+```
+
+## Next steps
+
+In this article, you learned how to create an AKS cluster with a dedicated host, and to add a dedicated host to an existing cluster. For more information about dedicated hosts, see [Azure Dedicated Hosts](../virtual-machines/dedicated-hosts.md).
+
+<!-- LINKS - External -->
+[kubernetes-services]: https://kubernetes.io/docs/concepts/services-networking/service/
+
+<!-- LINKS - Internal -->
+[aks-support-policies]: support-policies.md
+[aks-faq]: faq.md
+[azure-cli-install]: /cli/azure/install-azure-cli
+[dedicated-hosts]: /azure/virtual-machines/dedicated-hosts.md
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
Title: 'Tutorial: Host RESTful API with CORS' description: Learn how Azure App Service helps you host your RESTful APIs with CORS support. App Service can host both front-end web apps and back end APIs. ms.assetid: a820e400-06af-4852-8627-12b3db4a8e70
+ms.devlang: csharp
Last updated 04/28/2020
app-service Configure Language Dotnet Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnet-framework.md
Title: Configure ASP.NET apps description: Learn how to configure an ASP.NET app in Azure App Service. This article shows the most common configuration tasks.
+ms.devlang: csharp
Last updated 06/02/2020
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
Title: Configure ASP.NET Core apps description: Learn how to configure a ASP.NET Core app in the native Windows instances, or in a pre-built Linux container, in Azure App Service. This article shows the most common configuration tasks.
+ms.devlang: csharp
Last updated 06/02/2020
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
Title: Configure PHP apps description: Learn how to configure a PHP app in the native Windows instances, or in a pre-built PHP container, in Azure App Service. This article shows the most common configuration tasks.
+ms.devlang: php
Last updated 06/02/2020
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
description: Learn how to configure the Python container in which web apps are r
Last updated 06/11/2021
+ms.devlang: python
app-service Configure Language Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-ruby.md
description: Learn how to configure a pre-built Ruby container for your app. Thi
Last updated 06/18/2020
+ms.devlang: ruby
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
Title: App Service Environment networking
description: App Service Environment networking details Previously updated : 11/15/2021 Last updated : 02/17/2022
You can set route tables without restriction. You can tunnel all of the outbound
You can put your web application firewall devices, such as Azure Application Gateway, in front of inbound traffic. Doing so exposes specific apps on that App Service Environment. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet.
+## Private endpoint
+
+In order to enable Private Endpoints for apps hosted in your App Service Environment, you must first enable this feature at the App Service Environment level.
+
+You can activate it through the Azure portal: in the App Service Environment configuration pane, turn **on** the setting `Allow new private endpoints`.
+Alternatively, you can enable it with the following CLI command:
+
+```azurecli-interactive
+az appservice ase update --name myasename --allow-new-private-endpoint-connections true
+```
+
+For more information about Private Endpoint and Web App, see [Azure Web App Private Endpoint][privateendpoint]
+
## DNS
The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment.
While App Service Environment does deploy into your virtual network, there are a
## More resources -- [Environment variables and app settings reference](../reference-app-settings.md)
+- [Environment variables and app settings reference](../reference-app-settings.md)
+
+<!--Links-->
+[privateendpoint]: ../networking/private-endpoint.md
+
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md
description: Connect privately to a Web App using Azure Private Endpoint
ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Previously updated : 12/07/2021 Last updated : 02/17/2022
For more information, see [Service Endpoints][serviceendpoint].
A Private Endpoint is a special network interface (NIC) for your Azure Web App in a Subnet in your Virtual Network (VNet). When you create a Private Endpoint for your Web App, it provides secure connectivity between clients on your private network and your Web App. The Private Endpoint is assigned an IP Address from the IP address range of your VNet.
-The connection between the Private Endpoint and the Web App uses a secure [Private Link][privatelink]. Private Endpoint is only used for incoming flows to your Web App. Outgoing flows will not use this Private Endpoint. You can inject outgoing flows to your network in a different subnet through the [VNet integration feature][vnetintegrationfeature].
+The connection between the Private Endpoint and the Web App uses a secure [Private Link][privatelink]. Private Endpoint is only used for incoming flows to your Web App. Outgoing flows won't use this Private Endpoint. You can inject outgoing flows to your network in a different subnet through the [VNet integration feature][vnetintegrationfeature].
-Each slot of an app is configured separately. You can plug up to 100 Private Endpoints per slot. You cannot share a Private Endpoint between slots.
+Each slot of an app is configured separately. You can plug up to 100 Private Endpoints per slot. You can't share a Private Endpoint between slots.
The Subnet where you plug the Private Endpoint can have other resources in it, you don't need a dedicated empty Subnet. You can also deploy the Private Endpoint in a different region than the Web App.
From a security perspective:
- When you enable Private Endpoints to your Web App, you disable all public access.
- You can enable multiple Private Endpoints in other VNets and Subnets, including VNets in other regions.
- The IP address of the Private Endpoint NIC must be dynamic, but will remain the same until you delete the Private Endpoint.
+- The NIC of the Private Endpoint can't have an NSG associated.
+- The Subnet that hosts the Private Endpoint can have an NSG associated, but you must disable the network policies enforcement for the Private Endpoint: see [Disable network policies for private endpoints][disablesecuritype]. As a result, you can't filter by any NSG the access to your Private Endpoint.
+- When you enable Private Endpoint to your Web App, the [access restrictions][accessrestrictions] configuration of the Web App isn't evaluated.
- You can eliminate the data exfiltration risk from the VNet by removing all NSG rules where destination is tag Internet or Azure services.

When you deploy a Private Endpoint for a Web App, you can only reach this specific Web App through the Private Endpoint. If you have another Web App, you must deploy another dedicated Private Endpoint for this other Web App.
-In the Web HTTP logs of your Web App, you will find the client source IP. This feature is implemented using the TCP Proxy protocol, forwarding the client IP property up to the Web App. For more information, see [Getting connection Information using TCP Proxy v2][tcpproxy].
+In the Web HTTP logs of your Web App, you'll find the client source IP. This feature is implemented using the TCP Proxy protocol, forwarding the client IP property up to the Web App. For more information, see [Getting connection Information using TCP Proxy v2][tcpproxy].
> [!div class="mx-imgBorder"]
For example, the name resolution will be:
|mywebapp.azurewebsites.net|CNAME|mywebapp.privatelink.azurewebsites.net| |mywebapp.privatelink.azurewebsites.net|CNAME|clustername.azurewebsites.windows.net| |clustername.azurewebsites.windows.net|CNAME|cloudservicename.cloudapp.net|
-|cloudservicename.cloudapp.net|A|40.122.110.154|<--This public IP is not your Private Endpoint, you will receive a 403 error|
+|cloudservicename.cloudapp.net|A|40.122.110.154|<--This public IP isn't your Private Endpoint, you'll receive a 403 error|
You must set up a private DNS server or an Azure DNS private zone; for tests, you can modify the hosts entry of your test machine. The DNS zone that you need to create is: **privatelink.azurewebsites.net**. Register the record for your Web App with an A record and the Private Endpoint IP.
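With the Azure CLI, the private zone and record can be created along these lines. This is a sketch: the resource group, record name, and IP address are placeholders for your own values.

```azurecli-interactive
az network private-dns zone create \
--resource-group myResourceGroup \
--name privatelink.azurewebsites.net

az network private-dns record-set a add-record \
--resource-group myResourceGroup \
--zone-name privatelink.azurewebsites.net \
--record-set-name mywebapp \
--ipv4-address 10.0.0.5
```

Remember to link the private DNS zone to the VNet of your clients so the name resolution applies.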
For the Kudu console, or Kudu REST API (deployment with Azure DevOps self-hosted
| mywebapp.scm.privatelink.azurewebsites.net | A | PrivateEndpointIP |
-## ASEv3 special consideration
+## App Service Environment v3 special consideration
-In order to enable Private Endpoint for Web App hosted in an IsolatedV2 plan (ASEv3), you have to enable the Private Endpoint support at the ASE level.
-You can activate the feature by the Azure portal in the ASE configuration pane, or through the following CLI:
+In order to enable Private Endpoint for apps hosted in an IsolatedV2 plan (App Service Environment v3), you have to enable the Private Endpoint support at the App Service Environment level.
+You can activate the feature by the Azure portal in the App Service Environment configuration pane, or through the following CLI:
```azurecli-interactive
az appservice ase update --name myasename --allow-new-private-endpoint-connections true
```
+## Specific requirements
+
+If the Virtual Network is in a different subscription than the app, you must ensure that the subscription with the Virtual Network is registered for the Microsoft.Web resource provider. You can explicitly register the provider [by following this documentation][registerprovider], but it will also automatically be registered when creating the first web app in a subscription.
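As a sketch of the explicit registration (the subscription ID is a placeholder), you can run:

```azurecli-interactive
az provider register --namespace Microsoft.Web --subscription <subscription-id>
```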
## Pricing
For pricing details, see [Azure Private Link pricing][pricing].
## Limitations
-* When you use Azure Function in Elastic Premium Plan with Private Endpoint, to run or execute the function in Azure Web portal, you must have direct network access or you will receive an HTTP 403 error. In other words, your browser must be able to reach the Private Endpoint to execute the function from the Azure Web portal.
+* When you use Azure Function in Elastic Premium Plan with Private Endpoint, to run or execute the function in Azure Web portal, you must have direct network access or you'll receive an HTTP 403 error. In other words, your browser must be able to reach the Private Endpoint to execute the function from the Azure Web portal.
* You can connect up to 100 Private Endpoints to a particular Web App.
* Remote Debugging functionality is not available when Private Endpoint is enabled for the Web App. The recommendation is to deploy the code to a slot and remote debug it there.
-* FTP access is provided through the inbound public IP address. Private Endpoint does not support FTP access to the Web App.
-* IP-Based SSL is not supported with Private Endpoints.
+* FTP access is provided through the inbound public IP address. Private Endpoint doesn't support FTP access to the Web App.
+* IP-Based SSL isn't supported with Private Endpoints.
-We are improving Private Link feature and Private Endpoint regularly, check [this article][pllimitations] for up-to-date information about limitations.
+We're improving the Private Link and Private Endpoint features regularly; check [this article][pllimitations] for up-to-date information about limitations.
## Next steps
We are improving Private Link feature and Private Endpoint regularly, check [thi
[howtoguide5]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-privateendpoint-vnet-injection
[howtoguide6]: ../scripts/terraform-secure-backend-frontend.md
[TiP]: ../deploy-staging-slots.md#route-traffic
+[registerprovider]: ../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
description: Deploy your first PHP Hello World to Azure App Service in minutes.
ms.assetid: 6feac128-c728-4491-8b79-962da9a40788 Last updated 05/02/2021
+ms.devlang: php
zone_pivot_groups: app-service-platform-windows-linux
app-service Quickstart Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-ruby.md
keywords: azure app service, linux, oss, ruby, rails
ms.assetid: 6d00c73c-13cb-446f-8926-923db4101afa Last updated 04/27/2021
+ms.devlang: ruby
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md
Last updated 11/02/2021
+ms.devlang: csharp, javascript
#Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
Title: 'Tutorial: Authenticate users E2E' description: Learn how to use App Service authentication and authorization to secure your App Service apps end-to-end, including access to remote APIs. keywords: app service, azure app service, authN, authZ, secure, security, multi-tiered, azure active directory, azure ad
+ms.devlang: csharp
Last updated 09/23/2021
app-service Tutorial Connect Msi Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault.md
Title: 'Tutorial: Connect to Azure services securely with Key Vault' description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively.
+ms.devlang: csharp
Last updated 10/26/2021
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
Title: 'Tutorial: Access data with managed identity' description: Learn how to make database connectivity more secure by using a managed identity, and also how to apply it to other Azure services.
+ms.devlang: csharp
Last updated 01/27/2022
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Last updated 02/04/2022
+ms.devlang: csharp
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
Title: 'Tutorial: Linux Java app with MongoDB'
description: Learn how to get a data-driven Linux Java app working in Azure App Service, with connection to a MongoDB running in Azure (Cosmos DB).
+ms.devlang: java
Last updated 12/10/2018
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md
Last updated 01/31/2022 ms.role: developer
+ms.devlang: javascript
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
Title: 'Tutorial: PHP app with MySQL'
description: Learn how to get a PHP app working in Azure, with connection to a MySQL database in Azure. Laravel is used in the tutorial. ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73
+ms.devlang: php
Last updated 06/15/2020
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Title: 'Tutorial: Deploy a Python Django app with Postgres' description: Create a Python web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the Django framework and the app is hosted on Azure App Service on Linux.
+ms.devlang: python
Last updated 11/30/2021
app-service Tutorial Ruby Postgres App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-ruby-postgres-app.md
Title: 'Tutorial: Linux Ruby app with Postgres' description: Learn how to get a Linux Ruby app working in Azure App Service, with connection to a PostgreSQL database in Azure. Rails is used in the tutorial.
+ms.devlang: ruby
Last updated 06/18/2020
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-pr
* Train a custom model: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-v3-form-recognizer-studio.md#custom-models)
+ > [How to train a model](how-to-guides/build-custom-model-v3.md)
+
+* Learn more about custom template models:
+
+ > [!div class="nextstepaction"]
+ > [Custom template models](concept-custom-template.md)
* View the REST API:
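The `documentModels:build` call shown above takes a small JSON body naming the model, the build mode, and the blob container holding the training data. The following sketch constructs that body; the field names follow my reading of the v3 preview REST reference, so verify them against the published API docs, and the model ID and SAS URL are placeholders:

```python
import json

def build_request_body(model_id: str, build_mode: str, container_sas_url: str) -> str:
    """Body for POST {endpoint}/formrecognizer/documentModels:build.

    build_mode is "template" or "neural"; the container URL must include a
    SAS token granting read/list access to the training dataset.
    """
    return json.dumps({
        "modelId": model_id,
        "buildMode": build_mode,
        "azureBlobSource": {"containerUrl": container_sas_url},
    })

# Placeholder values for illustration only.
body = build_request_body("my-custom-model", "neural",
                          "https://<account>.blob.core.windows.net/<container>?<sas>")
```

The request is asynchronous: the service returns an `Operation-Location` header you poll for the build status.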
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-pr
## Next steps
-* Train a custom template model:
+* Train a custom model:
> [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [How to train a model](how-to-guides/build-custom-model-v3.md)
* Learn more about custom neural models:
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
To create a custom model, you label a dataset of documents with the values you w
## Custom model types
-Custom models can be one of two types, [**custom template**](concept-custom-template.md ) or [**custom neural**](concept-custom-neural.md) models. The labeling and training process for both models is identical, but the models differ as follows:
+Custom models can be one of two types, [**custom template**](concept-custom-template.md) (custom form) or [**custom neural**](concept-custom-neural.md) (custom document) models. The labeling and training process for both models is identical, but the models differ as follows:
### Custom template model
- The custom template model relies on a consistent visual template to extract the labeled data. The accuracy of your model is affected by variances in the visual structure of your documents. Questionnaires or application forms are examples of consistent visual templates.Your training set will consist of structured documents where the formatting and layout are static and constant from one document instance to the next. Custom template models support key-value pairs, selection marks, tables, signature fields and regions and can be trained on documents in any of the [supported languages](language-support.md). For more information, *see* [custom template models](concept-custom-template.md ).
+ The custom template or custom form model relies on a consistent visual template to extract the labeled data. The accuracy of your model is affected by variances in the visual structure of your documents. Structured forms such as questionnaires or applications are examples of consistent visual templates. Your training set will consist of structured documents where the formatting and layout are static and constant from one document instance to the next. Custom template models support key-value pairs, selection marks, tables, signature fields, and regions, and can be trained on documents in any of the [supported languages](language-support.md). For more information, *see* [custom template models](concept-custom-template.md).
> [!TIP] >
Custom models can be one of two types, [**custom template**](concept-custom-temp
### Custom neural model
-The custom neural model is a deep learning model type relies on a base model trained on a large collection of labeled documents using key-value pairs. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
+The custom neural (custom document) model is a deep learning model type that relies on a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
## Model features
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources | |-|-|
-|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/try-v3-csharp-sdk.md)</li><li>[Python SDK](quickstarts/try-v3-python-sdk.md)</li></ul>|
+|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/try-v3-csharp-sdk.md)</li><li>[Python SDK](quickstarts/try-v3-python-sdk.md)</li></ul>|
### Try Form Recognizer
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Azure Form Recognizer prebuilt models enable you to add intelligent document pro
| [Invoice](#invoice) | Extract key information from English and Spanish invoices. | | [Receipt](#receipt) | Extract key information from English receipts. | | [ID document](#id-document) | Extract key information from US driver licenses and international passports. |
+| 🆕 [W-2 (preview)](#w-2-preview) | Extract employee, employer, and wage information from US W-2 forms. |
| [Business card](#business-card) | Extract key information from English business cards. | | [Custom](#custom) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
The ID document model analyzes and extracts key information from U.S. Driver's L
> [!div class="nextstepaction"] > [Learn more: identity document model](concept-id-document.md)
+### W-2 (preview)
++
+The W-2 model analyzes and extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including both single form and multiple forms (copy A, B, C, D, 1, 2) on one page.
+
+***Sample W-2 document processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: W-2 model](concept-w2.md)
+ ### Business card :::image type="content" source="media/studio/business-card.png" alt-text="Screenshot: Studio business card icon.":::
The custom model analyzes and extracts data from forms and documents specific to
| Layout | ✓ | || ✓ | ✓ | | | Invoice | ✓ | ✓ |✓| ✓ | ✓ || |Receipt | ✓ | ✓ |✓| | ||
- | ID document | Γ£ô | Γ£ô |Γ£ô| | ||
+ | ID document | ✓ | ✓ |✓| | ||
+ |🆕W-2 | ✓ | ✓ | ✓ | ✓ | ✓ ||
| Business card | ✓ | ✓ | ✓| | || | Custom |✓ | ✓ || ✓ | ✓ | ✓ |
The custom model analyzes and extracts data from forms and documents specific to
* [**General document (preview)**](concept-general-document.md) model is a new API that uses a pre-trained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents. * [**Receipt (preview)**](concept-receipt.md) model supports single-page hotel receipt processing. * [**ID document (preview)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
+* [**W-2 (preview)**](concept-w2.md) model supports extraction of employee, employer, and wage information from US W-2 forms.
* [**Custom model API (preview)**](concept-custom.md) supports signature detection for custom forms. ### Version migration
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
recommendations: false
-# Form Recognizer W-2 Form prebuilt model | Preview
+# Form Recognizer W-2 model | Preview
-The Form W-2, Wage and Tax Statement, is a US Internal Revenue Service (IRS) tax form completed by employers to report employees' salary, wages, compensation, and taxes withheld. Employers send a W-2 form to each employee on or before January 31 each year and employees use the form to prepare their tax returns.
+The Form W-2, Wage and Tax Statement, is a [US Internal Revenue Service (IRS) tax form](https://www.irs.gov/forms-pubs/about-form-w-2) completed by employers to report employees' salary, wages, compensation, and taxes withheld. Employers send a W-2 form to each employee on or before January 31 each year, and employees use the form to prepare their tax returns. The W-2 is a key document in employees' federal and state tax filings, as well as in other processes such as mortgage loan applications and Social Security Administration (SSA) reporting.
-A W-2 is a multipart form divided into state and federal sections:
-
-* Copy A is sent to the Social Security Administration.
-* Copy 1 is for the city, state, or locality tax assessment.
-* Copy B is for filing with the employee's federal tax return.
-* Copy C is for the employee's records.
-* Copy 2 is another copy for a city, state, or locality tax assessment.
-* Copy D is for the employer's records.
-
-Each W-2 Form consists of more than 14 boxes, both numbered and lettered, that detail the employee's income from the previous year. The Form Recognizer **prebuilt-tax**, Form W-2 model, combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including both single form and multiple forms (copy A, B, C, D, 1, 2) on one page.
+A W-2 is a multipart form divided into state and federal sections and consists of more than 14 boxes, both numbered and lettered, that detail the employee's income from the previous year. The Form Recognizer W-2 model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including both single form and multiple forms ([copy A, B, C, D, 1, 2](https://en.wikipedia.org/wiki/Form_W-2#Filing_requirements)) on one page.
***Sample W-2 form processed using Form Recognizer Studio***
See how data, including employee, employer, wage, and tax information is extract
1. On the [Form Recognizer Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **W-2 form**.
-1. You can analyze the sample invoice or select the **Γ₧ò Add** button to upload your own sample.
+1. You can analyze the sample W-2 document or select the **➕ Add** button to upload your own sample.
1. Select the **Analyze** button:
See how data, including employee, employer, wage, and tax information is extract
|Name| Box | Type | Description | Standardized output| |:--|:-|:-|:-|:-|
-| Employee.SocialSecurityNumber | a | String | Employee's Social Security N number (SSN). | 123-45-6789 |
+| Employee.SocialSecurityNumber | a | String | Employee's Social Security Number (SSN). | 123-45-6789 |
| Employer.IdNumber | b | String | Employer's ID number (EIN), the business equivalent of a social security number.| 12-1234567 | | Employer.Name | c | String | Employer's name. | Contoso | | Employer.Address | c | String | Employer's address (with city). | 123 Example Street Sample City, CA |
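The standardized outputs in the field table above lend themselves to simple downstream validation. As one example, a sketch that checks an extracted `Employee.SocialSecurityNumber` value against the standardized 123-45-6789 shape (a hypothetical helper, not part of the service):

```python
import re

# Standardized output shape from the field table above.
SSN_PATTERN = re.compile(r"\d{3}-\d{2}-\d{4}")

def normalize_ssn(raw: str) -> str:
    """Trim whitespace and validate an extracted SSN value."""
    value = raw.strip()
    if not SSN_PATTERN.fullmatch(value):
        raise ValueError(f"unexpected SSN format: {value!r}")
    return value
```

Similar checks can be applied to the EIN (12-1234567) and other standardized fields.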
applied-ai-services Build Custom Model V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-custom-model-v3.md
+
+ Title: "Train a custom model in the Form Recognizer Studio"
+
+description: Learn how to build, label, and train a custom model in the Form Recognizer Studio.
+++++ Last updated : 02/16/2022+++
+# Build your training data set for a custom model
+
+Form Recognizer custom models require as few as five training documents to get started. If you have at least five documents, you can train either a [custom template model (custom form)](../concept-custom-template.md) or a [custom neural model (custom document)](../concept-custom-neural.md). The training process is identical for both models, and this document walks you through training either one.
+
+## Custom model input requirements
+
+First, make sure your training data set follows the input requirements for Form Recognizer.
++
+## Training data tips
+
+Follow these tips to further optimize your data set for training:
+
+* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+* For forms with input fields, use examples that have all of the fields completed.
+* Use forms with different values in each field.
+* If your form images are of lower quality, use a larger data set (10-15 images, for example).
+
+## Upload your training data
+
+When you've put together the set of forms or documents that you'll use for training, you'll need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
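Before uploading, you can sanity-check the training set locally against the five-document minimum. A minimal sketch (the file-type filter is an assumption; the input requirements above are authoritative):

```python
from pathlib import Path

# Assumed set of accepted training document types; verify against the
# input requirements for your API version.
TRAINING_EXTENSIONS = {".pdf", ".jpg", ".jpeg", ".png", ".tiff", ".bmp"}

def collect_training_files(folder: str) -> list[Path]:
    """Gather candidate training documents and enforce the five-document minimum."""
    files = sorted(p for p in Path(folder).rglob("*")
                   if p.suffix.lower() in TRAINING_EXTENSIONS)
    if len(files) < 5:
        raise ValueError(f"need at least 5 training documents, found {len(files)}")
    return files
```

The upload itself can then be done through the portal, Azure Storage Explorer, or the `azure-storage-blob` package, as covered in the linked quickstart.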
+
+## Create a project in the Form Recognizer Studio
+
+The Form Recognizer Studio orchestrates all the API calls required to create the files that complete your dataset and train your model.
+
+1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). If this is your first time using the Studio, you'll need to [initialize it for use](../quickstarts/try-v3-form-recognizer-studio.md). Follow the [additional prerequisite for custom projects](../quickstarts/try-v3-form-recognizer-studio.md#additional-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
+
+1. In the Studio, select the **Custom models** tile. On the custom models page, select the **Create a project** button.
+
+ :::image type="content" source="../media/how-to/studio-create-project.png" alt-text="Screenshot: Create a project in the Form Recognizer Studio.":::
+
+ 1. On the create project dialog, provide a name for your project, optionally add a description, and select **Continue**.
+
+ 1. On the next step in the workflow, choose or create a Form Recognizer resource before you select **Continue**.
+
+ > [!IMPORTANT]
+ > Custom neural models are only available in a few regions. If you plan on training a neural model, select or create a resource in one of [these supported regions](https://aka.ms/fr-neural#supported-regions).
+
+ :::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot: Select the Form Recognizer resource.":::
+
+1. Next, select the storage account where you uploaded the dataset you wish to use to train your custom model. The **Folder path** should be empty if your training documents are in the root of the container. If your documents are in a sub-folder, enter the relative path from the container root in the **Folder path** field. Once your storage account is configured, select **Continue**.
+
+ :::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot: Select the storage account.":::
+
+1. Finally, review your project settings and select **Create Project** to create a new project. You should now be in the labeling window and see the files in your dataset listed.
+
+## Label your data
+
+In your project, your first task is to label your dataset with the fields you wish to extract.
+
+You'll see the files you uploaded to storage on the left of your screen, with the first file ready to be labeled.
+
+1. To start labeling your dataset, create your first field by selecting the plus (➕) button on the top-right of the screen to select a field type.
+
+ :::image type="content" source="../media/how-to/studio-create-label.png" alt-text="Screenshot: Create a label.":::
+
+1. Enter a name for the field.
+
+1. To assign a value to the field, choose a word or words in the document and select the field in either the dropdown or the field list on the right navigation bar. You'll see the labeled value below the field name in the list of fields.
+
+1. Repeat this process for all the fields you wish to label for your dataset.
+
+1. Label the remaining documents in your dataset by selecting each document in the document list and selecting the text to be labeled.
+
+You now have all the documents in your dataset labeled. If you look at the storage account, you'll find *.labels.json* and *.ocr.json* files that correspond to each document in your training dataset, along with an additional *fields.json* file. This is the training dataset that will be submitted to train the model.
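You can verify those companion files from outside the Studio with a short sketch. The `<document-name>.labels.json` / `<document-name>.ocr.json` naming convention is an assumption based on how the Studio stores labels alongside each document; adjust if your container differs:

```python
from pathlib import Path

# Assumed set of training document types; .json companions are excluded by suffix.
DOCUMENT_EXTENSIONS = {".pdf", ".jpg", ".jpeg", ".png", ".tiff"}

def missing_label_files(folder: str) -> list[str]:
    """Report training documents missing a companion .labels.json or .ocr.json file."""
    root = Path(folder)
    missing = []
    for doc in sorted(root.iterdir()):
        if doc.suffix.lower() not in DOCUMENT_EXTENSIONS:
            continue
        for suffix in (".labels.json", ".ocr.json"):
            companion = root / (doc.name + suffix)
            if not companion.exists():
                missing.append(companion.name)
    return missing
```

An empty result means every document in the folder has both companion files and the dataset is ready to submit for training.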
+
+## Train your model
+
+With your dataset labeled, you're now ready to train your model. Select the **Train** button in the upper-right corner.
+
+1. On the train model dialog, provide a unique model ID and, optionally, a description.
+
+1. For the build mode, select the type of model you want to train. Learn more about the [model types and capabilities](../concept-custom.md).
+
+ :::image type="content" source="../media/how-to/studio-train-model.png" alt-text="Screenshot: Train model dialog":::
+
+1. Select **Train** to initiate the training process.
+
+1. Template models train in a few minutes. Neural models can take up to 30 minutes to train.
+
+1. Navigate to the *Models* menu to view the status of the train operation.
+
+## Test the model
+
+Once the model training is complete, you can test your model by selecting the model on the models list page.
+
+1. Select the model, and then select the **Test** button.
+
+1. Select the `+ Add` button to select a file to test the model.
+
+1. With a file selected, choose the **Analyze** button to test the model.
+
+1. The model results are displayed in the main window and the fields extracted are listed in the right navigation bar.
+
+1. Validate your model by evaluating the results for each field.
+
+1. The right navigation bar also has the sample code to invoke your model and the JSON results from the API.
+
+Congratulations, you've trained a custom model in the Form Recognizer Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
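If you test through the REST API instead of the Studio, extracted fields come back nested under `analyzeResult.documents[*].fields`. A sketch of flattening the confident values from such a response, with the response shape assumed from the v3 preview reference:

```python
def extract_confident_fields(analyze_result: dict, min_confidence: float = 0.8) -> dict:
    """Flatten labeled field values from a v3 analyze response,
    keeping only predictions at or above the confidence threshold."""
    fields = {}
    for document in analyze_result.get("documents", []):
        for name, field in document.get("fields", {}).items():
            if field.get("confidence", 0.0) >= min_confidence:
                fields[name] = field.get("content")
    return fields
```

Lowering `min_confidence` trades precision for coverage; evaluating each field's results, as described above, helps you pick a threshold for your documents.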
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about custom model types](../concept-custom.md)
+
+> [!div class="nextstepaction"]
+> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
The following features and development options are supported by the Form Recogn
|[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-general-document-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> | |[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs, and named entities.|<ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-general-document-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> | |[**Layout model**](concept-layout.md) | Extract text, selection marks, and table structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-layout-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
-|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom forms**.</li></ul>| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
+|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.<ul><li>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</li><li>Custom model API v3.0 offers a new model type, **custom neural** (custom document), to analyze unstructured documents.</li></ul>| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>| |[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>| |[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. 
|<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
applied-ai-services Preview Error Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/preview-error-guide.md
When possible, more details are specified in the *inner error* property.
| InvalidRequest | ContentSourceNotAccessible | Content is not accessible: {details} | | InvalidRequest | ContentSourceTimeout | Timeout while receiving the file from client. | | InvalidRequest | DocumentModelLimit | Account cannot create more than {maximumModels} models. |
+| InvalidRequest | DocumentModelLimitNeural | Account cannot create more than 10 custom neural models per month. Please contact support to request additional capacity. |
| InvalidRequest | DocumentModelLimitComposed | Account cannot create a model with more than {details} component models. | | InvalidRequest | InvalidContent | The file is corrupted or format is unsupported. Refer to documentation for the list of supported formats. | | InvalidRequest | InvalidContentDimensions | The input image dimensions are out of range. Refer to documentation for supported image dimensions. |
When possible, more details are specified in the *inner error* property.
| InvalidRequest | TrainingContentMissing | Training data is missing: {details} | | InvalidRequest | UnsupportedContent | Content is not supported: {details} | | NotFound | ModelNotFound | The requested model was not found. It may have been deleted or is still building. |
-| NotFound | OperationNotFound | The requested operation was not found. The identifier may be invalid or the operation may have expired. |
+| NotFound | OperationNotFound | The requested operation was not found. The identifier may be invalid or the operation may have expired. |
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
In this quickstart, you'll use following features to analyze and extract data an
* [**Layout model**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
-* [**Prebuilt model**](#prebuilt-model)ΓÇöAnalyze and extract common fields from specific document types using a pre-trained model.
+* [**Prebuilt model**](#prebuilt-model)—Analyze and extract common fields from specific document types using a prebuilt model.
## Prerequisites
This version of the client library defaults to the 2021-09-30-preview version of
:::image type="content" source="../media/quickstarts/select-nuget-package.png" alt-text="Screenshot: select-nuget-package.png":::
- 1. Select the Browse tab and type Azure.AI.FormRecognizer.
+ 1. Select the Browse tab and type Azure.AI.FormRecognizer.
:::image type="content" source="../media/quickstarts/azure-nuget-package.png" alt-text="Screenshot: select-form-recognizer-package.png":::
- 1. Choose the **Include prerelease** checkbox and select version **4.0.0-beta.*** from the dropdown menu.
-
- 1. Select **Install**.
-
- :::image type="content" source="../media/quickstarts/prerelease-nuget-package.png" alt-text="{alt-text}":::
-
+ 1. Choose the **Include prerelease** checkbox, select version **4.0.0-beta.3** from the dropdown menu, and install the package in your project.
<!-- --> ## Build your application
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
In this quickstart you'll use following features to analyze and extract data and
> [!TIP] >
- > * You can create a new file using Powershell.
- > * Open a Powershell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
> * Type the following command **New-Item index.js**. 1. Open the `index.js` file in Visual Studio Code or your favorite IDE. First, we'll add the necessary libraries. Copy the following and paste it at the top of the file:
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
To learn more about Form Recognizer features and development options, visit our
The REST API supports the following models and capabilities:
-* 🆕General document—Analyze and extract text, tables, structure, key-value pairs, and named entities.|
-* 🆕 W-2 Tax Forms—Analyze and extract fields from W-2 tax documents, using a pre-trained W-2 model.
+* 🆕General document—Analyze and extract text, tables, structure, key-value pairs, and named entities.
+* 🆕 W-2—Analyze and extract fields from W-2 tax documents, using a pre-trained W-2 model.
* Layout—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model. * Custom—Analyze and extract form fields and other content from your custom forms, using models you trained with your own form types. * Invoices—Analyze and extract common fields from invoices, using a pre-trained invoice model.
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
Form Recognizer v3.0 preview release introduces several new features and capabilities and enhances existing ones:
+* [🆕 **Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-structured, and **unstructured documents**.
+* [🆕 **W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
+* [🆕 **Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
+* [**General document**](concept-general-document.md) pre-trained model is now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents.
+* [**Invoice API**](language-support.md#invoice-model) Invoice prebuilt model expands support to Spanish invoices.
* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
-* [🆕 **W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 tax documents.
-* [🆕 **Read**](concept-read.md) API extracts text lines, words, their locations, detected languages, and handwritten style if detected.
-* [🆕 **Custom neural model**](concept-custom-neural.md) is a new custom model to extract text and selection marks from structured forms and **unstructured documents**.
* [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten support for the same features expands to Japanese and Korean in addition to English, Chinese Simplified, French, German, Italian, Portuguese, and Spanish languages.
-* [**Invoice API**](language-support.md#invoice-model) Invoice API expands support to Spanish invoices.
-* [**General document**](concept-general-document.md) pre-trained model now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents.
-Get stared with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/try-v3-python-sdk.md), or [.NET](quickstarts/try-v3-csharp-sdk.md) SDK for the v3.0 preview API.
+Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/try-v3-python-sdk.md), or [.NET](quickstarts/try-v3-csharp-sdk.md) SDK for the v3.0 preview API.
#### Form Recognizer model data extraction
applied-ai-services Tutorial Ios Picture Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md
You need some values from the Azure AD authentication configuration prerequisite
TenantId => Azure subscription TenantId ClientId => Azure AD ApplicationId ClientSecret => Azure AD Application Service Principal password
-Subdomain => Immersive Reader resource subdomain (resource 'Name' if the resource was created in the Azure portal, or 'CustomSubDomain' option if the resource was created with Azure CLI Powershell. Check the Azure portal for the subdomain on the Endpoint in the resource Overview page, for example, 'https://[SUBDOMAIN].cognitiveservices.azure.com/')
+Subdomain => Immersive Reader resource subdomain (resource 'Name' if the resource was created in the Azure portal, or 'CustomSubDomain' option if the resource was created with Azure CLI PowerShell. Check the Azure portal for the subdomain on the Endpoint in the resource Overview page, for example, 'https://[SUBDOMAIN].cognitiveservices.azure.com/')
```` In the main project folder, which contains the ViewController.swift file, create a Swift class file called Constants.swift. Replace the class with the following code, adding in your values where applicable. Keep this file as a local file that only exists on your machine and be sure not to commit this file into source control, as it contains secrets that should not be made public. It is recommended that you do not keep secrets in your app. Instead, we recommend using a backend service to obtain the token, where the secrets can be kept outside of the app and off of the device. The backend API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write | Creates a Hybrid Runbook Worker Group. Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes a Hybrid Runbook Worker Group. Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/read | Reads a Hybrid Runbook Worker.
-Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/write | Creates a Hybrid Runbook Worker.
-Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action | Moves Hybrid Runbook Worker from one Worker Group to another.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete| Deletes a Hybrid Runbook Worker.
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-role-based-access-control.md
In Azure Automation, access is granted by assigning the appropriate Azure role t
## Role permissions
-The following tables describe the specific permissions given to each role. This can include Actions, which give permissions, and NotActions, which restrict them.
+The following tables describe the specific permissions given to each role. This can include Actions, which give permissions, and Not Actions, which restrict them.
### Owner
An Owner can manage everything, including access. The following table shows the
|Actions|Description| |||
-|Microsoft.Automation/automationAccounts/|Create and manage resources of all types.|
+|Microsoft.Automation/automationAccounts/*|Create and manage resources of all types.|
### Contributor
A Contributor can manage everything except access. The following table shows the
|**Actions** |**Description** | |||
-|Microsoft.Automation/automationAccounts/|Create and manage resources of all types|
+|Microsoft.Automation/automationAccounts/*|Create and manage resources of all types|
|**Not Actions**|| |Microsoft.Authorization/*/Delete| Delete roles and role assignments. | |Microsoft.Authorization/*/Write | Create roles and role assignments. |
A Contributor can manage everything except access. The following table shows the
### Reader
+>[!Note]
+> We have recently made a change in the built-in Reader role permission for the Automation account. [Learn more](#reader-role-access-permissions)
+ A Reader can view all the resources in an Automation account but can't make any changes. |**Actions** |**Description** | ||| |Microsoft.Automation/automationAccounts/read|View all resources in an Automation account. | + ### Automation Contributor An Automation Contributor can manage all resources in the Automation account except access. The following table shows the permissions granted for the role: |**Actions** |**Description** | |||
+|[Microsoft.Automation](/azure/role-based-access-control/resource-provider-operations#microsoftautomation)/automationAccounts/* | Create and manage resources of all types.|
|Microsoft.Authorization/*/read|Read roles and role assignments.| |Microsoft.Resources/deployments/*|Create and manage resource group deployments.| |Microsoft.Resources/subscriptions/resourceGroups/read|Read resource group deployments.|
The following table shows the permissions granted for the role:
|Microsoft.Resources/deployments/* |Create and manage resource group deployments. | |Microsoft.Insights/alertRules/* | Create and manage alert rules. | |Microsoft.Support/* |Create and manage support tickets.|
+|[Microsoft.ResourceHealth](/azure/role-based-access-control/resource-provider-operations#microsoftresourcehealth)/availabilityStatuses/read| Gets the availability statuses for all resources in the specified scope.|
### Automation Job Operator
The following table shows the permissions granted for the role:
|Microsoft.Resources/deployments/* |Create and manage resource group deployments. | |Microsoft.Insights/alertRules/* | Create and manage alert rules. | |Microsoft.Support/* |Create and manage support tickets.|
+|Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a Hybrid Runbook Worker Group.|
+|Microsoft.Automation/automationAccounts/jobs/output/read | Gets the output of a job.|
### Automation Runbook Operator
A Log Analytics Contributor can read all monitoring data and edit monitoring set
|Microsoft.Resources/subscriptions/resourcegroups/deployments/*|Create and manage resource group deployments.| |Microsoft.Storage/storageAccounts/listKeys/action|List storage account keys.| |Microsoft.Support/*|Create and manage support tickets.|
+|Microsoft.HybridCompute/machines/extensions/write| Installs or Updates an Azure Arc extensions.|
### Log Analytics Reader
A User Access Administrator can manage user access to Azure resources. The follo
|Microsoft.Authorization/*|Manage authorization| |Microsoft.Support/*|Create and manage support tickets| +
+## Reader role access permissions
+
+>[!Important]
+> To strengthen the overall Azure Automation security posture, the built-in RBAC Reader role doesn't have access to Automation account keys through the API call `GET /AUTOMATIONACCOUNTS/AGENTREGISTRATIONINFORMATION`.
+
+The built-in Reader role for the Automation account can't use the API `GET /AUTOMATIONACCOUNTS/AGENTREGISTRATIONINFORMATION` to fetch the Automation account keys. Fetching the keys is a high-privilege operation that exposes sensitive information: a malicious actor with low privileges could obtain the account keys and perform actions at an elevated privilege level.
+
+To access the `GET /AUTOMATIONACCOUNTS/AGENTREGISTRATIONINFORMATION` API, switch to a built-in role such as Owner, Contributor, or Automation Contributor; these roles have the *listKeys* permission by default. As a best practice, create a custom role with limited permissions to access the Automation account keys. For a custom role, add the
+`Microsoft.Automation/automationAccounts/listKeys/action` permission to the role definition.
+[Learn more](/azure/role-based-access-control/custom-roles) about how to create a custom role from the Azure portal.
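As an illustrative sketch, a custom role definition granting only the *listKeys* permission might look like the following JSON; the role name, description, and subscription placeholder are hypothetical, not values from this article:

```json
{
  "Name": "Automation Account Keys Reader (example)",
  "IsCustom": true,
  "Description": "Can list Automation account keys only.",
  "Actions": [
    "Microsoft.Automation/automationAccounts/listKeys/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscriptionId>"
  ]
}
```

A definition like this can be created from a JSON file, for example with `New-AzRoleDefinition -InputFile <path>`, as described in the custom-roles documentation linked above.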
+
## Feature setup permissions The following sections describe the minimum required permissions needed for enabling the Update Management and Change Tracking and Inventory features.
Perform the following steps to create the Azure Automation custom role with Powe
1. Complete the remaining steps as outlined in [Create or update Azure custom roles using Azure PowerShell](./../role-based-access-control/custom-roles-powershell.md#create-a-custom-role-with-json-template). It can take a few minutes for your custom role to appear everywhere.
+## Manage Role permissions for Hybrid Worker Groups and Hybrid Workers
+
+You can create [Azure custom roles](/azure/role-based-access-control/custom-roles) in Automation and grant the following permissions to Hybrid Worker Groups and Hybrid Workers:
+
+- [Extension-based Hybrid Runbook Worker](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)
+- [Agent-based Windows Hybrid Runbook Worker](/azure/automation/automation-windows-hrw-install#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)
+ - [Agent-based Linux Hybrid Runbook Worker](/azure/automation/automation-linux-hrw-install#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)
++ ## Update Management permissions Update Management can be used to assess and schedule update deployments to machines in multiple subscriptions in the same Azure Active Directory (Azure AD) tenant, or across tenants using Azure Lighthouse. The following table lists the permissions needed to manage update deployments.
automation Automation Security Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md
+
+ Title: Azure Automation security guidelines
+description: This article helps you with the guidelines that Azure Automation offers to ensure data privacy and data security.
++ Last updated : 02/16/2022+++
+# Best practices for security in Azure Automation
+
+This article details the best practices for executing automation jobs securely.
+[Azure Automation](/azure/automation/overview) provides a platform to orchestrate frequent, time-consuming, error-prone infrastructure management and operational tasks, as well as mission-critical operations. This service allows you to execute scripts, known as runbooks, seamlessly across cloud and hybrid environments.
+
+The platform components of Azure Automation Service are actively secured and hardened. The service goes through robust security and compliance checks. [Azure security benchmark](/security/benchmark/azure/overview) details the best practices and recommendations to help improve the security of workloads, data, and services on Azure. Also see [Azure security baseline for Azure Automation](/security/benchmark/azure/baselines/automation-security-baseline?toc=/azure/automation/TOC.json).
+
+## Secure configuration of Automation account
+
+This section guides you in configuring your Automation account securely.
+
+### Permissions
+
+1. Follow the principle of least privilege when granting access to Automation resources. Implement [Automation granular RBAC roles](/azure/automation/automation-role-based-access-control) and avoid assigning broader roles or scopes, such as the subscription level. When creating custom roles, include only the permissions users need. By limiting roles and scopes, you limit the resources that are at risk if the security principal is ever compromised. For detailed information on role-based access control concepts, see [Azure role-based access control best practices](/azure/role-based-access-control/best-practices).
+
+1. Avoid roles that include Actions having a wildcard (_*_) as it implies full access to the Automation resource or a sub-resource, for example _automationaccounts/*/read_. Instead, use specific actions only for the required permission.
+
+1. Configure [Role based access at a runbook level](/azure/automation/automation-role-based-access-control) if the user doesn't require access to all the runbooks in the Automation account.
+
+1. Limit the number of highly privileged roles such as Automation Contributor to reduce the potential for breach by a compromised owner.
+
+1. Use [Azure AD Privileged Identity Management](/azure/active-directory/roles/security-planning#use-azure-ad-privileged-identity-management) to protect privileged accounts from malicious cyber-attacks and to increase your visibility into their use through reports and alerts.
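To illustrate the guidance above about avoiding wildcard actions, a custom role can list only the specific actions a user needs. A minimal sketch, assuming a user who only needs to view runbooks and jobs; the role name, chosen actions, and scope are illustrative placeholders:

```json
{
  "Name": "Automation Runbook Reader (example)",
  "IsCustom": true,
  "Description": "Read access to runbooks and jobs, without wildcard actions.",
  "Actions": [
    "Microsoft.Automation/automationAccounts/read",
    "Microsoft.Automation/automationAccounts/runbooks/read",
    "Microsoft.Automation/automationAccounts/jobs/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>"
  ]
}
```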
+
+### Securing Hybrid Runbook worker role
+
+1. Install Hybrid workers using the [Hybrid Runbook Worker VM extension](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows), which doesn't have any dependency on the Log Analytics agent. We recommend this platform as it uses Azure AD-based authentication.
+ The [Hybrid Runbook Worker](/azure/automation/automation-hrw-run-runbooks) feature of Azure Automation allows you to run runbooks directly on the Azure or non-Azure machine hosting the role, to execute Automation jobs in the local environment.
+ - Restrict operations such as registering or unregistering Hybrid workers and hybrid groups, and executing runbooks against Hybrid Runbook Worker groups, to high-privilege users or to users assigned [Hybrid worker custom roles](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#manage-role-permissions-for-hybrid-worker-groups).
+ - The same user also requires VM Contributor access on the machine hosting the Hybrid worker role. Because VM Contributor is a high-privilege role, ensure that only a limited set of users can manage Hybrid workers, thereby reducing the potential for breach by a compromised owner.
+
+ Follow the [Azure RBAC best practices](/azure/role-based-access-control/best-practices).
+
+1. Follow the principle of least privilege and grant users only the permissions required to execute runbooks against a Hybrid worker. Don't provide unrestricted permissions to the machine hosting the hybrid runbook worker role. With unrestricted access, a user with VM Contributor rights, or with permissions to run commands against the hybrid worker machine, can use the Automation account Run As certificate from the hybrid worker machine, potentially gaining access as a subscription contributor. This could jeopardize the security of your Azure environment.
+ Use [Hybrid worker custom roles](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#manage-role-permissions-for-hybrid-worker-groups) for users responsible for managing Automation runbooks against Hybrid runbook workers and Hybrid runbook worker groups.
+
+1. [Unregister](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#delete-a-hybrid-runbook-worker) any unused or non-responsive hybrid workers.
+
+### Authentication certificate and identities
+
+1. For runbook authentication, we recommend that you use [Managed identities](/azure/automation/automation-security-overview#managed-identities) instead of Run As accounts. Run As accounts are an administrative overhead, and we plan to deprecate them. A managed identity from Azure Active Directory (Azure AD) allows your runbook to easily access other Azure AD-protected resources such as Azure Key Vault. The identity is managed by the Azure platform and does not require you to provision or rotate any secrets. For more information about managed identities in Azure Automation, see [Managed identities for Azure Automation](/azure/automation/automation-security-overview#managed-identities).
+
+ You can authenticate an Automation account using two types of managed identities:
+ - **System-assigned identity** is tied to your application and is deleted if your app is deleted. An app can only have one system-assigned identity.
+ - **User-assigned identity** is a standalone Azure resource that can be assigned to your app. An app can have multiple user-assigned identities.
+
+ Follow the [Managed identity best practice recommendations](/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations#choosing-system-or-user-assigned-managed-identities) for more details.
+
+1. If you use Run As accounts as the authentication mechanism for your runbooks, ensure the following:
+ - Track the service principals in your inventory. Service principals often have elevated permissions.
+ - Delete any unused Run As accounts to minimize your exposed attack surface.
+ - [Renew the Run As certificate](/azure/automation/manage-runas-account#cert-renewal) periodically.
+ - Follow the RBAC guidelines to limit the permissions assigned to Run As account using this [script](/azure/automation/manage-runas-account#limit-run-as-account-permissions). Do not assign high privilege permissions like Contributor, Owner and so on.
+
+1. Rotate the [Azure Automation keys](/azure/automation/automation-create-standalone-account?tabs=azureportal#manage-automation-account-keys) periodically. Key regeneration prevents future DSC or hybrid worker node registrations from using previous keys. We recommend using [extension-based hybrid workers](/azure/automation/automation-hybrid-runbook-worker) that use Azure AD authentication instead of Automation keys. Azure AD centralizes the control and management of identities and resource credentials.
+
+### Data security
+1. Secure the assets in Azure Automation including credentials, certificates, connections and encrypted variables. These assets are protected in Azure Automation using multiple levels of encryption. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can supply customer-managed keys to use for encryption of Automation assets. These keys must be present in Azure Key Vault for Automation service to be able to access the keys. See [Encryption of secure assets using customer-managed keys](/azure/automation/automation-secure-asset-encryption).
+
+1. Don't print any credentials or certificate details in the job output. An Automation job operator, who is a low-privilege user, can view this sensitive information.
+
+1. Maintain a valid [backup of Automation](/azure/automation/automation-managing-data#data-backup) configuration, such as runbooks and assets, ensuring that backups are validated and protected to maintain business continuity after an unexpected event.
+
+### Network isolation
+
+1. Use [Azure Private Link](/azure/automation/how-to/private-link-security) to securely connect Hybrid runbook workers to Azure Automation. Azure Private Endpoint is a network interface that connects you privately and securely to an Azure Automation service powered by Azure Private Link. Private Endpoint uses a private IP address from your Virtual Network (VNet) to effectively bring the Automation service into your VNet.
+
+If you want to access and manage other services privately through runbooks from Azure VNet without the need to open an outbound connection to the internet, you can execute runbooks on a Hybrid Worker that is connected to the Azure VNet.
+
+### Policies for Azure Automation
+
+Review the Azure Policy recommendations for Azure Automation and act as appropriate. See [Azure Automation policies](/azure/automation/policy-reference).
+
+## Next steps
+
+* To learn how to use Azure role-based access control (Azure RBAC), see [Manage role permissions and security in Azure Automation](/azure/automation/automation-role-based-access-control).
+* For information on how Azure protects your privacy and secures your data, see [Azure Automation data security](/azure/automation/automation-managing-data).
+* To learn about configuring the Automation account to use encryption, see [Encryption of secure assets in Azure Automation](/azure/automation/automation-secure-asset-encryption).
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write | Creates a Hybrid Runbook Worker Group. Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes a Hybrid Runbook Worker Group. Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/read | Reads a Hybrid Runbook Worker.
-Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/write | Creates a Hybrid Runbook Worker.
-Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action | Moves Hybrid Runbook Worker from one Worker Group to another.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete | Deletes a Hybrid Runbook Worker.
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
To install and use Hybrid Worker extension using REST API, follow these steps. T
} ```
-## Manage Role permissions for Hybrid Worker Groups
+## Manage Role permissions for Hybrid Worker Groups and Hybrid Workers
-You can create custom Azure Automation roles and grant following permissions to Hybrid Worker Groups. To learn more about how to create Azure Automation custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
+You can create custom Azure Automation roles and grant the following permissions to Hybrid Worker Groups and Hybrid Workers. To learn more about how to create Azure Automation custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
**Actions** | **Description** | Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a Hybrid Runbook Worker Group. Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write | Creates a Hybrid Runbook Worker Group. Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes a Hybrid Runbook Worker Group.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/read | Reads a Hybrid Runbook Worker.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/write | Creates a Hybrid Runbook Worker.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action | Moves Hybrid Runbook Worker from one Worker Group to another.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete | Deletes a Hybrid Runbook Worker.
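The actions listed above can be bundled into a single custom role definition. A sketch, assuming you want one role that manages both worker groups and individual workers; the role name and assignable scope are placeholders:

```json
{
  "Name": "Hybrid Worker Operator (example)",
  "IsCustom": true,
  "Description": "Manage Hybrid Runbook Worker groups and workers.",
  "Actions": [
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/read",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/write",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action",
    "Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscriptionId>"
  ]
}
```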
++ ## Next steps
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
+## February 2022
+
+### Permissions change in the built-in Reader role for the Automation Account
+
+**Type:** New change
+
+To strengthen the overall Azure Automation security posture, the built-in RBAC Reader role no longer has access to Automation account keys through the API call `GET /automationAccounts/agentRegistrationInformation`. For more information, see the [Reader role](/azure/automation/automation-role-based-access-control#reader).
+ ## December 2021 ### New scripts added for Azure VM management based on Azure Monitor Alert
azure-app-configuration Howto App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-app-configuration-event.md
ms.assetid:
+ms.devlang: csharp
Last updated 03/04/2020
azure-app-configuration Use Key Vault References Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
editor: ''
ms.assetid:
+ms.devlang: java
Last updated 08/11/2020
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
Title: Connect hybrid machines to Azure at scale description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using a service principal. Previously updated : 02/10/2022 Last updated : 02/16/2022
If you don't have an Azure subscription, create a [free account](https://azure.m
You can create a service principal in the Azure portal or by using Azure PowerShell. > [!NOTE]
-> To create a service principal and assign roles, your account must be a member of the **Owner** or **User Access Administrator** role in the subscription that you want to use for onboarding. If you don't have sufficient permissions to configure role assignments, the service principal might still be created, but it won't be able to onboard machines.
+> To create a service principal and assign roles, your account must be a member of the **Owner** or **User Access Administrator** role in the subscription that you want to use for onboarding.
### Azure portal
The values from the following properties are used with parameters passed to the
> [!TIP] > Make sure to use the service principal **ApplicationId** property, not the **Id** property.
-The **Azure Connected Machine Onboarding** role contains only the permissions required to onboard a machine. You can assign the service principal permission to allow its scope to include a resource group or a subscription. To add role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md) or [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
+4. Assign the **Azure Connected Machine Onboarding** role to the service principal for the designated resource group or subscription. This role contains only the permissions required to onboard a machine. Note that your account must be a member of the **Owner** or **User Access Administrator** role for the subscription to which the service principal will have access. For information on how to add role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md) or [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
## Generate the installation script from the Azure portal
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md
Title: Create a JavaScript function from the command line - Azure Functions
description: Learn how to create a JavaScript function from the command line, then publish the local Node.js project to serverless hosting in Azure Functions. Last updated 11/18/2021
+ms.devlang: javascript
azure-functions Create First Function Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-powershell.md
Title: Create a PowerShell function from the command line - Azure Functions
description: Learn how to create a PowerShell function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 11/03/2020
+ms.devlang: powershell
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md
Title: Create a TypeScript function from the command line - Azure Functions
description: Learn how to create a TypeScript function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 11/18/2021
+ms.devlang: typescript
azure-functions Durable Functions Event Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-event-publishing.md
Title: Durable Functions publishing to Azure Event Grid
description: Learn how to configure automatic Azure Event Grid publishing for Durable Functions. Last updated 05/11/2020
+ms.devlang: csharp, javascript
azure-functions Functions Event Hub Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-hub-cosmos-db.md
Last updated 11/04/2019
+ms.devlang: java
#Customer intent: As a Java developer, I want to write Java functions that process data continually (for example, from IoT sensors), and store the processing results in Azure Cosmos DB.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/) | &#x2705; | &#x2705; | | [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | | [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; |
-| [Azure Scheduler](../../scheduler/scheduler-intro.md) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; |
+| [Azure Scheduler](../../scheduler/index.yml) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; |
| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | | [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** | | [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Scheduler](../../scheduler/scheduler-intro.md) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Scheduler](../../scheduler/index.yml) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | | [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md
This article helps you use Azure Pipelines to set up continuous integration (CI)
> [!NOTE] > For special considerations when deploying apps to Azure Government, see **[Deploy apps to Azure Government Cloud](/azure/devops/pipelines/library/government-cloud).**
-[Azure Pipelines](/azure/devops/pipelines/get-started/) is used by teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
+[Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) is used by teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
The following steps will set up a CD process to deploy to this Web App.
Follow through one of the quickstarts below to set up a Build for your specific type of app:
- [ASP.NET 4 app](/azure/devops/pipelines/apps/aspnet/build-aspnet-4)
-- [ASP.NET Core app](/azure/devops/pipelines/languages/dotnet-core?tabs=yaml)
-- [Node.js app with Gulp](/azure/devops/pipelines/languages/javascript?tabs=yaml)
+- [ASP.NET Core app](/azure/devops/pipelines/ecosystems/dotnet-core)
+- [Node.js app with Gulp](/azure/devops/pipelines/ecosystems/javascript)
## Generate a service principal
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[South River Technologies](https://southrivertech.com)| |[Stabilify](http://www.stabilify.net/)| |[Stafford Associates](https://www.staffordnet.com/)|
-|[Static Networks, LLC](https://staticnetworks.com)|
+|Static Networks, LLC|
|[Steel Root](https://steelroot.us)| |[StoneFly, Inc.](https://stonefly.com)| |[Strategic Communications](https://stratcomminc.com)|
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
you can configure Application Insights Java 3.x to use an HTTP proxy:
Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties if those are set (and `http.nonProxyHosts` if needed).
-Starting from 3.2.6, authenticated proxies are supported. You can add `"user"` and `"password"` under `"proxy"` in
-the json above (or if you are using the system properties above, you can add `https.proxyUser` and `https.proxyPassword`
-system properties).
## Self-diagnostics

"Self-diagnostics" refers to internal logging from Application Insights Java 3.x.
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Title: Migrate from instrumentation keys to connection strings
+ Title: Migrate from Application Insights instrumentation keys to connection strings
description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings Last updated 02/14/2022
-# Migrate from instrumentation keys to connection strings
+# Migrate from Application Insights instrumentation keys to connection strings
This guide walks through migrating from [instrumentation keys](separate-resources.md#about-resources-and-instrumentation-keys) to [connection strings](sdk-connection-string.md#overview).
Use environment variables to pass a connection string to the Application Insight
To set a connection string via environment variable, place the value of the connection string into an environment variable named "APPLICATIONINSIGHTS_CONNECTION_STRING".
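As an illustration only (the key and endpoint below are placeholders, and `parse_connection_string` is a hypothetical helper, not part of any Application Insights SDK), a connection string read from that environment variable is a semicolon-delimited list of `Key=Value` pairs and can be split apart like this:

```python
import os

def parse_connection_string(value):
    """Split a 'Key=Value;Key=Value' connection string into a dict."""
    pairs = (part.split("=", 1) for part in value.split(";") if part)
    return {key: val for key, val in pairs}

# Placeholder values only -- use the connection string from your own resource.
os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"] = (
    "InstrumentationKey=00000000-0000-0000-0000-000000000000;"
    "IngestionEndpoint=https://eastus-8.in.applicationinsights.azure.com/"
)

settings = parse_connection_string(os.environ["APPLICATIONINSIGHTS_CONNECTION_STRING"])
print(settings["IngestionEndpoint"])
```

The SDKs read this variable themselves; the sketch just shows the shape of the value you're expected to supply.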
-This process can be automated in your Azure deployments. For example, the following ARM template shows how you can automatically include the correct connection string with an App Services deployment (be sure to include any other App Settings your app requires):
+This process can be [automated in your Azure deployments](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-with-arm-templates-and-azure-portal). For example, the following ARM template shows how you can automatically include the correct connection string with an App Services deployment (be sure to include any other App Settings your app requires):
```JSON {
Connection strings provide a single configuration setting and eliminate the need
### Missing data
-1. Confirm you're using a [supported SDK version](#supported-sdk-versions). If you use Application Insights integration in another Azure product offering, check its documentation on how to properly configure a connection string.
+- Confirm you're using a [supported SDK version](#supported-sdk-versions). If you use Application Insights integration in another Azure product offering, check its documentation on how to properly configure a connection string.
-1. Confirm you aren't setting both an instrumentation key and connection string at the same time. Instrumentation key settings should be removed from your configuration.
+- Confirm you aren't setting both an instrumentation key and connection string at the same time. Instrumentation key settings should be removed from your configuration.
-1. Confirm your connection string is exactly as provided in the Azure portal.
+- Confirm your connection string is exactly as provided in the Azure portal.
### Environment variables aren't working
You can't enable [Azure AD authentication](azure-ad-authentication.md) for [auto
### What is the difference between global and regional ingestion?
-Global ingestion sends all telemetry data to a single endpoint, no matter where this data will end up or be stored. Regional ingestion allows you to define specific endpoints per region for data ingestion, ensuring data stays within a specific region during processing and storage.
+Global ingestion sends all telemetry data to a single endpoint, no matter where this data will be stored. Regional ingestion allows you to define specific endpoints per region for data ingestion, ensuring data stays within a specific region during processing and storage.
### How do connection strings impact the billing?
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pricing.md
Previously updated : 11/26/2021 Last updated : 02/17/2021

# Manage usage and costs for Application Insights
+This article describes how to proactively monitor and control Application Insights costs.
+
[Monitoring usage and estimated costs](../usage-estimated-costs.md) describes usage and estimated costs across Azure Monitor features using [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill).
+ > [!NOTE]
-> This article describes how to understand and control your costs for Application Insights. A related article, [Monitoring usage and estimated costs](..//usage-estimated-costs.md) describes how to view usage and estimated costs across multiple Azure monitoring features using [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill). All prices and costs in this article are for example purposes only.
+> All prices and costs in this article are for example purposes only.
-Application Insights is designed to get everything you need to monitor the availability, performance, and usage of your web applications, whether they're hosted on Azure or on-premises. Application Insights supports popular languages and frameworks, such as .NET, Java, and Node.js, and integrates with DevOps processes and tools like Azure DevOps, Jira, and PagerDuty. It's important to understand what determines the costs of monitoring your applications. In this article, we review what drives your application monitoring costs and how you can proactively monitor and control them.
+<!-- App Insights monitoring features (availability, performance, usage, etc.) Supported languages, integration with specific tools (Azure DevOps, Jira, and PagerDuty, etc.) should be documented elsewhere. (e.g. platforms.md) -->
If you have questions about how pricing works for Application Insights, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html). ## Pricing model
-The pricing for [Azure Application Insights][start] is a **Pay-As-You-Go** model based on data volume ingested and optionally for longer data retention. Each Application Insights resource is charged as a separate service and contributes to the bill for your Azure subscription. Data volume is measured as the size of the uncompressed JSON data package that's received by Application Insights from your application. Data volume is measured in GB (10^9 bytes). There is no data volume charge for using the [Live Metrics Stream](./live-stream.md). On your Azure bill or in [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill), your data ingestion and data retention for a classic Application Insights resource will be reported with a meter category of **Log Analytics**.
+The pricing for [Azure Application Insights][start] is a **Pay-As-You-Go** model based on data volume ingested and optionally for longer data retention. Each Application Insights resource is charged as a separate service and contributes to the bill for your Azure subscription. Data volume is measured as the size of the uncompressed JSON data package that's received by Application Insights from your application. Data volume is measured in GB (10^9 bytes). There's no data volume charge for using the [Live Metrics Stream](./live-stream.md). On your Azure bill or in [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill), your data ingestion and data retention for a classic Application Insights resource will be reported with a meter category of **Log Analytics**.
-[Multi-step web tests](./availability-multistep.md) incur an additional charge. Multi-step web tests are web tests that perform a sequence of actions. There's no separate charge for *ping tests* of a single page. Telemetry from ping tests and multi-step tests is charged the same as other telemetry from your app.
+[Multi-step web tests](./availability-multistep.md) incur extra charges. Multi-step web tests are web tests that perform a sequence of actions. There's no separate charge for *ping tests* of a single page. Telemetry from ping tests and multi-step tests is charged the same as other telemetry from your app.
-The Application Insights option to [Enable alerting on custom metric dimensions](./pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can also generate in additional costs because this can result in the creation of additional pre-aggregation metrics. [Learn more](./pre-aggregated-metrics-log-metrics.md) about log-based and pre-aggregated metrics in Application Insights and about [pricing](https://azure.microsoft.com/pricing/details/monitor/) for Azure Monitor custom metrics.
+The Application Insights option to [Enable alerting on custom metric dimensions](./pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can also increase costs because this can result in the creation of more pre-aggregation metrics. [Learn more](./pre-aggregated-metrics-log-metrics.md) about log-based and pre-aggregated metrics in Application Insights and about [pricing](https://azure.microsoft.com/pricing/details/monitor/) for Azure Monitor custom metrics.
### Workspace-based Application Insights
If you're not yet using Application Insights, you can use the [Azure Monitor pri
### Learn from what similar applications collect
-In the Azure Monitoring Pricing calculator for Application Insights, click to enable the **Estimate data volume based on application activity**. Here you can provide inputs about your application (requests per month and page views per month, in case you will collect client-side telemetry), and then the calculator will tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configuration (e.g some have default [sampling](./sampling.md), some have no sampling etc.), so you still have the control to reduce the volume of data you ingest far below the median level using sampling.
+In the Azure Monitoring Pricing calculator for Application Insights, click to enable the **Estimate data volume based on application activity**. Here you can provide inputs about your application (requests per month and page views per month, if you'll collect client-side telemetry), and then the calculator will tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configurations (e.g., some have default [sampling](./sampling.md), some have no sampling, etc.), so you still have the control to reduce the volume of data you ingest far below the median level using sampling.
### Data collection when using sampling
-With the ASP.NET SDK's [adaptive sampling](sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring. If the application produces a low amount of telemetry, such as when debugging or due to low usage, items won't be dropped by the sampling processor as long as volume is below the configured events per second level. For a high volume application, with the default threshold of five events per second, adaptive sampling will limit the number of daily events to 432,000. Using a typical average event size of 1 KB, this corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application since the sampling is done local to each node.
+With the ASP.NET SDK's [adaptive sampling](sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring. If the application produces a low amount of telemetry, such as when debugging or due to low usage, items won't be dropped by the sampling processor as long as volume is below the configured events per second level. For a high volume application, with the default threshold of five events per second, adaptive sampling will limit the number of daily events to 432,000. Considering a typical average event size of 1 KB, this corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application since the sampling is done local to each node.
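The arithmetic behind those figures can be checked directly. This is a sketch using the article's own assumptions: the default 5 events/second adaptive sampling threshold, a "typical" 1 KB (1,000-byte) average event, and the 10^9-byte GB the billing model uses:

```python
EVENTS_PER_SECOND = 5     # default adaptive sampling threshold
AVG_EVENT_BYTES = 1_000   # "typical average event size of 1 KB"
DAYS_PER_MONTH = 31

daily_events = EVENTS_PER_SECOND * 60 * 60 * 24
monthly_gb = daily_events * AVG_EVENT_BYTES * DAYS_PER_MONTH / 1e9  # GB = 10^9 bytes

print(daily_events)          # 432000 events per day per node
print(round(monthly_gb, 1))  # 13.4 GB per 31-day month per node
```

Because sampling runs locally on each node, both numbers scale linearly with the number of nodes hosting your application.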
For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](./sampling.md#ingestion-sampling), which samples when the data is received by Application Insights based on a percentage of data to retain, or [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers ## Viewing Application Insights usage on your Azure bill
-The easiest way to see the billed usage for a single Application Insights resource which is not a workspace-baed resource is to go to the resource's Overview page and click **View Cost** in the upper right corner. You might need additional access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md)).
+The easiest way to see the billed usage for a single Application Insights resource that isn't a workspace-based resource is to go to the resource's Overview page and click **View Cost** in the upper right corner. You might need elevated access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md)).
-To learn more, Azure provides a great deal of useful functionality in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) hub. For instance, the "Cost analysis" functionality enables you to view your spends for Azure resources. Adding a filter by resource type (to microsoft.insights/components for Application Insights) will allow you to track your spending. Then for "Group by" select "Meter category" or "Meter". Note that Application Insights billed usage for data ingestion and data retention will show up as **Log Analytics** for the Meter category since Log Analytics backend for all Azure Monitor logs.
+To learn more, Azure provides a great deal of useful functionality in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) hub. For instance, the "Cost analysis" functionality enables you to view your spending for Azure resources. Adding a filter by resource type (to microsoft.insights/components for Application Insights) will allow you to track your spending. Then for "Group by" select "Meter category" or "Meter". Application Insights billed usage for data ingestion and data retention will show up as **Log Analytics** for the Meter category, since Log Analytics is the backend for all Azure Monitor logs.
> [!NOTE] > Application Insights billing for data ingestion and data retention is reported as coming from the **Log Analytics** service (Meter category in Azure Cost Management + Billing). Even more understanding of your usage can be gained by [downloading your usage from the Azure portal](../../cost-management-billing/understand/download-azure-daily-usage.md).
-In the downloaded spreadsheet, you can see usage per Azure resource per day. In this Excel spreadsheet, usage from your Application Insights resources can be found by first filtering on the "Meter Category" column to show "Application Insights" and "Log Analytics", and then adding a filter on the "Instance ID" column which is "contains microsoft.insights/components". Most Application Insights usage is reported on meters with the Meter Category of Log Analytics, since there is a single logs backend for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multi-step web tests are reported with a Meter Category of Application Insights. The usage is shown in the "Consumed Quantity" column and the unit for each entry is shown in the "Unit of Measure" column. More details are available to help you [understand your Microsoft Azure bill](../../cost-management-billing/understand/review-individual-bill.md).
+In the downloaded spreadsheet, you can see usage per Azure resource per day. In this Excel spreadsheet, usage from your Application Insights resources can be found by first filtering on the "Meter Category" column to show "Application Insights" and "Log Analytics", and then adding a "contains microsoft.insights/components" filter on the "Instance ID" column. Most Application Insights usage is reported on meters with the Meter Category of Log Analytics, since there's a single logs backend for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multi-step web tests are reported with a Meter Category of Application Insights. The usage is shown in the "Consumed Quantity" column and the unit for each entry is shown in the "Unit of Measure" column. More details are available to help you [understand your Microsoft Azure bill](../../cost-management-billing/understand/review-individual-bill.md).
## Understand your usage and optimize your costs <a name="understand-your-usage-and-estimate-costs"></a>
C. View data volume trends for the past month.
D. Enable data ingestion [sampling](./sampling.md). E. Set the daily data volume cap.
-(Note that all prices displayed in screenshots in this article are for example purposes only. For current prices in your currency and region, see [Application Insights pricing][pricing].)
+(All prices displayed in screenshots in this article are for example purposes only. For current prices in your currency and region, see [Application Insights pricing][pricing].)
To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named "Data point volume", and then select the *Apply splitting* option to split the data by "Telemetry item type".
To learn more about your data volumes, selecting **Metrics** for your Applicatio
### Queries to understand data volume details
-There are two approaches to investigating data volumes for Application Insights. The first uses aggregated information in the `systemEvents` table, and the second uses the `_BilledSize` property, which is available on each ingested event. `systemEvents` will not have data size information for [workspace-based-application-insights](#data-volume-for-workspace-based-application-insights-resources).
+There are two approaches to investigating data volumes for Application Insights. The first uses aggregated information in the `systemEvents` table, and the second uses the `_BilledSize` property, which is available on each ingested event. `systemEvents` won't have data size information for [workspace-based-application-insights](#data-volume-for-workspace-based-application-insights-resources).
#### Using aggregated data volume information
systemEvents
| summarize sum(BillingTelemetrySizeInBytes) by BillingTelemetryType, bin(timestamp, 1d) | render barchart ```
-Note that this query can be used in an [Azure Log Alert](../alerts/alerts-unified-log.md) to set up alerting on data volumes.
+This query can be used in an [Azure Log Alert](../alerts/alerts-unified-log.md) to set up alerting on data volumes.
To learn more about your telemetry data changes, we can get the count of events by type using the query:
The volume of data you send can be managed using the following techniques:
* **Sampling**: You can use sampling to reduce the amount of telemetry that's sent from your server and client apps, with minimal distortion of metrics. Sampling is the primary tool you can use to tune the amount of data you send. Learn more about [sampling features](./sampling.md).
-* **Limit Ajax calls**: You can [limit the number of Ajax calls that can be reported](./javascript.md#configuration) in every page view, or switch off Ajax reporting. Note that disabling Ajax calls will disable [JavaScript correlation](./javascript.md#enable-correlation).
+* **Limit Ajax calls**: You can [limit the number of Ajax calls that can be reported](./javascript.md#configuration) in every page view, or switch off Ajax reporting. Disabling Ajax calls will disable [JavaScript correlation](./javascript.md#enable-correlation).
* **Disable unneeded modules**: [Edit ApplicationInsights.config](./configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data are inessential.
The volume of data you send can be managed using the following techniques:
We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription has a spending limit, the daily cap dialog has instructions to remove the spending limit and enable the daily cap to be raised beyond 32.3 MB/day.
-* **Throttling**: Throttling limits the data rate to 32,000 events per second, averaged over 1 minute per instrumentation key. The volume of data that your app sends is assessed every minute. If it exceeds the per-second rate averaged over the minute, the server refuses some requests. The SDK buffers the data and then tries to resend it. It spreads out a surge over several minutes. If your app consistently sends data at more than the throttling rate, some data will be dropped. (The ASP.NET, Java, and JavaScript SDKs try to resend data this way; other SDKs might simply drop throttled data.) If throttling occurs, a notification warning alerts you that this has occurred.
+* **Throttling**: Throttling limits the data rate to 32,000 events per second, averaged over 1 minute per instrumentation key. The volume of data that your app sends is assessed every minute. If it exceeds the per-second rate averaged over the minute, the server refuses some requests. The SDK buffers the data and then tries to resend it. It spreads out a surge over several minutes. If your app consistently sends data at more than the throttling rate, some data will be dropped. (The ASP.NET, Java, and JavaScript SDKs try to resend data this way; other SDKs might drop throttled data.) If throttling occurs, a notification warning alerts you that this has occurred.
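A minimal sketch of that per-minute assessment (illustrative only; `is_throttled` is a hypothetical helper, and the actual service-side logic isn't published): the service compares one minute's event count against the 32,000 events/second limit averaged over that minute, i.e. a budget of 32,000 × 60 = 1,920,000 events per minute per instrumentation key:

```python
LIMIT_EVENTS_PER_SECOND = 32_000

def is_throttled(events_in_minute):
    """True if a minute's volume exceeds the per-second rate averaged over 60 s."""
    return events_in_minute / 60 > LIMIT_EVENTS_PER_SECOND

print(is_throttled(1_900_000))  # under the 1,920,000-event minute budget: False
print(is_throttled(2_000_000))  # over the minute budget, so some requests refused: True
```

In practice the SDK-side buffering described above means brief surges past this budget are usually retried rather than lost.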
## Manage your maximum daily data volume
-You can use the daily volume cap to limit the data collected. However, if the cap is met, a loss of all telemetry sent from your application for the remainder of the day occurs. It is *not advisable* to have your application hit the daily cap. You can't track the health and performance of your application after it reaches the daily cap.
+You can use the daily volume cap to limit the data collected. However, if the cap is met, a loss of all telemetry sent from your application for the remainder of the day occurs. It *isn't advisable* to have your application hit the daily cap. You can't track the health and performance of your application after it reaches the daily cap.
> [!WARNING] > If you have a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) to limit ingestion and costs. The daily cap in Application Insights may not limit ingestion in all cases to the selected level. (If your Application Insights resource is ingesting a lot of data, the Application Insights daily cap might need to be raised.)
To change the retention, from your Application Insights resource, go to the **Us
![Screenshot that shows where to change the data retention period.](./media/pricing/pricing-005.png)
-When the retention is lowered, there is a several day grace period before the oldest data is removed.
+When the retention is lowered, a several-day grace period applies before the oldest data is removed.
The retention can also be [set programmatically using PowerShell](powershell.md#set-the-data-retention) using the `retentionInDays` parameter. If you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter, which may be useful for compliance-related scenarios. This purge functionality is only exposed via Azure Resource Manager and should be used with extreme care. The daily reset time for the data volume cap can be configured using Azure Resource Manager to set the `dailyQuotaResetTime` parameter.
To disable the daily volume cap e-mails, under the **Configure** section of your
## Legacy Enterprise (Per Node) pricing tier
-For early adopters of Azure Application Insights, there are still two possible pricing tiers: Basic and Enterprise. The Basic pricing tier is the same as described above and is the default tier. It includes all Enterprise tier features, at no additional cost. The Basic tier bills primarily on the volume of data that's ingested.
+For early adopters of Azure Application Insights, there are still two possible pricing tiers: Basic and Enterprise. The Basic pricing tier is the same as described above and is the default tier. It includes all Enterprise tier features, at no extra cost. The Basic tier bills primarily on the volume of data that's ingested.
These legacy pricing tiers have been renamed. The Enterprise pricing tier is now called **Per Node** and the Basic pricing tier is now called **Per GB**. These new names are used below and in the Azure portal.
-The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you are charged for data ingested above the included allowance. If you are using Operations Management Suite, you should choose the Per Node tier. In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](..//usage-estimated-costs.md).
+The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you're charged for data ingested above the included allowance. If you're using Operations Management Suite, you should choose the Per Node tier. In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](../usage-estimated-costs.md).
For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/application-insights/). ### Understanding billed usage on the legacy Enterprise (Per Node) tier
-As described below in more detail, the legacy Enterprise (Per Node) tier combines usage from across all Application Insights resources in a subscription to calculate the number of nodes and the data overage. Due to this combination process, **usage for all Application Insights resources in a subscription are reported against just one of the resources**. This makes reconciling your [billed usage](#viewing-application-insights-usage-on-your-azure-bill) with the usage you observe for each Application Insights resources very complicated.
+As described below in more detail, the legacy Enterprise (Per Node) tier combines usage from across all Application Insights resources in a subscription to calculate the number of nodes and the data overage. Due to this combination process, **usage for all Application Insights resources in a subscription are reported against just one of the resources**. This makes reconciling your [billed usage](#viewing-application-insights-usage-on-your-azure-bill) with the usage you observe for each Application Insights resource complicated.
> [!WARNING] > Because of the complexity of tracking and understanding usage of Application Insights resources in the legacy Enterprise (Per Node) tier we strongly recommend using the current Pay-As-You-Go pricing tier. ### Per Node tier and Operations Management Suite subscription entitlements
-Customers who purchase Operations Management Suite E1 and E2 can get Application Insights Per Node as an additional component at no additional cost as [previously announced](/archive/blogs/msoms/azure-application-insights-enterprise-as-part-of-operations-management-suite-subscription). Specifically, each unit of Operations Management Suite E1 and E2 includes an entitlement to one node of the Application Insights Per Node tier. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no additional cost. The tier is described in more detailed later in the article.
+Customers who purchase Operations Management Suite E1 and E2 can get Application Insights Per Node as a supplemental component at no extra cost as [previously announced](/archive/blogs/msoms/azure-application-insights-enterprise-as-part-of-operations-management-suite-subscription). Specifically, each unit of Operations Management Suite E1 and E2 includes an entitlement to one node of the Application Insights Per Node tier. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost. The tier is described in more detail later in the article.
Because this tier is applicable only to customers with an Operations Management Suite subscription, customers who don't have an Operations Management Suite subscription don't see an option to select this tier.
Because this tier is applicable only to customers with an Operations Management
* You pay for each node that sends telemetry for any apps in the Per Node tier. * A *node* is a physical or virtual server machine or a platform-as-a-service role instance that hosts your app.
- * Development machines, client browsers, and mobile devices do not count as nodes.
+ * Development machines, client browsers, and mobile devices don't count as nodes.
* If your app has several components that send telemetry, such as a web service and a back-end worker, the components are counted separately. * [Live Metrics Stream](./live-stream.md) data isn't counted for pricing purposes. In a subscription, your charges are per node, not per app. If you have five nodes that send telemetry for 12 apps, the charge is for five nodes. * Although charges are quoted per month, you're charged only for any hour in which a node sends telemetry from an app. The hourly charge is the quoted monthly charge divided by 744 (the number of hours in a 31-day month).
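The hourly proration rule above is simple arithmetic; here's a quick sketch (the monthly price used is a made-up placeholder, not a published rate):

```python
# Sketch of the Per Node hourly charge rule described above: the quoted
# monthly charge is divided by 744 (hours in a 31-day month), and a node
# is billed only for hours in which it sent telemetry.

def per_node_charge(monthly_price: float, node_hours: int) -> float:
    """Charge for one node that sent telemetry during `node_hours` hours."""
    hourly_rate = monthly_price / 744  # 744 = 24 hours * 31 days
    return hourly_rate * node_hours

# A node that reported telemetry around the clock for a full 31-day month
# accrues exactly the quoted monthly price.
assert abs(per_node_charge(100.0, 744) - 100.0) < 1e-9
```

A node that is quiet for part of the month accrues proportionally less, which is why the text stresses that charges apply only to hours in which telemetry was sent.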
You can write a script to set the pricing tier by using Azure Resource Managemen
## Next steps
-* [sampling](./sampling.md)
+[Sampling](./sampling.md) in Application Insights is the recommended way to reduce telemetry traffic, data costs, and storage costs.
[api]: app-insights-api-custom-events-metrics.md [apiproperties]: app-insights-api-custom-events-metrics.md#properties [start]: ./app-insights-overview.md [pricing]: https://azure.microsoft.com/pricing/details/application-insights/
+## Troubleshooting
+
+### Unexpected usage or estimated cost
+
+Lower your bill with updated versions of the ASP.NET Core SDK and Worker Service SDK, which [don't collect counters by default](eventcounters.md#default-counters-collected).
+
+### Microsoft Q&A question page
+
+If you have questions about how pricing works for Application Insights, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).
azure-monitor Resources Roles Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resources-roles-access-control.md
Title: Resources, roles and access control in Azure Application Insights | Micro
description: Owners, contributors and readers of your organization's insights. Last updated 02/14/2019 -+
First, some definitions:
* [**Resource group**][group] - Every resource belongs to one group. A group is a convenient way to manage related resources, particularly for access control. For example, into one resource group you could put a Web App, an Application Insights resource to monitor the app, and a Storage resource to keep exported data.
-* [**Subscription**](https://portal.azure.com) - To use Application Insights or other Azure resources, you sign in to an Azure subscription. Every resource group belongs to one Azure subscription, where you choose your price package and, if it's an organization subscription, choose the members and their access permissions.
+* [**Subscription**](https://portal.azure.com) - To use Application Insights or other Azure resources, you sign in to an Azure subscription. Every resource group belongs to one Azure subscription, where you choose your price package. If it's an organization subscription, the owner may choose the members and their access permissions.
* [**Microsoft account**][account] - The username and password that you use to sign in to Microsoft Azure subscriptions, XBox Live, Outlook.com, and other Microsoft services. ## <a name="access"></a> Control access in the resource group
The user must have a [Microsoft Account][account], or access to their [organizat
#### Navigate to resource group or directly to the resource itself
-Choose **Access control (IAM)** from the left-hand menu.
+1. On the resource or resource group, assign the Contributor role by using **Access control (IAM)**.
-![Screenshot of Access control button in Azure portal](./media/resources-roles-access-control/0001-access-control.png)
-
-Select **Add role assignment**
-
-![Screenshot of Access control menu with add button highlighted in red](./media/resources-roles-access-control/0002-add.png)
-
-The **Add permissions** view below is primarily specific to Application Insights resources, if you were viewing the access control permissions from a higher level like resource groups, you would see additional non-Application Insights-centric roles.
-
-To view information on all Azure role-based access control built-in roles use the [official reference content](../../role-based-access-control/built-in-roles.md).
-
-![Screenshot of Access control user role list](./media/resources-roles-access-control/0003-user-roles.png)
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
#### Select a role
Where applicable we link to the associated official reference documentation.
| [Contributor](../../role-based-access-control/built-in-roles.md#contributor) |Can edit anything, including all resources. | | [Application Insights Component contributor](../../role-based-access-control/built-in-roles.md#application-insights-component-contributor) |Can edit Application Insights resources. | | [Reader](../../role-based-access-control/built-in-roles.md#reader) |Can view but not change anything. |
-| [Application Insights Snapshot Debugger](../../role-based-access-control/built-in-roles.md#application-insights-snapshot-debugger) | Gives the user permission to use Application Insights Snapshot Debugger features. Note that this role is included in neither the Owner nor Contributor roles. |
+| [Application Insights Snapshot Debugger](../../role-based-access-control/built-in-roles.md#application-insights-snapshot-debugger) | Gives the user permission to use Application Insights Snapshot Debugger features. This role is included in neither the Owner nor Contributor roles. |
| Azure Service Deploy Release Management Contributor | Contributor role for services deploying through Azure Service Deploy. | | [Data Purger](../../role-based-access-control/built-in-roles.md#data-purger) | Special role for purging personal data. See our [guidance for personal data](../logs/personal-data-mgmt.md) for more information. | | ExpressRoute Administrator | Can create delete and manage express routes.|
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
See also: [Regions that require endpoint modification](./custom-endpoints.md#reg
## Connection string examples
-### Minimal valid connection string
-
-`InstrumentationKey=00000000-0000-0000-0000-000000000000;`
-
-In this example, only the Instrumentation Key has been set.
--- Authorization scheme defaults to "ikey" -- Instrumentation Key: 00000000-0000-0000-0000-000000000000-- The regional service URIs are based on the [SDK defaults](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/BASE/src/Microsoft.ApplicationInsights/Extensibility/Implementation/Endpoints/Constants.cs) and will connect to the public global Azure:
- - Ingestion: `https://dc.services.visualstudio.com/`
- - Live metrics: `https://rt.services.visualstudio.com/`
- - Profiler: `https://profiler.monitor.azure.com/`
- - Debugger: `https://snapshot.monitor.azure.com/`
--- ### Connection string with endpoint suffix `InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=ai.contoso.com;`
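To illustrate how the key/value pairs and the endpoint suffix fit together, here's a hedged sketch in Python. The `dc` ingestion prefix and the global default endpoint follow the values listed in this section; the parsing helper itself is a simplified assumption, not actual SDK code (real SDKs also handle explicit endpoint overrides).

```python
# Simplified sketch: split a connection string into key/value pairs and
# derive the ingestion endpoint from EndpointSuffix, falling back to the
# public global Azure default described in this article.

def parse_connection_string(cs: str) -> dict:
    pairs = (item.split("=", 1) for item in cs.split(";") if item)
    return {key.strip(): value.strip() for key, value in pairs}

def ingestion_endpoint(settings: dict) -> str:
    suffix = settings.get("EndpointSuffix")
    if suffix:
        return f"https://dc.{suffix}/"
    # SDK default for public global Azure when no suffix is given.
    return "https://dc.services.visualstudio.com/"

cs = "InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=ai.contoso.com;"
assert ingestion_endpoint(parse_connection_string(cs)) == "https://dc.ai.contoso.com/"
```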
You can set the connection string in the `applicationinsights.json` configuratio
} ```
-See [connection string configuration](./java-standalone-config.md#connection-string) for more details.
+For more information, see [connection string configuration](./java-standalone-config.md#connection-string).
For Application Insights Java 2.x, you can set the connection string in the `ApplicationInsights.xml` configuration file:
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger.md
Access to snapshots is protected by Azure role-based access control (Azure RBAC)
Subscription owners should assign the `Application Insights Snapshot Debugger` role to users who will inspect snapshots. This role can be assigned to individual users or groups by subscription owners for the target Application Insights resource or its resource group or subscription.
-1. Navigate to the Application Insights resource in the Azure portal.
-1. Click **Access control (IAM)**.
-1. Click the **+Add role assignment** button.
-1. Select **Application Insights Snapshot Debugger** from the **Roles** drop-down list.
-1. Search for and enter a name for the user to add.
-1. Click the **Save** button to add the user to the role.
+1. Assign the **Application Insights Snapshot Debugger** role to the user or group.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
> [!IMPORTANT]
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
During the onboarding or update process, granting the **Monitoring Metrics Publi
You can also manually grant this role from the Azure portal by performing the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the Azure portal, click **All services** found in the upper left-hand corner. In the list of resources, type **Kubernetes**. As you begin typing, the list filters based on your input. Select **Azure Kubernetes**.
-3. In the list of Kubernetes clusters, select one from the list.
-2. From the left-hand menu, click **Access control (IAM)**.
-3. Select **+ Add** to add a role assignment and select the **Monitoring Metrics Publisher** role and under the **Select** box type **AKS** to filter the results on just the clusters service principals defined in the subscription. Select the one from the list that is specific to that cluster.
-4. Select **Save** to finish assigning the role.
+1. Assign the **Monitoring Metrics Publisher** role to the cluster's service principal.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Container insights is enabled but not reporting any information
azure-monitor Collect Sccm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/collect-sccm.md
description: This article shows the steps to connect Configuration Manager to wo
Previously updated : 08/02/2021 Last updated : 08/02/2021 # Connect Configuration Manager to Azure Monitor
In the following procedure, you grant the *Contributor* role in your Log Analyti
> You must specify permissions in the Log Analytics workspace for Configuration Manager. Otherwise, you receive an error message when you use the configuration wizard in Configuration Manager. >
-1. In the Azure portal, click **All services** found in the upper left-hand corner. In the list of resources, type **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics**.
+1. Assign the Contributor role in the Log Analytics workspace.
-2. In your list of Log Analytics workspaces, select the workspace to modify.
-
-3. From the left pane, select **Access control (IAM)**.
-
-4. In the Access control (IAM) page, click **Add role assignment** and the **Add role assignment** pane appears.
-
-5. In the **Add role assignment** pane, under the **Role** drop-down list select the **Contributor** role.
-
-6. Under the **Assign access to** drop-down list, select the Configuration Manager application created in AD earlier, and then click **OK**.
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Download and install the agent
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 02/13/2022 Last updated : 02/17/2022
Billing for the commitment tiers is done on a daily basis. [Learn more](https://
<a name="data-size"></a> <a name="free-data-types"></a>
-In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage) and [Operation](/azure/azure-monitor/reference/tables/operation) types. Some solutions have more solution-specific policies about free data ingestion, for instance [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180-days of a Server Assessment. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (10^9 bytes).
+In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage) and [Operation](/azure/azure-monitor/reference/tables/operation) types. Some solutions have more solution-specific policies about free data ingestion, for instance [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180 days of a Server Assessment. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (10^9 bytes).
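The size rule just described (sum the stored string properties, skipping the excluded system columns) can be sketched roughly as follows. This is only an approximation for illustration, not an exact replica of the service-side calculation, and the sample event is made up.

```python
# Rough illustration of the billed-size rule: sum the string length of each
# stored property value, skipping the excluded system columns listed above.

EXCLUDED = {"_ResourceId", "_SubscriptionId", "_ItemId",
            "_IsBillable", "_BilledSize", "Type"}

def approx_event_size(event: dict) -> int:
    """Approximate billable bytes for one event (illustrative only)."""
    return sum(len(str(value)) for key, value in event.items()
               if key not in EXCLUDED)

event = {"Computer": "web-01", "Message": "disk warning",
         "_ResourceId": "/subscriptions/xyz", "Type": "Event"}
# Only "web-01" and "disk warning" count toward the size.
assert approx_event_size(event) == len("web-01") + len("disk warning")
```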
Also, some solutions, such as [Microsoft Defender for Cloud](https://azure.microsoft.com/pricing/details/azure-defender/), [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
The easiest way to view your billed usage for a particular Log Analytics workspa
Alternatively, you can start in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=%2fazure%2fbilling%2fTOC.json) hub. Here you can use the "Cost analysis" functionality to view your Azure resource expenses. To track your Log Analytics expenses, you can add a filter by "Resource type" (microsoft.operationalinsights/workspace for Log Analytics and microsoft.operationalinsights/cluster for Log Analytics clusters). For **Group by**, select **Meter category** or **Meter**. Other services, like Microsoft Defender for Cloud and Microsoft Sentinel, also bill their usage against Log Analytics workspace resources. To see the mapping to the service name, you can select the Table view instead of a chart.
+<a name="export-usage"></a>
+<a name="download-usage"></a>
+ To gain more understanding of your usage, you can [download your usage from the Azure portal](../../cost-management-billing/understand/download-azure-daily-usage.md). For step-by-step instructions, review this [tutorial](../../cost-management-billing/costs/tutorial-export-acm-data.md). In the downloaded spreadsheet, you can see usage per Azure resource (for example, Log Analytics workspace) per day. In this Excel spreadsheet, usage from your Log Analytics workspaces can be found by first filtering on the "Meter Category" column to show "Log Analytics", "Insight and Analytics" (used by some of the legacy pricing tiers), and "Azure Monitor" (used by commitment tier pricing tiers), and then adding a filter on the "Instance ID" column that is "contains workspace" or "contains cluster" (the latter to include Log Analytics Cluster usage). The usage is shown in the "Consumed Quantity" column, and the unit for each entry is shown in the "Unit of Measure" column. For more information, see [Review your individual Azure subscription bill](../../cost-management-billing/understand/review-individual-bill.md).
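The spreadsheet filtering steps above can be sketched as code; the row values below are illustrative placeholders, not real billing data.

```python
# Sketch of the manual filtering described above, applied to rows from the
# exported usage spreadsheet: keep rows whose "Meter Category" belongs to
# Log Analytics billing and whose "Instance ID" references a workspace or
# cluster.

LOG_METERS = {"Log Analytics", "Insight and Analytics", "Azure Monitor"}

def log_analytics_rows(rows):
    return [row for row in rows
            if row["Meter Category"] in LOG_METERS
            and ("workspaces" in row["Instance ID"]
                 or "cluster" in row["Instance ID"])]

rows = [
    {"Meter Category": "Azure Monitor",
     "Instance ID": ".../workspaces/ws1", "Consumed Quantity": 12.5},
    {"Meter Category": "Virtual Machines",
     "Instance ID": ".../vm1", "Consumed Quantity": 3.0},
]
assert [r["Consumed Quantity"] for r in log_analytics_rows(rows)] == [12.5]
```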
None of the legacy pricing tiers have regional-based pricing.
## Log Analytics and Microsoft Defender for Cloud <a name="ASC"></a>
-[Microsoft Defender for servers (Defender for Cloud)](../../security-center/index.yml) billing is closely tied to Log Analytics billing. Microsoft Defender for Cloud [bills by the number of monitored services](https://azure.microsoft.com/pricing/details/azure-defender/) and provides 500 MB/server/day data allocation that is applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security) (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus) and the Update and UpdateSummary data types when the Update Management solution isn't running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)). The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
+[Microsoft Defender for Servers (part of Defender for Cloud)](../../security-center/index.yml) billing is closely tied to Log Analytics billing. Microsoft Defender for Servers [bills by the number of monitored services](https://azure.microsoft.com/pricing/details/azure-defender/) and provides 500 MB/server/day data allocation that is applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security) (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus) and the Update and UpdateSummary data types when the Update Management solution isn't running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)). The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
+
+To view the daily Defender for Servers data allocations for a workspace, you need to [export your usage details](#viewing-log-analytics-usage-on-your-azure-bill), open the usage spreadsheet, and filter the meter category to "Insight and Analytics". You'll then see usage with the meter name "Data Included per Node", which has a zero price per GB. The consumed quantity column shows the number of GBs of Defender for Cloud data allocation for the day. (If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter also includes the data allocations from that Log Analytics pricing tier.)
## Change the data retention period
This query isn't an exact replication of how usage is calculated, but it provide
> [!NOTE] > To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
+<a name="allocations"></a>
+
+## Viewing data allocation benefits
+
+To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to [export your usage details](#viewing-log-analytics-usage-on-your-azure-bill). Open the exported usage spreadsheet and filter the "Instance ID" column to your workspace. (To select all of your workspaces in the spreadsheet, filter the Instance ID column to "contains /workspaces/".) Next, filter the ResourceRate column to show only rows where this is equal to zero. Now you will see the data allocations from these various sources.
+
+> [!NOTE]
+> Data allocations from the Defender for Servers 500 MB/server/day benefit appear in rows with the meter name "Data Included per Node" and the meter category "Insight and Analytics" (the name of a legacy offer still used with this meter). If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter also includes the data allocations from that Log Analytics pricing tier.
 ## Late-arriving data Situations can arise where data is ingested with old timestamps. For example, if an agent can't communicate with Log Analytics because of a connectivity issue, or when a host has an incorrect date/time. This can manifest itself as an apparent discrepancy between the ingested data reported by the **Usage** data type and a query summing **_BilledSize** over the raw data for a particular day specified by **TimeGenerated**, the timestamp when the event was generated.
azure-portal Azure Portal Add Remove Sort Favorites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-add-remove-sort-favorites.md
Title: Add, remove, and arrange favorites in Azure portal description: Learn how to add or remove items from the favorites list and rearrange the order of items keywords: favorites,portal Previously updated : 03/16/2021 Last updated : 02/17/2022 # Add, remove, and rearrange favorites
-Add or remove items from your **Favorites** list so that you can quickly go to the services you use most often. We already added some common services to your **Favorites** list, but youΓÇÖll likely want to customize it. You're the only one who sees the changes you make to **Favorites**.
+Add or remove items from your **Favorites** list in the Azure portal so that you can quickly go to the services you use most often. We've already added some common services to your **Favorites** list, but you'll likely want to customize it. You're the only one who sees the changes you make to **Favorites**.
## Add a favorite Items that are listed under **Favorites** are selected from **All services**. Hover over a service name to display information and resources related to the service. A filled star icon ![Filled star icon](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-graystar.png) next to the service name indicates that the item appears on the **Favorites** list. Select the star icon to add a service to the **Favorites** list.
-### Add Cost Management + Billing to Favorites
+In this example, we'll add Cost Management + Billing to the **Favorites** list.
1. Select **All services** from the Azure portal menu.
- ![Screenshot showing All services selected](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-new-all-services.png)
+ :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-new-all-services.png" alt-text="Screenshot showing All services in the Azure portal menu.":::
1. Enter the word "cost" in the search field. Services that have "cost" in the title or that have "cost" as a keyword are shown.
- ![Screenshot showing search in All services](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-find-service.png)
+ :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-find-service.png" alt-text="Screenshot showing a search in All services in the Azure portal.":::
1. Hover over the service name to display the **Cost Management + Billing** information card. Select the star icon.
- ![Screenshot showing star next to cost management + billing selected](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-add.png)
+ :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-add.png" alt-text="Screenshot showing the star icon to add a service to Favorites in the Azure portal.":::
1. **Cost Management + Billing** is now added as the last item in your **Favorites** list.
You can now remove an item directly from the **Favorites** list.
1. In the **Favorites** section of the portal menu, hover over the name of the service you want to remove.
- ![Screenshot showing hover behavior in Favorites](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-remove.png)
+ :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-remove.png" alt-text="Screenshot showing how to remove a service from Favorites in the Azure portal.":::
2. On the information card, select the star so that it changes from filled to unfilled. The service is removed from the **Favorites** list. ## Rearrange favorites
-You can change the order that your favorite services are listed. Just drag and drop the menu item to another location under **Favorites**.
-
-### Move Cost Management + Billing to the top of Favorites
-
-1. Select and hold the **Cost Management + Billing** entry on the **Favorites** list.
-
- ![Screenshot showing cost management + billing selected](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-sort.png)
-
-1. While continuing to hold, drag the item to the top of **Favorites** and then release.
+You can change the order in which your favorite services are listed. Just select an item, then drag and drop it to another location under **Favorites**.
## Next steps
-* To create a project-focused workspace, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md)
-* Discover more how-to's in the [Azure portal how-to video series](https://www.youtube.com/playlist?list=PLLasX02E8BPBKgXP4oflOL29TtqTzwhxR)
+- To create a project-focused workspace, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
+- Explore the [Azure portal how-to video series](https://www.youtube.com/playlist?list=PLLasX02E8BPBKgXP4oflOL29TtqTzwhxR).
azure-resource-manager Deploy To Azure Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-azure-button.md
Title: Deploy to Azure button
-description: Use button to deploy Azure Resource Manager templates from a GitHub repository.
+description: Use button to deploy remote Azure Resource Manager templates.
Previously updated : 12/03/2021 Last updated : 02/15/2022
-# Use a deployment button to deploy templates from GitHub repository
+# Use a deployment button to deploy remote templates
-This article describes how to use the **Deploy to Azure** button to deploy ARM JSON templates from a GitHub repository. You can add the button directly to the _README.md_ file in your GitHub repository. Or, you can add the button to a web page that references the repository. This method doesn't support [Bicep files](../bicep/overview.md).
+This article describes how to use the **Deploy to Azure** button to deploy remote ARM JSON templates from a GitHub repository or an Azure storage account. You can add the button directly to the _README.md_ file in your GitHub repository. Or, you can add the button to a web page that references the repository. This method doesn't support deploying remote [Bicep files](../bicep/overview.md).
The deployment scope is determined by the template schema. For more information, see:
The image appears as:
## Create URL for deploying template
-To create the URL for your template, start with the raw URL to the template in your repo. To see the raw URL, select **Raw**.
+This section shows how to get the URLs for templates stored in GitHub or an Azure storage account, and how to format those URLs.
+
+### Template stored in GitHub
+
+To create the URL for your template, start with the raw URL to the template in your GitHub repo. To see the raw URL, select **Raw**.
:::image type="content" source="./media/deploy-to-azure-button/select-raw.png" alt-text="select Raw":::
The format of the URL is:
https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json
```
-Then, convert the URL to a URL-encoded value. You can use an online encoder or run a command. The following PowerShell example shows how to URL encode a value.
+
+If you're using [Git with Azure Repos](/azure/devops/repos/git/) instead of a GitHub repo, you can still use the **Deploy to Azure** button. Make sure your repo is public. Use the [Items operation](/rest/api/azure/devops/git/items/get) to get the template. Your request should be in the following format:
+
+```http
+https://dev.azure.com/{organization-name}/{project-name}/_apis/git/repositories/{repository-name}/items?scopePath={url-encoded-path}&api-version=6.0
+```
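As an illustrative sketch (the organization, project, and repository names below are hypothetical), the request URL with its URL-encoded `scopePath` can be assembled like this:

```python
from urllib.parse import quote

def items_api_url(org: str, project: str, repo: str, path: str) -> str:
    # scopePath must be URL-encoded, e.g. "/azuredeploy.json" -> "%2Fazuredeploy.json"
    encoded_path = quote(path, safe="")
    return (f"https://dev.azure.com/{org}/{project}/_apis/git/"
            f"repositories/{repo}/items?scopePath={encoded_path}&api-version=6.0")

url = items_api_url("fabrikam", "deploy-project", "templates", "/azuredeploy.json")
```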
+
+### Template stored in Azure storage account
+
+The format of the URLs for the templates stored in a public container is:
+
+```html
+https://{storage-account-name}.blob.core.windows.net/{container-name}/{template-file-name}
+```
+
+For example:
+
+```html
+https://demostorage0215.blob.core.windows.net/democontainer/azuredeploy.json
+```
+
+You can secure the template with a SAS token. For more information, see [How to deploy private ARM template with SAS token](./secure-template-with-sas-token.md). The following URL is an example with a SAS token:
+
+```html
+https://demostorage0215.blob.core.windows.net/privatecontainer/azuredeploy.json?sv=2019-07-07&sr=b&sig=rnI8%2FvKoCHmvmP7XvfspfyzdHjtN4GPsSqB8qMI9FAo%3D&se=2022-02-16T17%3A47%3A46Z&sp=r
+```
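Putting the pieces together, a small Python sketch (the helper itself is hypothetical; the account and container names are taken from the example above) shows how the URL is composed, with an optional SAS token appended as the query string:

```python
def blob_template_url(account: str, container: str, file_name: str, sas_token: str = "") -> str:
    # Base form: https://{storage-account-name}.blob.core.windows.net/{container-name}/{template-file-name}
    url = f"https://{account}.blob.core.windows.net/{container}/{file_name}"
    # For a private container, the SAS token rides along as the query string.
    return f"{url}?{sas_token}" if sas_token else url

public_url = blob_template_url("demostorage0215", "democontainer", "azuredeploy.json")
```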
+
+### Format the URL
+
+Once you have the URL, convert it to a URL-encoded value. You can use an online encoder or run a command. The following PowerShell example shows how to URL-encode a value.
```powershell
$url = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json"
https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.github
You have your full URL for the link.
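The encode-and-concatenate steps above can also be sketched in Python (the article's own example uses PowerShell; `urllib.parse.quote` plays the role of the online encoder):

```python
from urllib.parse import quote

raw_url = ("https://raw.githubusercontent.com/Azure/azure-quickstart-templates/"
           "master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json")

# Percent-encode every reserved character, including ':' and '/', so the
# template URL can ride inside the portal's template-deployment URI.
encoded = quote(raw_url, safe="")
deploy_link = "https://portal.azure.com/#create/Microsoft.Template/uri/" + encoded
```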
-If you're using [Git with Azure Repos](/azure/devops/repos/git/) instead of a GitHub repo, you can still use the **Deploy to Azure** button. Make sure your repo is public. Use the [Items operation](/rest/api/azure/devops/git/items/get) to get the template. Your request should be in the following format:
-
-```http
-https://dev.azure.com/{organization-name}/{project-name}/_apis/git/repositories/{repository-name}/items?scopePath={url-encoded-path}&api-version=6.0
-```
-
-Encode this request URL.
- ## Create Deploy to Azure button Finally, put the link and image together.
azure-signalr Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-custom-domain.md
+
+ Title: Configure a custom domain for Azure SignalR Service
+
+description: How to configure a custom domain for Azure SignalR Service
Last updated : 02/15/2022
+# Configure a custom domain for Azure SignalR Service
+
+In addition to the default domain provided by Azure SignalR Service, you can also add custom domains.
+
+## Prerequisites
+
+* The resource must be in the Premium tier
+* A custom certificate matching the custom domain is stored in Azure Key Vault
+
+## Add a custom certificate
+
+Before you can add a custom domain, you need to add a matching custom certificate first. A custom certificate is a sub resource of your Azure SignalR Service. It references a certificate in your Azure Key Vault. For security and compliance reasons, Azure SignalR Service doesn't permanently store your certificate. Instead, it fetches it from your Key Vault on the fly and keeps it in memory.
+
+### Step 1: Grant your Azure SignalR Service resource access to Key Vault
+
+Azure SignalR Service uses Managed Identity to access your Key Vault. In order to authorize, it needs to be granted permissions.
+
+1. In the Azure portal, go to your Azure SignalR Service resource.
+1. In the menu pane, select **Identity**.
+1. Turn on either **System assigned** or **User assigned** identity. Click **Save**.
+
+ :::image type="content" alt-text="Screenshot of enabling managed identity." source="media\howto-custom-domain\portal-identity.png" :::
+
+1. Go to your Key Vault resource.
+1. In the menu pane, select **Access configuration**. Click **Go to access policies**.
+1. Click **Create**. Select **Secret Get** permission and **Certificate Get** permission. Click **Next**.
+
+ :::image type="content" alt-text="Screenshot of permissions selection in Key Vault." source="media\howto-custom-domain\portal-key-vault-permissions.png" :::
+
+1. Search for the Azure SignalR Service resource name or the user assigned identity name. Click **Next**.
+
+ :::image type="content" alt-text="Screenshot of principal selection in Key Vault." source="media\howto-custom-domain\portal-key-vault-principal.png" :::
+
+1. Skip **Application (optional)**. Click **Next**.
+1. In **Review + create**, click **Create**.
+
+### Step 2: Create a custom certificate
+
+1. In the Azure portal, go to your Azure SignalR Service resource.
+1. In the menu pane, select **Custom domain**.
+1. Under **Custom certificate**, click **Add**.
+
+ :::image type="content" alt-text="Screenshot of custom certificate management." source="media\howto-custom-domain\portal-custom-certificate-management.png" :::
+
+1. Fill in a name for the custom certificate.
+1. Click **Select from your Key Vault** to choose a Key Vault certificate. After you make a selection, the **Key Vault Base URI** and **Key Vault Secret Name** fields are filled in automatically. Alternatively, you can fill in these fields manually.
+1. Optionally, you can specify a **Key Vault Secret Version** if you want to pin the certificate to a specific version.
+1. Click **Add**.
+
+ :::image type="content" alt-text="Screenshot of adding a custom certificate." source="media\howto-custom-domain\portal-custom-certificate-add.png" :::
+
+Azure SignalR Service will then fetch the certificate and validate its content. If validation succeeds, the **Provisioning State** will be **Succeeded**.
+
+ :::image type="content" alt-text="Screenshot of an added custom certificate." source="media\howto-custom-domain\portal-custom-certificate-added.png" :::
+
+## Create a custom domain CNAME
+
+To validate the ownership of your custom domain, you need to create a CNAME record for the custom domain and point it to the default domain of Azure SignalR Service.
+
+For example, if your default domain is `contoso.service.signalr.net`, and your custom domain is `contoso.example.com`, you need to create a CNAME record on `example.com` like:
+
+```
+contoso.example.com. 0 IN CNAME contoso.service.signalr.net.
+```
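As a quick sanity check (a sketch for illustration, not part of the service's tooling), you can parse a zone-file line like the one above and confirm the CNAME target is your SignalR default domain:

```python
def cname_target(record_line: str) -> str:
    # A zone-file CNAME record looks like:
    #   contoso.example.com. 0 IN CNAME contoso.service.signalr.net.
    fields = record_line.split()
    if "CNAME" not in fields:
        raise ValueError("not a CNAME record")
    # The target follows the CNAME keyword; strip the trailing root dot.
    return fields[fields.index("CNAME") + 1].rstrip(".")

record = "contoso.example.com. 0 IN CNAME contoso.service.signalr.net."
target = cname_target(record)
```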
+
+If you're using Azure DNS Zone, see [manage DNS records](../dns/dns-operations-recordsets-portal.md) for how to add a CNAME record.
+
+ :::image type="content" alt-text="Screenshot of adding a CNAME record in Azure DNS Zone." source="media\howto-custom-domain\portal-dns-cname.png" :::
+
+If you're using another DNS provider, follow your provider's guide to create a CNAME record.
+
+## Add a custom domain
+
+A custom domain is another sub resource of your Azure SignalR Service. It contains all configurations for a custom domain.
+
+1. In the Azure portal, go to your Azure SignalR Service resource.
+1. In the menu pane, select **Custom domain**.
+1. Under **Custom domain**, click **Add**.
+
+ :::image type="content" alt-text="Screenshot of custom domain management." source="media\howto-custom-domain\portal-custom-domain-management.png" :::
+
+1. Fill in a name for the custom domain. It's the sub resource name.
+1. Fill in the domain name. It's the full domain name of your custom domain, for example, `contoso.com`.
+1. Select a custom certificate that applies to this custom domain.
+1. Click **Add**.
+
+ :::image type="content" alt-text="Screenshot of adding a custom domain." source="media\howto-custom-domain\portal-custom-domain-add.png" :::
+
+## Verify a custom domain
+
+You can now access your Azure SignalR Service endpoint via the custom domain. To verify it, you can access the health API.
+
+Here's an example using cURL:
+
+#### [PowerShell](#tab/azure-powershell)
+
+```powershell
+PS C:\> curl.exe -v https://contoso.example.com/api/health
+...
+> GET /api/health HTTP/1.1
+> Host: contoso.example.com
+
+< HTTP/1.1 200 OK
+...
+PS C:\>
+```
+
+#### [Bash](#tab/azure-bash)
+
+```bash
+$ curl -vvv https://contoso.example.com/api/health
+...
+* SSL certificate verify ok.
+...
+> GET /api/health HTTP/2
+> Host: contoso.example.com
+...
+< HTTP/2 200
+...
+```
+
+---
+
+It should return `200` status code without any certificate error.
+
+## Next steps
+
++ [How to enable managed identity for Azure SignalR Service](howto-use-managed-identity.md)
++ [Get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md)
++ [What is Azure DNS](../dns/dns-overview.md)
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
Last updated 11/13/2019
+ms.devlang: csharp
# Azure SignalR Service authentication
azure-sql Always Encrypted Azure Key Vault Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-azure-key-vault-configure.md
azure-sql Application Authentication Get Client Id Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/application-authentication-get-client-id-keys.md
azure-sql Auto Failover Group Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-configure.md
azure-sql Connect Query Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-java.md
+ms.devlang: java
Last updated 06/26/2020
azure-sql Database Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-copy.md
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-import.md
sqlpackage.exe /a:Import /sf:testExport.bacpac /tdn:NewDacFX /tsn:apptestserver.
> [A SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) does not currently support migrating a database into an instance database from a BACPAC file using Azure PowerShell. To import into a SQL Managed Instance, use SQL Server Management Studio or SQLPackage.

> [!NOTE]
-> The machines processing import/export requests submitted through portal or Powershell need to store the bacpac file as well as temporary files generated by Data-Tier Application Framework (DacFX). The disk space required varies significantly among DBs with same size and can take up to 3 times of the database size. Machines running the import/export request only have 450GB local disk space. As result, some requests may fail with "There is not enough space on the disk" error. In this case, the workaround is to run sqlpackage.exe on a machine with enough local disk space. When importing/exporting databases larger than 150GB, use SqlPackage to avoid this issue.
+> The machines processing import/export requests submitted through the portal or PowerShell need to store the bacpac file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required varies significantly among databases of the same size and can be up to three times the database size. Machines running the import/export request only have 450 GB of local disk space. As a result, some requests may fail with the error "There is not enough space on the disk". In this case, the workaround is to run sqlpackage.exe on a machine with enough local disk space. When importing/exporting databases larger than 150 GB, use SqlPackage to avoid this issue.
# [PowerShell](#tab/azure-powershell)
az sql db import --resource-group "<resourceGroup>" --server "<server>" --name "
## Cancel the import request Use the [Database Operations - Cancel API](/rest/api/sql/databaseoperations/cancel)
-or the Powershell [Stop-AzSqlDatabaseActivity command](/powershell/module/az.sql/Stop-AzSqlDatabaseActivity), here an example of powershell command.
+or the PowerShell [Stop-AzSqlDatabaseActivity command](/powershell/module/az.sql/Stop-AzSqlDatabaseActivity). Here's an example PowerShell command:
```powershell
Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId
```
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
azure-sql Ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/ledger-overview.md
Ledger provides a solution for these networks. Participants can verify the integ
### Trusted off-chain storage for blockchain
-When a blockchain network is necessary for a multiple-party business process, the ability query the data on the blockchain without sacrificing performance is a challenge.
+When a blockchain network is necessary for a multiple-party business process, the ability to query the data on the blockchain without sacrificing performance is a challenge.
Typical patterns for solving this problem involve replicating data from the blockchain to an off-chain store, such as a database. But after the data is replicated to the database from the blockchain, the data integrity guarantees that a blockchain offers are lost. Ledger provides data integrity for off-chain storage of blockchain networks, which helps ensure complete data trust through the entire system.
azure-sql Logical Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/logical-servers.md
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/long-term-backup-retention-configure.md
azure-sql Single Database Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/single-database-manage.md
azure-sql Threat Detection Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/threat-detection-configure.md
Previously updated : 12/01/2020 Last updated : 02/16/2022

# Configure Advanced Threat Protection for Azure SQL Database

[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
You can receive notifications about the detected threats via [email notification
## Set up Advanced Threat Protection in the Azure portal 1. Sign into the [Azure portal](https://portal.azure.com).
-2. Navigate to the configuration page of the server you want to protect. In the security settings, select **Defender for Cloud**.
-3. On the **Microsoft Defender for SQL** configuration page:
+2. Navigate to the configuration page of the [server](logical-servers.md) you want to protect. In the security settings, select **Microsoft Defender for Cloud**.
+3. On the **Microsoft Defender for Cloud** configuration page:
- - Enable **Microsoft Defender for SQL** on the server.
- - In **Advanced Threat Protection Settings**, provide the list of emails to receive security alerts upon detection of anomalous database activities in the **Send alerts to** text box.
+ 1. If Microsoft Defender for SQL hasn't yet been enabled, select **Enable Microsoft Defender for SQL**.
- :::image type="content" source="media/azure-defender-for-sql/set-up-advanced-threat-protection.png" alt-text="set up advanced threat protection":::
-
+ 1. Select **Configure**.
+
+ :::image type="content" source="media/azure-defender-for-sql/enable-microsoft-defender-sql.png" alt-text="Enable Microsoft Defender for SQL." lightbox="media/azure-defender-for-sql/enable-microsoft-defender-sql.png":::
+
+ 1. Under **ADVANCED THREAT PROTECTION SETTINGS**, select **Add your contact details to the subscription's email settings in Defender for Cloud**.
+
+ :::image type="content" source="media/azure-defender-for-sql/advanced-threat-protection-add-contact-details.png" alt-text="Select link to proceed to advanced threat protection settings." lightbox="media/azure-defender-for-sql/advanced-threat-protection-add-contact-details.png":::
+
+ 1. Provide the list of emails to receive notifications upon detection of anomalous database activities in the **Additional email addresses (separated by commas)** text box.
+ 1. Optionally customize the severity of alerts that will trigger notifications to be sent under **Notification types**.
+ 1. Select **Save**.
+
+ :::image type="content" source="media/azure-defender-for-sql/advanced-threat-protection-configure-emails.png" alt-text="Enter emails for Advanced Threat Protection notifications." lightbox="media/azure-defender-for-sql/advanced-threat-protection-configure-emails.png":::
+
## Set up Advanced Threat Protection using PowerShell

For a script example, see [Configure auditing and Advanced Threat Protection using PowerShell](scripts/auditing-threat-detection-powershell-configure.md).

## Next steps

-- Learn more about [Advanced Threat Protection](threat-detection-overview.md).
-- Learn more about [Advanced Threat Protection in SQL Managed Instance](../managed-instance/threat-detection-configure.md).
-- Learn more about [Microsoft Defender for SQL](azure-defender-for-sql.md).
-- Learn more about [auditing](../../azure-sql/database/auditing-overview.md)
-- Learn more about [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)
+Learn more about Advanced Threat Protection and Microsoft Defender for SQL in the following articles:
+
+- [Advanced Threat Protection](threat-detection-overview.md)
+- [Advanced Threat Protection in SQL Managed Instance](../managed-instance/threat-detection-configure.md)
+- [Microsoft Defender for SQL](azure-defender-for-sql.md)
+- [Auditing for Azure SQL Database and Azure Synapse Analytics](auditing-overview.md)
+- [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)
- For more information on pricing, see the [SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/)
azure-sql Transparent Data Encryption Byok Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/transparent-data-encryption-byok-configure.md
azure-sql Api References Create Manage Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/api-references-create-manage-instance.md
description: Learn about creating and configuring managed instances of Azure SQL
azure-sql Failover Group Add Instance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/failover-group-add-instance-tutorial.md
Add managed instances of Azure SQL Managed Instance to a failover group. In this
> - Create a secondary managed instance as part of a [failover group](../database/auto-failover-group-overview.md). > - Test failover.
+ There are multiple ways to establish connectivity between managed instances in different virtual networks, including:
+ * [Azure ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md)
+ * [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md)
+ * VPN gateways
+
+This tutorial provides steps for creating and connecting VPN gateways. If you prefer to use ExpressRoute or VNet peering, replace the gateway steps accordingly, or skip ahead to [Step 7](#create-a-failover-group) if you already have ExpressRoute or global VNet peering configured.
++ > [!NOTE] > - When going through this tutorial, ensure you are configuring your resources with the [prerequisites for setting up failover groups for SQL Managed Instance](../database/auto-failover-group-overview.md#enabling-geo-replication-between-managed-instances-and-their-vnets).
- > - Creating a managed instance can take a significant amount of time. As a result, this tutorial could take several hours to complete. For more information on provisioning times, see [SQL Managed Instance management operations](sql-managed-instance-paas-overview.md#management-operations).
- > - Managed instances participating in a failover group require [Azure ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md), global VNet peering, or two connected VPN gateways. This tutorial provides steps for creating and connecting the VPN gateways. Skip these steps if you already have ExpressRoute configured.
-
+ > - Creating a managed instance can take a significant amount of time. As a result, this tutorial may take several hours to complete. For more information on provisioning times, see [SQL Managed Instance management operations](sql-managed-instance-paas-overview.md#management-operations).
## Prerequisites
This portion of the tutorial uses the following PowerShell cmdlets:
## Create a primary gateway
-For two managed instances to participate in a failover group, there must be either ExpressRoute or a gateway configured between the virtual networks of the two managed instances to allow network communication. If you choose to configure [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md) instead of connecting two VPN gateways, skip ahead to [Step 7](#create-a-failover-group).
-
-This article provides steps to create the two VPN gateways and connect them, but you can skip ahead to creating the failover group if you have configured ExpressRoute instead.
- > [!NOTE] > The SKU of the gateway affects throughput performance. This tutorial deploys a gateway with the most basic SKU (`HwGw1`). Deploy a higher SKU (example: `VpnGw3`) to achieve higher throughput. For all available options, see [Gateway SKUs](../../vpn-gateway/vpn-gateway-about-vpngateways.md#benchmark)
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/long-term-backup-retention-configure.md
backup Backup Azure Afs Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-afs-automation.md
This article explains how to:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] > [!NOTE]
-> Azure Powershell currently doesn't support backup policies with hourly schedule. Please use Azure Portal to leverage this feature. [Learn more](manage-afs-backup.md#create-a-new-policy)
+> Azure PowerShell currently doesn't support backup policies with an hourly schedule. Use the Azure portal to leverage this feature. [Learn more](manage-afs-backup.md#create-a-new-policy)
Set up PowerShell as follows:
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
If you don't have permissions, you can [restore a disk](#restore-disks), and the
As one of the [restore options](#restore-options), you can create a VM quickly with basic settings from a restore point.
-1. In **Restore Virtual Machine** > **Create new** > **Restore Type**, select **Create a virtual machine**.
+1. In **Restore Virtual Machine** > **Create new** > **Restore Type**, select **Create new virtual machine**.
1. In **Virtual machine name**, specify a VM that doesn't exist in the subscription. 1. In **Resource group**, select an existing resource group for the new VM, or create a new one with a globally unique name. If you assign a name that already exists, Azure assigns the group the same name as the VM. 1. In **Virtual network**, select the VNet in which the VM will be placed. All VNets associated with the subscription in the same location as the vault, which is active and not attached with any affinity group, are displayed. Select the subnet.
As one of the [restore options](#restore-options), you can create a disk from a
- [Attach restored disks](../virtual-machines/windows/attach-managed-disk-portal.md) to an existing VM. - [Create a new VM](./backup-azure-vms-automation.md#create-a-vm-from-restored-disks) from the restored disks using PowerShell.
-1. In **Restore configuration** > **Create new** > **Restore Type**, select **Restore disks**.
+1. In **Restore configuration** > **Create new** > **Restore Type**, select **Restore disks**.
1. In **Resource group**, select an existing resource group for the restored disks, or create a new one with a globally unique name. 1. In **Staging location**, specify the storage account to which to copy the VHDs. [Learn more](#storage-accounts).
When your virtual machine uses managed disks and you select the **Create virtual
While you restore disks for a Managed VM from a Vault-Standard recovery point, it restores the Managed disk and Azure Resource Manager (ARM) templates, along with the VHD files of the disks in staging location. If you restore disks from an Instant recovery point, it restores the Managed disks and ARM templates only. >[!Note]
->For restoring disk from a Vault-Standard recovery point that is/was greater than 4 TB, Azure Backup doesn't restore the VHD files.
+>- For restoring disk from a Vault-Standard recovery point that is/was greater than 4 TB, Azure Backup doesn't restore the VHD files.
+>- For information on managed/premium disk performance after restored via Azure Backup, see the [Latency](../virtual-machines/premium-storage-performance.md#latency) section.
### Use templates to customize a restored VM
There are a few things to note after restoring a VM:
## Next steps

- If you experience difficulties during the restore process, [review](backup-azure-vms-troubleshoot.md#restore) common issues and errors.
-- After the VM is restored, learn about [managing virtual machines](backup-azure-manage-vms.md)
+- After the VM is restored, learn about [managing virtual machines](backup-azure-manage-vms.md)
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
To restore files or folders from the recovery point, go to the virtual machine a
## Step 2: Ensure the machine meets the requirements before executing the script
-After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you are planning to execute the script, should not have any of the following unsupported configurations. **If it does, then choose an alternate machine preferably from the same region that meets the requirements**.
+After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you're planning to execute the script shouldn't have any of the following unsupported configurations. **If it does, choose an alternate machine that meets the requirements**.
### Dynamic disks
In Linux, the OS of the computer used to restore files must support the file sys
| SLES | 12 and above |
| openSUSE | 42.2 and above |
-> [!NOTE]
-> We've found some issues in running the file recovery script on machines with SLES 12 SP4 OS and we're investigating with the SLES team.
-> Currently, running the file recovery script is working on machines with SLES 12 SP2 and SP3 OS versions.
->
The script also requires Python and bash components to execute and connect securely to the recovery point.

|Component | Version |
If the file recovery process hangs after you run the file-restore script (for ex
![Registry key changes](media/backup-azure-restore-files-from-vm/iscsi-reg-key-changes.png)

```registry
-- HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue – change this from 60 to 1200 secs.
-- HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters\SrbTimeoutDelta – change this from 15 to 1200 secs.
+- HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue – change this from 60 to 2400 secs.
+- HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters\SrbTimeoutDelta – change this from 15 to 2400 secs.
- HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters\EnableNOPOut – change this from 0 to 1
-- HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters\MaxRequestHoldTime - change this from 60 to 1200 secs.
+- HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters\MaxRequestHoldTime - change this from 60 to 2400 secs.
```

### For Linux
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-automation.md
Azure Backup can restore SQL Server databases that are running on Azure VMs as f
Check the prerequisites mentioned [here](restore-sql-database-azure-vm.md#restore-prerequisites) before restoring SQL DBs. > [!WARNING]
-> Due to a security issue related to RBAC, we had to introduce a breaking change in the restore commands for SQL DB via Powershell. Please upgrade to Az 6.0.0 version or above for the proper restore commands to be submitted via Powershell. The latest PS commands are provided below.
+> Due to a security issue related to RBAC, we had to introduce a breaking change in the restore commands for SQL DB via PowerShell. Please upgrade to Az 6.0.0 version or above for the proper restore commands to be submitted via PowerShell. The latest PS commands are provided below.
First fetch the relevant backed up SQL DB using the [Get-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupitem) PowerShell cmdlet.
PointInTime : 1/1/0001 12:00:00 AM
#### Alternate workload restore to a vault in secondary region

> [!IMPORTANT]
-> Support for secondary region restores for SQL from Powershell is available from Az 6.0.0
+> Support for secondary region restores for SQL from PowerShell is available from Az 6.0.0
If you have enabled cross region restore, then the recovery points will be replicated to the secondary, paired region as well. Then, you can fetch those recovery points and trigger a restore to a machine, present in that paired region. As with the normal restore, the target machine should be registered to the target vault in the secondary region. The following sequence of steps should clarify the end-to-end process.
backup Restore Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-ps.md
Title: Restore Azure blobs via Azure Powershell
-description: Learn how to restore Azure blobs to any point-in-time using Azure Powershell.
+ Title: Restore Azure blobs via Azure PowerShell
+description: Learn how to restore Azure blobs to any point-in-time using Azure PowerShell.
Last updated 05/05/2021
-# Restore Azure blobs to point-in-time using Azure Powershell
+# Restore Azure blobs to point-in-time using Azure PowerShell
This article describes how to restore [blobs](blob-backup-overview.md) to any point-in-time using Azure Backup.
batch Batch Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-customer-managed-key.md
Title: Configure customer-managed keys for your Azure Batch account with Azure K
description: Learn how to encrypt Batch data using customer-managed keys. Last updated 02/11/2021
+ms.devlang: csharp
batch Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/disk-encryption.md
Title: Create a pool with disk encryption enabled
description: Learn how to use disk encryption configuration to encrypt nodes with a platform-managed key. Last updated 04/16/2021
+ms.devlang: csharp
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
## Deploy a Cloud Service (extended support)

> [!NOTE]
-> An easier and faster way of generating your ARM template and parameter file is via the [Azure portal](https://portal.azure.com). You can [download the generated ARM template](generate-template-portal.md) via the portal to create your Cloud Service via Powershell
+> An easier and faster way of generating your ARM template and parameter file is via the [Azure portal](https://portal.azure.com). You can [download the generated ARM template](generate-template-portal.md) via the portal to create your Cloud Service via PowerShell.
1. Create virtual network. The name of the virtual network must match the references in the Service Configuration (.cscfg) file. If using an existing virtual network, omit this section from the ARM template.
cloud-services-extended-support Post Migration Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/post-migration-changes.md
Customers need to update their tooling and automation to start using the new API
## Changes to Certificate Management Post Migration
-As a standard practice to manage your certificates, all the valid .pfx certificate files should be added to certificate store in Key Vault and update would work perfectly fine via any client - Portal, Powershell or Rest API.
+As a standard practice to manage your certificates, all the valid .pfx certificate files should be added to the certificate store in Key Vault, and updates will work via any client: the portal, PowerShell, or the REST API.
Currently, the Azure portal validates that all the required certificates are uploaded to the certificate store in Key Vault, and warns if a certificate is not found. However, if you plan to use certificates as secrets, these certificates cannot be validated by thumbprint, and any update operation that involves adding secrets will fail via the portal. Use PowerShell or the REST API for updates involving secrets.
cloudfoundry Create Cloud Foundry On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/create-cloud-foundry-on-azure.md
editor: ruyakubu
ms.assetid: Last updated 09/13/2018 multiple
cognitive-services Luis Reference Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-regions.md
# Authoring and publishing regions and the associated keys
-LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app to more than one region, you need at least one key per region.
-
+LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app to more than one region, you need at least one prediction key per region.
<a name="luis-website"></a>
LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app t
Authoring regions are the regions where the application gets created and the training take place.
-LUIS has the following authoring regions available:
+LUIS has the following authoring regions available with [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md):
* Australia east
* West Europe
* West US
* Switzerland north
-LUIS has one portal you can use regardless of region, [www.luis.ai](https://www.luis.ai). You must still author and publish in the same region. Authoring regions have [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md).
+LUIS has one portal you can use regardless of region, [www.luis.ai](https://www.luis.ai).
<a name="regions-and-azure-resources"></a>

## Publishing regions and Azure resources
-Publishing regions are the regions where the application will be used in runtime. To use the application in a publishing region, you must create a resource in this region and publish your application to it.
+Publishing regions are the regions where the application will be used at runtime. To use the application in a publishing region, you must create a resource in this region and assign your application to it. For example, if you create an app with the *westus* authoring region and publish it to the *eastus* and *brazilsouth* regions, the app will run in those two regions.
-The app is published to all regions associated with the LUIS resources added in the LUIS portal. For example, for an app created on [www.luis.ai][www.luis.ai], if you create a LUIS or Cognitive Service resource in **westus** and [add it to the app as a resource](luis-how-to-azure-subscription.md), the app is published in that region.
- ## Public apps
-A public app is published in all regions so that a user with a region-based LUIS resource key can access the app in whichever region is associated with their resource key.
+A public app is published in all regions so that a user with a supported prediction resource can access the app in all regions.
<a name="publishing-regions"></a>

## Publishing regions are tied to authoring regions
-When you first create our LUIS application, you are required to choose an [authoring region](#luis-authoring-regions). To use the application in runtime, you are required to create a publishing region.
-
-The authoring region app can only be published to a corresponding publish region. If your app is currently in the wrong authoring region, export the app, and import it into the correct authoring region for your publishing region.
+When you first create your LUIS application, you are required to choose an [authoring region](#luis-authoring-regions). To use the application at runtime, you are required to create a resource in a publishing region.
-> [!NOTE]
-> LUIS apps created on https://www.luis.ai can now be published to all endpoints including the [European](#publishing-to-europe) and [Australian](#publishing-to-australia) regions.
+Your app can only be published to the publishing regions that correspond to its authoring region, which are listed in the tables below. If your app is currently in the wrong authoring region, export the app, and import it into the correct authoring region to match the required publishing region.
## Single data residency
-Regions that fall under single data residency are the regions where data do not leave the boundaries of the region.
-
-The following publishing regions do not have a failover region:
--
-* Brazil South
-* Southeast Asia
+Single data residency means that the data does not leave the boundaries of the region.
> [!Note]
-> * Make sure to set `log=false` for [V3 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a91e54c9db63d589f433) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the publishing region.
-> * If `log=true`, data is returned to the authoring region for active learning even if the publishing region is one of the single data residnecy regions.
-
+> * Make sure to set `log=false` for [V3 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a91e54c9db63d589f433) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the runtime region.
+> * If `log=true`, data is returned to the authoring region for active learning.
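As a concrete illustration of the note above, `log` is passed as a query parameter on the prediction request. A minimal Python sketch of building such a request URL, assuming the standard V3 prediction path and a `production` slot (the endpoint host and app ID are placeholders):

```python
from urllib.parse import urlencode

def luis_prediction_url(endpoint: str, app_id: str, query: str, log: bool = False) -> str:
    """Build a LUIS V3 prediction URL; keep log=False so utterances stay in the runtime region."""
    params = urlencode({"query": query, "log": str(log).lower()})
    return f"{endpoint}/luis/prediction/v3.0/apps/{app_id}/slots/production/predict?{params}"

# Example (placeholder endpoint and app ID):
# luis_prediction_url("https://brazilsouth.api.cognitive.microsoft.com", "<app-id>", "book a flight")
```

Because `log` defaults to `false` here, a caller must opt in explicitly to send data back to the authoring region for active learning.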
## Publishing to Europe
Learn more about the [authoring and prediction endpoints](developer-reference-re
## Failover regions
-Each region has a secondary region to fail over to. Europe fails over inside Europe and Australia fails over inside Australia.
+Each region has a secondary region to fail over to. Failover will only happen in the same geographical region.
-Publishing regions that fall under [single data residency](#single-data-residency) do not have a failover region.
+Authoring regions have [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md).
+The following publishing regions do not have a failover region:
-Authoring regions have [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md).
+* Brazil South
+* Southeast Asia
## Next steps
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/network-isolation.md
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "<resource group name>" -We
> [!div class="mx-imgBorder"] > [ ![Screenshot of access restriction rule with the addition of public IP address]( ../media/network-isolation/public-address.png) ]( ../media/network-isolation/public-address.png#lightbox)
+### Outbound access from App Service
+
+The QnA Maker App Service requires outbound access to the following endpoints. Make sure they're added to the allow list if there are any restrictions on outbound traffic.
+- https://qnamakerstore.blob.core.windows.net
+- https://qnamaker-data.trafficmanager.net
+- https://qnamakerconfigprovider.trafficmanager.net
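If you want to confirm these hosts are reachable from your environment, a plain TCP connection test on port 443 is often enough to spot a blocked route. A minimal sketch in Python (this checks network reachability only, not the service itself):

```python
import socket

# The outbound endpoints listed above.
QNA_MAKER_ENDPOINTS = [
    "qnamakerstore.blob.core.windows.net",
    "qnamaker-data.trafficmanager.net",
    "qnamakerconfigprovider.trafficmanager.net",
]

def can_reach(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: [h for h in QNA_MAKER_ENDPOINTS if not can_reach(h)] lists blocked hosts.
```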
++
### Configure App Service Environment to host QnA Maker App Service

The App Service Environment (ASE) can be used to host the QnA Maker App Service instance. Follow the steps below:
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
The following articles help you start using this feature:
* To get started with Custom Neural Voice and create a project, see [Get started with Custom Neural Voice](how-to-custom-voice.md). * To prepare and upload your audio data, see [Prepare training data](how-to-custom-voice-prepare-data.md).
-* To train and deploy your models, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
+* To train and deploy your models, see [Train your voice model](how-to-custom-voice-create-voice.md) and [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md).
## Terms and definitions
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Last updated 01/23/2022 +
-# Create and use your voice model
+# Train your voice model
In [Prepare training data](how-to-custom-voice-prepare-data.md), you learned about the different data types you can use to train a custom neural voice, and the different format requirements. After you've prepared your data and the voice talent verbal statement, you can start to upload them to [Speech Studio](https://aka.ms/custom-voice-portal). In this article, you learn how to train a custom neural voice through the Speech Studio portal.
To train a neural voice, you must create a voice talent profile with an audio fi
Upload this audio file to the Speech Studio as shown in the following screenshot. You create a voice talent profile, which is used to verify against your training data when you create a voice model. For more information, see [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
- :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Screenshot that shows the upload voice talent statement.":::
> [!NOTE] > Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext), and then [apply for access](https://aka.ms/customneural).
For more information, [learn more about the capabilities and limits of this feat
> [!NOTE] > Custom Neural Voice training is only available in the three regions: East US, Southeast Asia, and UK South. But you can easily copy a neural voice model from the three regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#text-to-speech).
-## Create and use a Custom Neural Voice endpoint
-
-After you've successfully created and tested your voice model, you deploy it in a custom text-to-speech endpoint. Use this endpoint instead of the usual endpoint when you're making text-to-speech requests through the REST API. The subscription that you've used to deploy the model is the only one that can call your custom endpoint.
-
-To create a Custom Neural Voice endpoint:
-
-1. On the **Deploy model** tab, select **Deploy model**.
-1. Enter a **Name** and **Description** for your custom endpoint.
-1. Select a voice model that you want to associate with this endpoint.
-1. Select **Deploy** to create your endpoint.
-
-In the endpoint table, you now see an entry for your new endpoint. It might take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
-
-You can suspend and resume your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL is retained, so you don't need to change your code in your apps.
-
-You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update.
-
-> [!NOTE]
->- Standard subscription (S0) users can create up to 50 endpoints, each with its own custom neural voice.
->- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same subscription to pass through the authentication of the text-to-speech service.
-
-After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code.
-
-The custom endpoint is functionally identical to the standard endpoint that's used for text-to-speech requests. For more information, see the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md).
-
-[Audio Content Creation](https://speech.microsoft.com/audiocontentcreation) is a tool that allows you to fine-tune audio output by using a friendly UI.
- ## Next steps
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
- [How to record voice samples](record-custom-voice-samples.md)
+- [Text-to-Speech API reference](rest-text-to-speech.md)
- [Long Audio API](long-audio-api.md)+
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
All audio files should be grouped into a zip file. Once your dataset is successf
## Next steps

-- [Create and use your voice model](how-to-custom-voice-create-voice.md)
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
- [How to record voice samples](record-custom-voice-samples.md)
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
If you're using the old version of Custom Voice (which is scheduled to be retire
## Next steps

-- [Prepare data for Custom Neural Voice](how-to-custom-voice-prepare-data.md)
-- [Train and deploy a custom neural voice](how-to-custom-voice-create-voice.md)
+- [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md)
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
- [How to record voice samples](record-custom-voice-samples.md)
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
+
+ Title: How to deploy and use voice model - Speech service
+
+description: Learn about how to deploy and use a custom neural voice model.
++++++ Last updated : 02/09/2022++++
+# Deploy and use your voice model
+
+After you've successfully created and trained your voice model, you deploy it to a custom neural voice endpoint. Use the custom neural voice endpoint instead of the usual text-to-speech endpoint when you make requests with the REST API. You create the endpoint in Speech Studio, and you can suspend or resume it in Speech Studio or via the REST API.
+
+## Create a custom neural voice endpoint
+
+To create a custom neural voice endpoint:
+
+1. On the **Deploy model** tab, select **Deploy model**.
+1. Enter a **Name** and **Description** for your custom endpoint.
+1. Select a voice model that you want to associate with this endpoint.
+1. Select **Deploy** to create your endpoint.
+
+In the endpoint table, you now see an entry for your new endpoint. It might take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
+
+You can suspend and resume your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL is retained, so you don't need to change your code in your apps.
+
+You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update.
+
+> [!NOTE]
+>- Standard subscription (S0) users can create up to 50 endpoints, each with its own custom neural voice.
+>- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same subscription to pass through the authentication of the text-to-speech service.
+
+After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code.
+
+The custom endpoint is functionally identical to the standard endpoint that's used for text-to-speech requests. For more information, see the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md).
+
+[Audio Content Creation](https://speech.microsoft.com/audiocontentcreation) is a tool that allows you to fine-tune audio output by using a friendly UI.
+
+## Copy your voice model to another project
+
+You can copy your voice model to another project for the same region or another region. For example, you can copy a neural voice model that was trained in one region, to a project for another region.
+
+> [!NOTE]
+> Custom neural voice training is only available in these regions: East US, Southeast Asia, and UK South. But you can copy a neural voice model from those regions to other regions. For more information, see the [regions for custom neural voice](regions.md#text-to-speech).
+
+To copy your custom neural voice model to another project:
+
+1. On the **Train model** tab, select a voice model that you want to copy, and then select **Copy to project**.
+
+ :::image type="content" source="media/custom-voice/cnv-model-copy.png" alt-text="Copy to project":::
+
+1. Select the **Region**, **Speech resource**, and **Project** where you want to copy the model. You must have a speech resource and project in the target region, otherwise you need to create them first.
+
+ :::image type="content" source="media/custom-voice/cnv-model-copy-dialog.png" alt-text="Copy voice model":::
+
+1. Select **Submit** to copy the model.
+1. Select **View model** under the notification message for copy success.
+1. On the **Train model** tab, select the newly copied model and then select **Deploy model**.
+
+## Suspend and resume an endpoint
+
+You can suspend or resume an endpoint, to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can use the same endpoint URL in your application to synthesize speech.
+
+You can suspend and resume an endpoint in Speech Studio or via the REST API.
+
+> [!NOTE]
+> The suspend operation will complete almost immediately. The resume operation completes in about the same amount of time as a new deployment.
+
+### Suspend and resume an endpoint in Speech Studio
+
+This section describes how to suspend or resume a custom neural voice endpoint in the Speech Studio portal.
+
+#### Suspend endpoint
+
+1. To suspend and deactivate your endpoint, select **Suspend** from the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
+
+ :::image type="content" source="media/custom-voice/cnv-endpoint-suspend.png" alt-text="Screenshot of the select suspend endpoint option":::
+
+1. In the dialog box that appears, select **Submit**. After the endpoint is suspended, Speech Studio will show the **Successfully suspended endpoint** notification.
+
+#### Resume endpoint
+
+1. To resume and activate your endpoint, select **Resume** from the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
+
+ :::image type="content" source="media/custom-voice/cnv-endpoint-resume.png" alt-text="Screenshot of the select resume endpoint option":::
+
+1. In the dialog box that appears, select **Submit**. After you successfully reactivate the endpoint, the status will change from **Suspended** to **Succeeded**.
+
+### Suspend and resume endpoint via REST API
+
+This section will show you how to [get](#get-endpoint), [suspend](#suspend-endpoint), or [resume](#resume-endpoint) a custom neural voice endpoint via REST API.
+
+#### Application settings
+
+The application settings that you use as REST API [request parameters](#request-parameters) are available on the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
++
+* The **Endpoint key** shows the subscription key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header.
+* The **Endpoint URL** shows your service region. Use the value that precedes `voice.speech.microsoft.com` as your service region request parameter. For example, use `eastus` if the endpoint URL is `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1`.
+* The **Endpoint URL** shows your endpoint ID. Use the value appended to the `?deploymentId=` query parameter as the value of your endpoint ID request parameter.
+* The Azure region the endpoint is associated with.
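The region and endpoint ID can also be extracted from the endpoint URL programmatically, following the URL format described above. A minimal Python sketch (the function name is illustrative):

```python
from urllib.parse import urlparse, parse_qs

def parse_endpoint_url(endpoint_url: str) -> tuple:
    """Split a custom neural voice endpoint URL into (region, deployment ID)."""
    parsed = urlparse(endpoint_url)
    region = parsed.hostname.split(".")[0]                      # e.g. "eastus"
    deployment_id = parse_qs(parsed.query)["deploymentId"][0]   # value of ?deploymentId=
    return region, deployment_id

# parse_endpoint_url("https://eastus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=<id>")
```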
+
+#### Get endpoint
+
+Get the endpoint by endpoint ID. The operation returns details about an endpoint such as model ID, project ID, and status.
+
+For example, you might want to track the status progression for [suspend](#suspend-endpoint) or [resume](#resume-endpoint) operations. Use the `status` property in the response payload to determine the status of the endpoint.
+
+The possible `status` property values are:
+
+| Status | Description |
+| - | |
+| `NotStarted` | The endpoint hasn't yet been deployed, and it's not available for speech synthesis. |
+| `Running` | The endpoint is in the process of being deployed or resumed, and it's not available for speech synthesis. |
+| `Succeeded` | The endpoint is active and available for speech synthesis. The endpoint has been deployed or the resume operation succeeded. |
+| `Failed` | The endpoint deploy or suspend operation failed. The endpoint can only be viewed or deleted in [Speech Studio](https://aka.ms/custom-voice-portal).|
+| `Disabling` | The endpoint is in the process of being suspended, and it's not available for speech synthesis. |
+| `Disabled` | The endpoint is inactive, and it's not available for speech synthesis. The suspend operation succeeded or the resume operation failed. |
+
+> [!Tip]
+> If the status is `Failed` or `Disabled`, check `properties.error` for a detailed error message. However, there won't be error details if the status is `Disabled` due to a successful suspend operation.
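To track a status progression such as `Succeeded` → `Disabling` → `Disabled`, you can poll the get endpoint operation until the target state, a failure state, or a timeout is reached. A minimal, API-agnostic polling sketch in Python (the caller supplies a function that returns the current `status` value; the helper name is illustrative):

```python
import time

def wait_for_status(get_status, target, failure_states=(), poll_seconds=10, timeout=600):
    """Poll get_status() until it returns target; raise on a failure state or timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status == target:
            return status
        if status in failure_states:
            raise RuntimeError(f"endpoint reached {status!r} instead of {target!r}")
        time.sleep(poll_seconds)
    raise TimeoutError(f"endpoint did not reach {target!r} within {timeout} seconds")

# Example: wait_for_status(fetch_status, "Disabled", failure_states=("Failed",))
# where fetch_status is a zero-argument function returning the current status string.
```

A suspend operation would wait for `Disabled`, and a resume operation for `Succeeded` with `Disabled` as a failure state.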
+
+##### Get endpoint example
+
+For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters).
+
+HTTP example:
+
+```HTTP
+GET api/texttospeech/v3.0/endpoints/<YourEndpointId> HTTP/1.1
+Ocp-Apim-Subscription-Key: YourSubscriptionKey
+Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com
+```
+
+cURL example:
+
+```Console
+curl -v -X GET "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey>"
+```
+
+Response header example:
+
+```
+Status code: 200 OK
+```
+
+Response body example:
+
+```json
+{
+ "model": {
+ "id": "a92aa4b5-30f5-40db-820c-d2d57353de44"
+ },
+ "project": {
+ "id": "ffc87aba-9f5f-4bfa-9923-b98186591a79"
+ },
+ "properties": {},
+ "status": "Succeeded",
+ "lastActionDateTime": "2019-01-07T11:36:07Z",
+ "id": "e7ffdf12-17c7-4421-9428-a7235931a653",
+ "createdDateTime": "2019-01-07T11:34:12Z",
+ "locale": "en-US",
+ "name": "Voice endpoint",
+ "description": "Example for voice endpoint"
+}
+```
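The same get endpoint call can be made from code. A minimal Python sketch using only the standard library, following the HTTP example above (the helper names are illustrative):

```python
import json
import urllib.request

def get_endpoint_url(region: str, endpoint_id: str) -> str:
    """Build the get endpoint URL for a service region and endpoint ID."""
    return (f"https://{region}.customvoice.api.speech.microsoft.com"
            f"/api/texttospeech/v3.0/endpoints/{endpoint_id}")

def get_endpoint(region: str, endpoint_id: str, subscription_key: str) -> dict:
    """Fetch endpoint details; the response includes model ID, project ID, and status."""
    req = urllib.request.Request(
        get_endpoint_url(region, endpoint_id),
        headers={"Ocp-Apim-Subscription-Key": subscription_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: get_endpoint("<YourServiceRegion>", "<YourEndpointId>", "<YourSubscriptionKey>")["status"]
```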
+
+#### Suspend endpoint
+
+You can suspend an endpoint to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can use the same endpoint URL in your application to synthesize speech.
+
+You suspend an endpoint with its unique deployment ID. The endpoint status must be `Succeeded` before you can suspend it.
+
+Use the [get endpoint](#get-endpoint) operation to poll and track the status progression from `Succeeded`, to `Disabling`, and finally to `Disabled`.
+
+##### Suspend endpoint example
+
+For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters).
+
+HTTP example:
+
+```HTTP
+POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend HTTP/1.1
+Ocp-Apim-Subscription-Key: YourSubscriptionKey
+Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com
+Content-Type: application/json
+Content-Length: 0
+```
+
+cURL example:
+
+```Console
+curl -v -X POST "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey>" -H "content-type: application/json" -H "content-length: 0"
+```
+
+Response header example:
+
+```
+Status code: 202 Accepted
+```
+
+For more information, see [response headers](#response-headers).
+
+#### Resume endpoint
+
+When you resume an endpoint, you can use the same endpoint URL that you used before it was suspended.
+
+You resume an endpoint with its unique deployment ID. The endpoint status must be `Disabled` before you can resume it.
+
+Use the [get endpoint](#get-endpoint) operation to poll and track the status progression from `Disabled`, to `Running`, and finally to `Succeeded`. If the resume operation failed, the endpoint status will be `Disabled`.
+
+##### Resume endpoint example
+
+For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters).
+
+HTTP example:
+
+```HTTP
+POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume HTTP/1.1
+Ocp-Apim-Subscription-Key: YourSubscriptionKey
+Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com
+Content-Type: application/json
+Content-Length: 0
+```
+
+cURL example:
+
+```Console
+curl -v -X POST "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey>" -H "content-type: application/json" -H "content-length: 0"
+```
+
+Response header example:
+```
+Status code: 202 Accepted
+```
+
+For more information, see [response headers](#response-headers).
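The suspend and resume operations share the same shape: a POST with an empty body to the endpoint's `/suspend` or `/resume` path. A minimal standard-library Python sketch covering both, following the HTTP examples above (the helper names are illustrative):

```python
import urllib.request

def set_endpoint_state_url(region: str, endpoint_id: str, action: str) -> str:
    """Build the suspend or resume URL for an endpoint."""
    if action not in ("suspend", "resume"):
        raise ValueError("action must be 'suspend' or 'resume'")
    return (f"https://{region}.customvoice.api.speech.microsoft.com"
            f"/api/texttospeech/v3.0/endpoints/{endpoint_id}/{action}")

def set_endpoint_state(region: str, endpoint_id: str, subscription_key: str, action: str) -> int:
    """POST the operation with an empty body; a 202 status means the request was accepted."""
    req = urllib.request.Request(
        set_endpoint_state_url(region, endpoint_id, action),
        data=b"",
        method="POST",
        headers={"Ocp-Apim-Subscription-Key": subscription_key,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```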
+
+#### Parameters and response codes
+
+##### Request parameters
+
+You use these request parameters with calls to the REST API. See [application settings](#application-settings) for information about where to get your region, endpoint ID, and subscription key in Speech Studio.
+
+| Name | Location | Required | Type | Description |
+| | | -- | | |
+| `YourServiceRegion` | Path | `True` | string | The Azure region the endpoint is associated with. |
+| `YourEndpointId` | Path | `True` | string | The identifier of the endpoint. |
+| `Ocp-Apim-Subscription-Key` | Header | `True` | string | The subscription key the endpoint is associated with. |
+
+##### Response headers
+
+Status code: 202 Accepted
+
+| Name | Type | Description |
+| - | | -- |
+| `Location` | string | The full URL of the endpoint; you can use it as the URL for the get endpoint operation. |
+| `Retry-After` | string | The recommended number of seconds to wait before retrying the get endpoint request. |
+
+##### HTTP status codes
+
+The HTTP status code for each response indicates success or common errors.
+
+| HTTP status code | Description | Possible reason |
+| - | -- | |
+| 200 | OK | The request was successful. |
+| 202 | Accepted | The request has been accepted and is being processed. |
+| 400 | Bad Request | The value of a parameter is invalid, or a required parameter is missing, empty, or null. One common issue is a header that is too long. |
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your subscription key or [token](rest-speech-to-text.md#authentication) is valid and in the correct region. |
+| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
+| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
+
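When handling these responses programmatically, it helps to separate transient failures from client errors. A minimal Python sketch based on the table above; which codes you treat as retryable is a policy choice in your application, not something the service mandates:

```python
# Sketch: classify HTTP status codes from the table above.
# 429 (rate limit) and 502 (transient gateway/network issue) are
# usually worth retrying; 400 and 401 indicate a request or
# credential problem that retrying will not fix.

def is_retryable(status: int) -> bool:
    return status in (429, 502)
```

A 401 should instead prompt you to check the subscription key and region, and a 400 to re-validate the request parameters and headers.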
+## Next steps
+
+- [How to record voice samples](record-custom-voice-samples.md)
+- [Text-to-Speech API reference](rest-text-to-speech.md)
+- [Long Audio API](long-audio-api.md)
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
Listen to each file carefully. At this stage, you can edit out small unwanted so
Convert each file to 16 bits and a sample rate of 24 kHz before saving, and if you recorded the studio chatter, remove the second channel. Save each file in WAV format, naming the files with the utterance number from your script.
-Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Create and use your voice model](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
+Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Train your voice model](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
Archive the original recordings in a safe place in case you need them later. Preserve your script and notes, too.
Archive the original recordings in a safe place in case you need them later. Pre
You're ready to upload your recordings and create your custom neural voice. > [!div class="nextstepaction"]
-> [Create and use your voice model](./how-to-custom-voice-create-voice.md)
+> [Train your voice model](./how-to-custom-voice-create-voice.md)
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following table has descriptions of each supported style.
|`style="disgruntled"`|Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt.| |`style="embarrassed"`|Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable.| |`style="empathetic"`|Expresses a sense of caring and understanding.|
+|`style="envious"`|Expresses a tone of admiration when you desire something that someone else has.|
|`style="fearful"`|Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tension and unease.| |`style="gentle"`|Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy.| |`style="lyrical"`|Expresses emotions in a melodic and sentimental way.| |`style="narration-professional"`|Expresses a professional, objective tone for content reading.|
+|`style="narration-relaxed"`|Expresses a soothing and melodious tone for content reading.|
|`style="newscast"`|Expresses a formal and professional tone for narrating news.| |`style="newscast-casual"`|Expresses a versatile and casual tone for general news delivery.| |`style="newscast-formal"`|Expresses a formal, confident, and authoritative tone for news delivery.|
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/fail-over.md
Use the following JSON in your request. Use the name of the model you want to dep
```json {
- "trainedModelLabel": "{MODEL-NAME}"
+ "trainedModelLabel": "{MODEL-NAME}",
+ "deploymentName": "{DEPLOYMENT-NAME}"
} ```
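A small Python sketch that builds this request body and serializes it; the two field names come from the JSON above, while `deployment_body` itself is only an illustrative helper name:

```python
import json

# Sketch: build the deployment request body shown above.
# The field names come from the documented JSON; the function
# name and sample values are illustrative.

def deployment_body(model_name: str, deployment_name: str) -> str:
    payload = {
        "trainedModelLabel": model_name,
        "deploymentName": deployment_name,
    }
    return json.dumps(payload)

body = deployment_body("my-model", "production")
# Send `body` as the JSON payload of the deployment request.
```

Both values must be quoted strings in the final JSON, which `json.dumps` guarantees.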
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/fail-over.md
Use the following JSON in your request. Use the name of the model you want to dep
```json {
- "trainedModelLabel": "{MODEL-NAME}"
+ "trainedModelLabel": "{MODEL-NAME}",
+ "deploymentName": "{DEPLOYMENT-NAME}"
} ```
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/fail-over.md
Use the following JSON in your request. Use the name of the model you want to dep
```json {
- "trainedModelLabel": "{MODEL-NAME}"
+ "trainedModelLabel": "{MODEL-NAME}",
+ "deploymentName": "{DEPLOYMENT-NAME}"
} ```
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
Previously updated : 11/19/2021 Last updated : 02/16/2022
Text Analytics for health is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language.
-Text Analytics for health extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
- This documentation contains the following types of articles: * [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service. * [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways. * The [**conceptual articles**](concepts/health-entity-categories.md) provide in-depth explanations of the service's functionality and features.
-> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
+## Text Analytics for health features
-## Features
+Text Analytics for health extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
[!INCLUDE [Text Analytics for health](includes/features.md)]
+> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
+
+## Get started with Text analytics for health
+
+To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are three ways to use Text Analytics for health:
-## Deploy on premises using Docker containers
-Use the available Docker container to [deploy this feature on-premises](how-to/use-containers.md). These docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons.
+|Development option |Description | Links |
+||||
+| Language Studio | A web-based platform that enables you to try Text Analytics for health without needing to write code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/healthAnalysis) <br> • [Quickstart: Use the Language studio](../language-studio.md) |
+| REST API or Client library (Azure SDK) | Integrate Text Analytics for health into your applications using the REST API, or the client library available in a variety of languages. | • [Quickstart: Use Text Analytics for health](quickstart.md) |
+| Docker container | Use the available Docker container to deploy this feature on-premises, letting you bring the service closer to your data for compliance, security, or other operational reasons. | • [How to deploy on-premises](how-to/use-containers.md) |
+
+## Input requirements and service limits
+
+* Text Analytics for health takes raw unstructured text for analysis. See the [data and service limits](how-to/call-api.md#data-limits) in the how-to guide for more information.
+* Text Analytics for health works with a variety of written languages. See [language support](language-support.md) for more information.
+
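Because the service enforces per-document size limits, long raw text may need to be split before submission. A minimal, hypothetical Python sketch; the 5,120-character default here is an assumption for illustration only, so check the data limits page for the actual values:

```python
# Sketch: split raw text into chunks under a per-document size limit,
# breaking on whitespace where possible. The max_chars default is
# illustrative; consult the service's data limits for real values.

def split_into_documents(text: str, max_chars: int = 5120) -> list:
    chunks = []
    while len(text) > max_chars:
        # Prefer to cut at the last space inside the window.
        cut = text.rfind(" ", 0, max_chars)
        if cut <= 0:
            cut = max_chars  # no space found: hard cut
        chunks.append(text[:cut].strip())
        text = text[cut:]
    if text.strip():
        chunks.append(text.strip())
    return chunks
```

Each chunk can then be submitted as its own document in the analysis request.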
+## Reference documentation and code samples
+
+As you use Text Analytics for health in your applications, see the following reference documentation and samples for Azure Cognitive Services for Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST API | [REST API documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) | |
+|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
+| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
+|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
+|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
## Responsible AI An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information: [!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]-
-## Next steps
-
-There are two ways to get started using the entity linking feature:
-* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure Cognitive Service for Language features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Entries in the `resources` array of the ARM template have the following properti
| `tags` | Collection of Azure tags associated with the container app. | array | | `type` | Always `Microsoft.Web/containerApps` ARM endpoint determines which API to forward to | string |
+> [!NOTE]
+> Azure Container Apps resources are in the process of migrating from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
+ In this example, you put your values in place of the placeholder tokens surrounded by `<>` brackets. ## properties
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
az extension add `
Now that the extension is installed, register the `Microsoft.Web` namespace.
+> [!NOTE]
+> Azure Container Apps resources are in the process of migrating from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
+ # [Bash](#tab/bash) ```azurecli
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
The following quotas exist per subscription for Azure Container Apps Preview.
| Feature | Quantity | |||
-| Environments | 2 |
+| Environments per region | 2 |
| Container apps per environment | 20 | | Replicas per container app | 25 | | Cores per replica | 2 |
cosmos-db How To Provision Throughput Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/how-to-provision-throughput-cassandra.md
Last updated 10/15/2020
+ms.devlang: csharp
cosmos-db Continuous Backup Restore Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md
description: Learn how to isolate and restrict the restore permissions for conti
Previously updated : 07/29/2021 Last updated : 02/16/2022 -+ # Manage permissions to restore an Azure Cosmos DB account [!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
-Azure Cosmos DB allows you to isolate and restrict the restore permissions for continuous backup account to a specific role or a principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. These permissions can be applied at the subscription scope or more granularly at the source account scope as shown in the following image:
+Azure Cosmos DB allows you to isolate and restrict the restore permissions for a continuous backup account to a specific role or a principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. These permissions can be applied at the subscription scope as shown in the following image:
Scope is a set of resources that have access, to learn more on scopes, see the [Azure RBAC](../role-based-access-control/scope-overview.md) documentation. In Azure Cosmos DB, applicable scopes are the source subscription and database account for most of the use cases. The principal performing the restore actions should have write permissions to the destination resource group.
To perform a restore, a user or a principal need the permission to restore (that
||| |Subscription | /subscriptions/00000000-0000-0000-0000-000000000000 | |Resource group | /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Example-cosmosdb-rg |
-|CosmosDB restorable account resource | /subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/23e99a35-cd36-4df4-9614-f767a03b9995|
-The restorable account resource can be extracted from the output of the `az cosmosdb restorable-database-account list --account-name <accountname>` command in CLI or `Get-AzCosmosDBRestorableDatabaseAccount -DatabaseAccountName <accountname>` cmdlet in PowerShell. The name attribute in the output represents the `instanceID` of the restorable account.
-
-## Permissions
+## Permissions on the source account
The following permissions are required on the source account to perform the different activities pertaining to restore for continuous backup mode accounts: > [!NOTE]
-> Permission can be assigned to restorable database account at account scope or subscription scope. Assigning permissions at resource group scope is not supported.
-
-|Permission |Impact |Minimum scope |Maximum scope |
-|||||
-|`Microsoft.Resources/deployments/validate/action`, `Microsoft.Resources/deployments/write` | These permissions are required for the ARM template deployment to create the restored account. See the sample permission [RestorableAction](#custom-restorable-action) below for how to set this role. | Not applicable | Not applicable |
-|`Microsoft.DocumentDB/databaseAccounts/write` | This permission is required to restore an account into a resource group | Resource group under which the restored account is created. | Subscription under which the restored account is created |
-|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action` </br> You can't choose resource group as the permission scope. |This permission is required on the source restorable database account scope to allow restore actions to be performed on it. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/\<guid-instanceid\>* | The subscription containing the restorable database account. |
-|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read` </br> You can't choose resource group as the permission scope. |This permission is required on the source restorable database account scope to list the database accounts that can be restored. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/\<guid-instanceid\>*| The subscription containing the restorable database account. |
-|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` </br> You can't choose resource group as the permission scope. | This permission is required on the source restorable account scope to allow reading of restorable resources such as list of databases and containers for a restorable account. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/\<guid-instanceid\>*| The subscription containing the restorable database account. |
+> Assigning permissions at resource group scope is not supported.
+
+|Permission |Impact |
+|||
+|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action` </br> You can't choose resource group as the permission scope. |This permission is required on the source restorable database account scope to allow restore actions to be performed on it. |
+|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read` </br> You can't choose resource group as the permission scope. |This permission is required on the source restorable database account scope to list the database accounts that can be restored. |
+|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` </br> You can't choose resource group as the permission scope. | This permission is required on the source restorable account scope to allow reading of restorable resources such as list of databases and containers for a restorable account. |
+## Permissions on the destination account
+
+The following permissions are required on the destination account to complete a restore for continuous backup mode accounts:
+
+
+|Permission |Impact |
+|||
+|`Microsoft.Resources/deployments/validate/action`, `Microsoft.Resources/deployments/write` | These permissions are required for the ARM template deployment to create the restored account. See the sample permission [RestorableAction](#custom-restorable-action) below for how to set this role. |
+|`Microsoft.DocumentDB/databaseAccounts/write` | This permission is required to restore an account into a resource group |
+
## Azure CLI role assignment scenarios to restore at different scopes
cosmos-db How To Provision Throughput Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/how-to-provision-throughput-gremlin.md
Last updated 10/15/2020
+ms.devlang: csharp
cosmos-db Tutorial Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/tutorial-query-graph.md
Previously updated : 11/08/2021 Last updated : 02/16/2022 ms.devlang: csharp
This article covers the following tasks:
## Prerequisites
-For these queries to work, you must have an Azure Cosmos DB account and have graph data in the container. Don't have any of those? Complete the [5-minute quickstart](create-graph-dotnet.md) or the [developer tutorial](tutorial-query-graph.md) to create an account and populate your database. You can run the following queries using the [Gremlin console](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console), or your favorite Gremlin driver.
+For these queries to work, you must have an Azure Cosmos DB account and have graph data in the container. Don't have any of those? Complete the [5-minute quickstart](create-graph-dotnet.md) to create an account and populate your database. You can run the following queries using the [Gremlin console](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console), or your favorite Gremlin driver.
## Count vertices in the graph
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
Title: High availability in Azure Cosmos DB description: This article describes how to build a highly available solution using Cosmos DB-+ Previously updated : 11/11/2021- Last updated : 02/17/2022+
# Achieve high availability with Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-To build a highly-available solution, you have to evaluate the reliability characteristics of all its components. Cosmos DB is designed to provide multiple features and configuration options to achieve high availability for all solutions' availability needs.
+To build a solution with high availability, you have to evaluate the reliability characteristics of all its components. Cosmos DB is designed to provide multiple features and configuration options to achieve the availability your solution needs.
-We will use the terms **RTO** (Recovery Time Objective), to indicate the time between the beginning of an outage impacting Cosmos DB and the recovery to full availability, and **RPO** (Recovery Point Objective), to indicate the time between the last write correctly restored and the time of the beginning of the outage affecting Cosmos DB.
+We'll use the terms **RTO** (Recovery Time Objective), to indicate the time between the beginning of an outage impacting Cosmos DB and the recovery to full availability, and **RPO** (Recovery Point Objective), to indicate the time between the last write correctly restored and the time of the beginning of the outage affecting Cosmos DB.
> [!NOTE] > Expected and maximum RPOs and RTOs depend on the kind of outage that Cosmos DB is experiencing. For instance, an outage of a single node will have different expected RTO and RPO than a whole region outage.
We will use the terms **RTO** (Recovery Time Objective), to indicate the time be
This article details the events that can affect Cosmos DB availability and the corresponding Cosmos DB configuration options to achieve the availability characteristics required by your solution. ## Replica maintenance
-Cosmos DB is a fully-managed multi-tenant service that manages all details of individual compute nodes transparently. Users do not have to worry about any kind of patching and planned maintenance. Using redundancy and with no user involvement, Cosmos DB guarantees SLAs for availability and P99 latency through all automatic maintenance operations performed by the system.
+Cosmos DB is a managed multi-tenant service that transparently handles all details of individual compute nodes. Users don't have to worry about any kind of patching and planned maintenance. Cosmos DB guarantees SLAs for availability and P99 latency through all automatic maintenance operations performed by the system.
Refer to the [SLAs section](#slas) for the guaranteed availability SLAs. ## Replica outages Replica outages refer to outages of individual nodes in a Cosmos DB cluster deployed in an Azure region.
-Cosmos DB automatically mitigates replica outages by guaranteeing at least two replicas of your data at all times in each Azure region where your account is deployed.
-This results in RTO = 0 and and RPO = 0, for individual node outages, with no application changes or configurations required.
+Cosmos DB automatically mitigates replica outages by guaranteeing at least three replicas of your data in each Azure region for your account within a four-replica quorum.
+This results in RTO = 0 and RPO = 0, for individual node outages, with no application changes or configurations required.
-In many Azure regions, it is possible to distribute your Cosmos DB cluster across **availability zones**, which results increased SLAs, as availability zones are physically separate and provide distinct power source, network, and cooling. See [Availability Zones](/azure/architecture/reliability/architect).
-When using this option, Cosmos DB provides RTO = 0 and and RPO = 0 even in case of outages of a whole availability zone.
+In many Azure regions, it's possible to distribute your Cosmos DB cluster across **availability zones**, which results in increased SLAs, as availability zones are physically separate and provide a distinct power source, network, and cooling. See [Availability Zones](/azure/architecture/reliability/architect).
+When a Cosmos DB account is deployed using availability zones, Cosmos DB provides RTO = 0 and RPO = 0 even in a zone outage.
-When deploying in a single Azure region, with no extra user input, Cosmos DB is resilient to node outages. Enabling redundancy across availability zones makes Cosmos DB resilient to entire availability zone outages at the cost of increased charges. Both SLAs and price are reported in the [SLAs section](#slas).
+When users deploy in a single Azure region, with no extra user input, Cosmos DB is resilient to node outages. Enabling redundancy across availability zones makes Cosmos DB resilient to zone outages at the cost of increased charges. Both SLAs and price are reported in the [SLAs section](#slas).
-Zone redundancy can only be configured when adding a new region to an Azure Cosmos account. For existing regions, zone redundancy can be enabled by removing the region then adding it back with the zone redundancy enabled. For a single region account, this requires adding one additional region to temporarily failover to, then removing and adding the desired region with zone redundancy enabled.
+Zone redundancy can only be configured when adding a new region to an Azure Cosmos account. For existing regions, zone redundancy can be enabled by removing the region then adding it back with the zone redundancy enabled. For a single region account, this requires adding a region to temporarily fail over to, then removing and adding the desired region with zone redundancy enabled.
-By default, a Cosmos DB account does not use multiple availability zones. You can enable deployment across multiple availability zones in the following ways:
+By default, a Cosmos DB account doesn't use multiple availability zones. You can enable deployment across multiple availability zones in the following ways:
* [Azure portal](how-to-manage-database-account.md#addremove-regions-from-your-database-account)
Region outages refer to outages that affect all Cosmos DB nodes in an Azure regi
In the rare cases of region outages, Cosmos DB can be configured to support various outcomes of durability and availability. ### Durability
-In case of Cosmos DB accounts that use a single region, most of the times no data loss occurs and data access is restored after Cosmos DB services recovers in the affected region. Data loss may occur only in case of unrecoverable disasters in the Cosmos DB region.
+When a Cosmos DB account is deployed in a single region, generally no data loss occurs, and data access is restored after the Cosmos DB service recovers in the affected region. Data loss may occur only with an unrecoverable disaster in the Cosmos DB region.
To protect against complete data loss that may result from catastrophic disasters in a region, Azure Cosmos DB provides 2 different backup modes: - [Continuous backups](./continuous-backup-restore-introduction.md) ensure the backup is taken in each region every 100 seconds and provide the ability to restore your data to any desired point in time with second granularity. In each region, the backup is dependent on the data committed in that region. - [Periodic backups](./configure-periodic-backup-restore.md) take full backups of all partitions from all containers under your account, with no synchronization across partitions. The minimum backup interval is 1 hour.
-In case of Cosmos DB accounts in multiple regions, data durability depends on the consistency level configured on the account. The following table details, for all consistency levels, the RPO of Cosmos DB account deployed in at least 2 regions.
+When a Cosmos DB account is deployed in multiple regions, data durability depends on the consistency level configured on the account. The following table details, for all consistency levels, the RPO of a Cosmos DB account deployed in at least two regions.
|**Consistency level**|**RPO in case of region outage**| |||
For multi-region accounts, the minimum value of *K* and *T* is 100,000 write ope
Refer to [Consistency levels](./consistency-levels.md) for more information on the differences between consistency levels. ### Availability
-If your solution requires continuous availability in case of region outages, Cosmos DB can be configured to replicate your data across multiple regions and to transparently failover to available regions when required.
+If your solution requires continuous availability during region outages, Cosmos DB can be configured to replicate your data across multiple regions and to transparently fail over to available regions when required.
Single-region accounts may lose availability following a regional outage. To ensure high availability at all times it's recommended to set up your Azure Cosmos DB account with **a single write region and at least a second (read) region** and enable **Service-Managed failover**.
-Service-managed failover allows Cosmos DB to failover the write region of multi-region account, in order to preserve availability at the cost of data loss as per [durability section](#durability). Regional failovers are detected and handled in the Azure Cosmos DB client. They don't require any changes from the application.
+Service-managed failover allows Cosmos DB to fail over the write region of a multi-region account in order to preserve availability, at the cost of data loss as described in the [durability section](#durability). Regional failovers are detected and handled in the Azure Cosmos DB client. They don't require any changes from the application.
Refer to [How to manage an Azure Cosmos DB account](./how-to-manage-database-account.md) for the instructions on how to enable multiple read regions and service-managed failover. > [!IMPORTANT] > It is strongly recommended that you configure the Azure Cosmos accounts used for production workloads to **enable automatic failover**. This enables Cosmos DB to failover the account databases to available regions automatically. In the absence of this configuration, the account will experience loss of write availability for all the duration of the write region outage, as manual failover will not succeed due to lack of region connectivity. ### Multiple write regions
-Azure Cosmos DB can be configured to accept writes in multiple regions. This is useful to reduce write latency in geographically distributed applications. When using multiple write regions, strong consistency is not supported and write conflicts may arise. Refer to [Conflict types and resolution policies when using multiple write regions](./conflict-resolution-policies.md) for more information on how resolve conflicts in multiple write region configurations.
+Azure Cosmos DB can be configured to accept writes in multiple regions. This is useful to reduce write latency in geographically distributed applications. When a Cosmos DB account is configured for multiple write regions, strong consistency isn't supported and write conflicts may arise. Refer to [Conflict types and resolution policies when using multiple write regions](./conflict-resolution-policies.md) for more information on how to resolve conflicts in multiple write region configurations.
-Given the internal Azure Cosmos DB architecture, using multiple write regions does not guarantee write availability during a region outage. The best configuration to achieve high availability in case of region outage is single write region with service-managed failover.
+Given the internal Azure Cosmos DB architecture, using multiple write regions doesn't guarantee write availability during a region outage. The best configuration to achieve high availability during a region outage is single write region with service-managed failover.
#### Conflict-resolution region
-When a Cosmos DB account is configured with multi-region writes, one of the region acts as an arbiter in case of write conflicts. When such conflicts happen, they're routed to this region for consistent resolution.
+When a Cosmos DB account is configured with multi-region writes, one of the regions will act as an arbiter in case of write conflicts. When such conflicts happen, they're routed to this region for consistent resolution.
### What to expect during a region outage

Clients of single-region accounts will experience loss of read and write availability until service is restored.
Multi-region accounts will experience different behaviors depending on the follo
| Configuration | Outage | Availability impact | Durability impact | What to do |
| -- | -- | -- | -- | -- |
| Single write region | Read region outage | All clients will automatically redirect reads to other regions. No read or write availability loss for all configurations, except two-region configurations with strong consistency, which lose write availability until the service is restored or, if **service-managed failover** is enabled, the region is marked as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
-| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-manages failover**, clients will experience write availability loss, until write availability is restored automatically when the outage ends. <p/> **With service-managed failover** clients will experience write availability loss until the services manages a failover to a new write region selected according to your preferences. | If strong consistency level is not selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-managed failover**, clients will experience write availability loss, until write availability is restored automatically when the outage ends. <p/> **With service-managed failover**, clients will experience write availability loss until the service manages a failover to a new write region selected according to your preferences. | If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
| Multiple write regions | Any regional outage | Possibility of temporary write availability loss, analogous to a single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for SQL API accounts, and Last Write Wins for accounts using other APIs. |

### Additional information on read region outages
Multi-region accounts will experience different behaviors depending on the follo
* Subsequent reads are redirected to the recovered region without requiring any changes to your application code. During both failover and rejoining of a previously failed region, read consistency guarantees continue to be honored by Azure Cosmos DB.
-* Even in a rare and unfortunate event when the Azure region is permanently irrecoverable, there is no data loss if your multi-region Azure Cosmos account is configured with *Strong* consistency. In the rare event of a permanently irrecoverable write region, a multi-region Azure Cosmos account has the durability characteristics specified in the [Durability](#durability) section.
+* Even in a rare and unfortunate event when the Azure region is permanently irrecoverable, there's no data loss if your multi-region Azure Cosmos account is configured with *Strong* consistency. In the rare event of a permanently irrecoverable write region, a multi-region Azure Cosmos account has the durability characteristics specified in the [Durability](#durability) section.
### Additional information on write region outages

* During a write region outage, the Azure Cosmos account will automatically promote a secondary region to be the new primary write region when **automatic (service-managed) failover** is configured on the Azure Cosmos account. The failover will occur to another region in the order of region priority you've specified.
-* Note that manual failover should not be triggered and will not succeed in presence of an outage of the source or destination region. This is because of a consistency check required by the failover procedure which requires connectivity between the regions.
+* Note that manual failover shouldn't be triggered and will not succeed in presence of an outage of the source or destination region. This is because of a consistency check required by the failover procedure which requires connectivity between the regions.
-* When the previously impacted region is back online, any write data that was not replicated when the region failed, is made available through the [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). Applications can read the conflicts feed, resolve the conflicts based on the application-specific logic, and write the updated data back to the Azure Cosmos container as appropriate.
+* When the previously impacted region is back online, any write data that wasn't replicated when the region failed, is made available through the [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). Applications can read the conflicts feed, resolve the conflicts based on the application-specific logic, and write the updated data back to the Azure Cosmos container as appropriate.
* Once the previously impacted write region recovers, it becomes automatically available as a read region. You can switch back to the recovered region as the write region. You can switch the regions by using [PowerShell, Azure CLI or Azure portal](how-to-manage-database-account.md#manual-failover). There is **no data or availability loss** before, during or after you switch the write region and your application continues to be highly available.
The following table summarizes the high availability capability of various accou
* Review the expected [behavior of the Azure Cosmos SDKs](troubleshoot-sdk-availability.md) during these events and which are the configurations that affect it.
-* To ensure high write and read availability, configure your Azure Cosmos account to span at least two regions and three, if using strong consistency. Remember that the best configuration to achieve high availability in case of region outage is single write region with service-managed failover. To learn more, see how to [configure your Azure Cosmos account with multiple write-regions](tutorial-global-distribution-sql-api.md).
+* To ensure high write and read availability, configure your Azure Cosmos account to span at least two regions, or three if using strong consistency. Remember that the best configuration to achieve high availability for a region outage is a single write region with service-managed failover. To learn more, see how to [configure your Azure Cosmos account with multiple write-regions](tutorial-global-distribution-sql-api.md).
-* For multi-region Azure Cosmos accounts that are configured with a single-write region, [enable service-managed failover by using Azure CLI or Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable automatic failover, whenever there is a regional disaster, Cosmos DB will failover your account without any user inputs.
+* For multi-region Azure Cosmos accounts that are configured with a single-write region, [enable service-managed failover by using Azure CLI or Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable automatic failover, whenever there's a regional disaster, Cosmos DB will fail over your account without any user inputs.
* Even if your Azure Cosmos account is highly available, your application may not be correctly designed to remain highly available. To test the end-to-end high availability of your application, as a part of your application testing or disaster recovery (DR) drills, temporarily disable automatic failover for the account, invoke the [manual failover by using PowerShell, Azure CLI or Azure portal](how-to-manage-database-account.md#manual-failover), then monitor your application's failover. Once complete, you can fail back over to the primary region and restore automatic failover for the account.

> [!IMPORTANT]
> Do not invoke manual failover during a Cosmos DB outage on either the source or destination regions, as it requires region connectivity to maintain data consistency and it will not succeed.
-* Within a globally distributed database environment, there is a direct relationship between the consistency level and data durability in the presence of a region-wide outage. As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after a disruptive event. The time required for an application to fully recover is known as recovery time objective (RTO). You also need to understand the maximum period of recent data updates the application can tolerate losing when recovering after a disruptive event. The time period of updates that you might afford to lose is known as recovery point objective (RPO). To see the RPO and RTO for Azure Cosmos DB, see [Consistency levels and data durability](./consistency-levels.md#rto)
+* Within a globally distributed database environment, there's a direct relationship between the consistency level and data durability in the presence of a region-wide outage. As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after a disruptive event. The time required for an application to fully recover is known as the recovery time objective (RTO). You also need to understand the maximum period of recent data updates the application can tolerate losing when recovering after a disruptive event. The time period of updates that you can afford to lose is known as the recovery point objective (RPO). To see the RPO and RTO for Azure Cosmos DB, see [Consistency levels and data durability](./consistency-levels.md#rto).
## What to expect during a Cosmos DB region outage
Multi-region accounts will experience different behaviors depending on the follo
| Write regions | Automatic failover | What to expect | What to do |
| -- | -- | -- | -- |
-| Single write region | Not enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level is not selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
-| Single write region | Enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss until Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level is not selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
-| Multiple write regions | Not applicable | No read or write availability loss. <p/> Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15mins. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for SQL API accounts, and Last Write Wins for accounts using other APIs. |
+| Single write region | Not enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remain.<p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
+| Single write region | Enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remain.<p/> In case of an outage in the write region, clients will experience write availability loss until Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Multiple write regions | Not applicable | Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15mins. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for SQL API accounts, and Last Write Wins for accounts using other APIs. |
## Next steps
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
description: Learn how to configure role-based access control with Azure Active
Previously updated : 07/21/2021 Last updated : 02/16/2022

+# Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account
The Azure Cosmos DB data plane RBAC is built on concepts that are commonly found
- An Azure Cosmos DB database,
- An Azure Cosmos DB container.
- :::image type="content" source="./media/how-to-setup-rbac/concepts.png" alt-text="RBAC concepts":::
+ :::image type="content" source="./media/how-to-setup-rbac/concepts.svg" alt-text="RBAC concepts":::
## <a id="permission-model"></a> Permission model
The table below lists all the actions exposed by the permission model.
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/create` | Create a new item. |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read` | Read an individual item by its ID and partition key (point-read). |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/replace` | Replace an existing item. |
-| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/upsert` | "Upsert" an item, which means create it if it doesn't exist, or replace it if it exists. |
+| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/upsert` | "Upsert" an item, which means to create or insert an item if it doesn't already exist, or to update or replace an item if it exists. |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/delete` | Delete an item. |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/executeQuery` | Execute a [SQL query](sql-query-getting-started.md). |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/readChangeFeed` | Read from the container's [change feed](read-change-feed.md). |
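The "create or replace" semantics of the upsert action can be pictured with a toy key-value model. This is an illustration of the concept only, not the Cosmos DB SDK API:

```python
def upsert(store: dict, item: dict) -> str:
    """Toy model of upsert semantics: create if absent, replace if present."""
    existed = item["id"] in store
    store[item["id"]] = item  # the write happens in either case
    return "replaced" if existed else "created"

items = {}
print(upsert(items, {"id": "1", "name": "a"}))  # → created
print(upsert(items, {"id": "1", "name": "b"}))  # → replaced
```

Because the same call covers both cases, a client doesn't need a prior read to decide between create and replace.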
The actual metadata requests allowed by the `Microsoft.DocumentDB/databaseAccoun
## Built-in role definitions
-Azure Cosmos DB exposes 2 built-in role definitions:
+Azure Cosmos DB exposes two built-in role definitions:
| ID | Name | Included actions |
| -- | -- | -- |
See [this page](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/sql-res
## Initialize the SDK with Azure AD
-To use the Azure Cosmos DB RBAC in your application, you have to update the way you initialize the Azure Cosmos DB SDK. Instead of passing your account's primary key, you have to pass an instance of a `TokenCredential` class. This instance provides the Azure Cosmos DB SDK with the context required to fetch an AAD token on behalf of the identity you wish to use.
+To use the Azure Cosmos DB RBAC in your application, you have to update the way you initialize the Azure Cosmos DB SDK. Instead of passing your account's primary key, you have to pass an instance of a `TokenCredential` class. This instance provides the Azure Cosmos DB SDK with the context required to fetch an Azure AD (AAD) token on behalf of the identity you wish to use.
-The way you create a `TokenCredential` instance is beyond the scope of this article. There are many ways to create such an instance depending on the type of AAD identity you want to use (user principal, service principal, group etc.). Most importantly, your `TokenCredential` instance must resolve to the identity (principal ID) that you've assigned your roles to. You can find examples of creating a `TokenCredential` class:
+The way you create a `TokenCredential` instance is beyond the scope of this article. There are many ways to create such an instance depending on the type of Azure AD identity you want to use (user principal, service principal, group etc.). Most importantly, your `TokenCredential` instance must resolve to the identity (principal ID) that you've assigned your roles to. You can find examples of creating a `TokenCredential` class:
- [In .NET](/dotnet/api/overview/azure/identity-readme#credential-classes)
- [In Java](/java/api/overview/azure/identity-readme#credential-classes)
When you access the [Azure Cosmos DB Explorer](https://cosmos.azure.com/?feature
## Audit data requests
-When using the Azure Cosmos DB RBAC, [diagnostic logs](cosmosdb-monitor-resource-logs.md) get augmented with identity and authorization information for each data operation. This lets you perform detailed auditing and retrieve the AAD identity used for every data request sent to your Azure Cosmos DB account.
+When using the Azure Cosmos DB RBAC, [diagnostic logs](cosmosdb-monitor-resource-logs.md) get augmented with identity and authorization information for each data operation. This lets you perform detailed auditing and retrieve the Azure AD identity used for every data request sent to your Azure Cosmos DB account.
This additional information flows in the **DataPlaneRequests** log category and consists of two extra columns:

-- `aadPrincipalId_g` shows the principal ID of the AAD identity that was used to authenticate the request.
+- `aadPrincipalId_g` shows the principal ID of the Azure AD identity that was used to authenticate the request.
- `aadAppliedRoleAssignmentId_g` shows the [role assignment](#role-assignments) that was honored when authorizing the request.

## <a id="disable-local-auth"></a> Enforcing RBAC as the only authentication method

In situations where you want to force clients to connect to Azure Cosmos DB through RBAC exclusively, you have the option to disable the account's primary/secondary keys. When doing so, any incoming request using either a primary/secondary key or a resource token will be actively rejected.
-### Using Azure Resource Manager templates
+### Use Azure Resource Manager templates
When creating or updating your Azure Cosmos DB account using Azure Resource Manager templates, set the `disableLocalAuth` property to `true`:
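For reference, a minimal resource fragment might look like the following. Only `disableLocalAuth` is the setting under discussion; the `apiVersion`, parameter names, and surrounding properties are illustrative:

```json
{
  "type": "Microsoft.DocumentDB/databaseAccounts",
  "apiVersion": "2021-06-15",
  "name": "[parameters('accountName')]",
  "location": "[parameters('location')]",
  "properties": {
    "databaseAccountOfferType": "Standard",
    "disableLocalAuth": true,
    "locations": [
      {
        "locationName": "[parameters('location')]",
        "failoverPriority": 0
      }
    ]
  }
}
```

After the deployment completes, requests authenticated with the account's primary/secondary keys or resource tokens are rejected; only Azure AD identities with appropriate role assignments can connect.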
cosmos-db Create Mongodb Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-nodejs.md
+ms.devlang: javascript
Last updated 08/26/2021
cosmos-db Mongodb Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-time-to-live.md
ms.devlang: csharp, java, javascript Previously updated : 12/26/2018 Last updated : 02/16/2022

# Expire data with Azure Cosmos DB's API for MongoDB
Time-to-live (TTL) functionality allows the database to automatically expire dat
## TTL indexes

To enable TTL universally on a collection, a ["TTL index" (time-to-live index)](mongodb-indexing.md) needs to be created. The TTL index is an index on the `_ts` field with an "expireAfterSeconds" value.
-JavaScript example:
+MongoShell example:
-```js
+```
globaldb:PRIMARY> db.coll.createIndex({"_ts":1}, {expireAfterSeconds: 10})
{
    "_t" : "CreateIndexesResponse",
cosmos-db Tutorial Develop Nodejs Part 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-4.md
description: Part 4 of the tutorial series on creating a MongoDB app with Angula
+ms.devlang: javascript
Last updated 08/26/2021
cosmos-db Monitor Normalized Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-normalized-request-units.md
Previously updated : 09/16/2021 Last updated : 02/17/2022
Last updated 09/16/2021
Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default; this feature does not require you to enable or configure anything explicitly. The **Normalized RU Consumption** metric is used to see how saturated the partition key ranges are with respect to traffic. Azure Cosmos DB distributes the throughput equally across all the partition key ranges. This metric provides a per-second view of the maximum throughput utilization for a partition key range. Use this metric to calculate the RU/s usage across partition key ranges for a given container. If you see a high percentage of request unit utilization across all partition key ranges in Azure Monitor, you should increase the throughput to meet the needs of your workload.
-Example - Normalized utilization is defined as the max of the RU/s utilization across all partition key ranges. For example, suppose your max throughput is 20,000 RU/s and you have two partition key ranges, P_1 and P_2, each capable of scaling to 10,000 RU/s. In a given second, if P_1 has used 6000 RUs, and P_2 8000 RUs, the normalized utilization is MAX(6000 RU / 10,000 RU, 8000 RU / 10,000 RU) = 0.8.
+Example - Normalized utilization is defined as the max of the RU/s utilization across all partition key ranges. For example, suppose your max throughput is 24,000 RU/s and you have three partition key ranges, P_1, P_2, and P_3, each capable of scaling to 8,000 RU/s. In a given second, if P_1 has used 6000 RUs, P_2 7000 RUs, and P_3 5000 RUs, the normalized utilization is MAX(6000 RU / 8000 RU, 7000 RU / 8000 RU, 5000 RU / 8000 RU) = 0.875.
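The arithmetic in the example above can be sketched as a small helper (hypothetical, not part of any SDK):

```python
def normalized_utilization(used_rus_per_range, per_range_max_rus):
    """Max RU/s utilization across all partition key ranges, as a fraction."""
    return max(used / per_range_max_rus for used in used_rus_per_range)

# Three partition key ranges, each capable of 8,000 RU/s (24,000 RU/s total):
print(normalized_utilization([6000, 7000, 5000], 8000))  # → 0.875
```

Note that the metric reflects the hottest partition key range, so a skewed workload can show high normalized utilization even when total RU consumption is well below the provisioned maximum.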
## What to expect and do when normalized RU/s is higher
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Transactions (in stored procedures or triggers) are allowed only against items i
## Replica sets
-Each physical partition consists of a set of replicas, also referred to as a [*replica set*](global-dist-under-the-hood.md). Each replica set hosts an instance of the database engine. A replica set makes the data stored within the physical partition durable, highly available, and consistent. Each replica that makes up the physical partition inherits the partition's storage quota. All replicas of a physical partition collectively support the throughput that's allocated to the physical partition. Azure Cosmos DB automatically manages replica sets.
+Each physical partition consists of a set of replicas, also referred to as a [*replica set*](global-dist-under-the-hood.md). Each replica hosts an instance of the database engine. A replica set makes the data stored within the physical partition durable, highly available, and consistent. Each replica that makes up the physical partition inherits the partition's storage quota. All replicas of a physical partition collectively support the throughput that's allocated to the physical partition. Azure Cosmos DB automatically manages replica sets.
Typically, smaller containers only require a single physical partition, but they will still have at least four replicas.
cosmos-db Create Sql Api Dotnet V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-dotnet-v4.md
+ms.devlang: csharp
Last updated 08/26/2021
cosmos-db Create Sql Api Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-dotnet.md
+ms.devlang: csharp
Last updated 08/26/2021
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-manage-consistency.md
Previously updated : 07/02/2021 Last updated : 02/16/2022 ms.devlang: csharp, java, javascript
client = cosmos_client.CosmosClient(self.account_endpoint, {
## Utilize session tokens
-One of the consistency levels in Azure Cosmos DB is *Session* consistency. This is the default level applied to Cosmos accounts by default. When working with *Session* consistency, the client will use a session token internally with each read/query request to ensure that the set consistency level is maintained.
+One of the consistency levels in Azure Cosmos DB is *Session* consistency, which is the level applied to Cosmos accounts by default. When working with Session consistency, each new write request to Azure Cosmos DB is assigned a new SessionToken. The CosmosClient will use this token internally with each read/query request to ensure that the set consistency level is maintained.
+
+In some scenarios you need to manage this session yourself. Consider a web application with multiple nodes, where each node has its own instance of CosmosClient. If you want these nodes to participate in the same session (to be able to read your own writes consistently across web tiers), you have to send the SessionToken from FeedResponse<T> of the write action to the end user, using a cookie or some other mechanism, and have that token flow back to the web tier and ultimately to the CosmosClient for subsequent reads. If you're using a round-robin load balancer that doesn't maintain session affinity between requests, such as Azure Load Balancer, the read could potentially land on a different node than the write request, where the session was created.
+
+If you don't flow the Azure Cosmos DB SessionToken across as described above, you could end up with inconsistent read results for a period of time.
To manage session tokens manually, get the session token from the response and set it per request. If you don't need to manage session tokens manually, you don't need to use these samples. The SDK keeps track of session tokens automatically. If you don't set the session token manually, by default, the SDK uses the most recent session token.
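The read-your-writes guarantee that flowing a session token provides can be illustrated with a toy in-process model. This is a conceptual sketch only, not the Azure Cosmos DB SDK; the class names and the LSN-as-token scheme are invented for illustration:

```python
# Toy model of session-token flow (NOT the Azure Cosmos DB SDK).
# It shows why a read routed to a lagging node needs the writer's
# session token to guarantee read-your-writes.

class Replica:
    """A replica that applies writes with some replication lag."""
    def __init__(self):
        self.lsn = 0          # last applied logical sequence number
        self.items = {}

    def apply(self, lsn, key, value):
        self.items[key] = value
        self.lsn = lsn

class SessionStore:
    """Writes return a session token (here: simply the LSN of the write)."""
    def __init__(self, replica_count=2):
        self.replicas = [Replica() for _ in range(replica_count)]
        self.next_lsn = 0

    def write(self, key, value):
        self.next_lsn += 1
        # Only replica 0 applies immediately; others lag behind.
        self.replicas[0].apply(self.next_lsn, key, value)
        return self.next_lsn  # the "session token"

    def read(self, key, replica_index, session_token):
        replica = self.replicas[replica_index]
        if replica.lsn < session_token:
            # A real service would wait or retry another replica; here we
            # simulate the replica catching up before serving the read.
            up_to_date = self.replicas[0]
            replica.items = dict(up_to_date.items)
            replica.lsn = up_to_date.lsn
        return replica.items.get(key)

store = SessionStore()
token = store.write("order-1", "shipped")
# Flowing the token to the read guarantees we see our own write,
# even when the read lands on a lagging replica.
print(store.read("order-1", replica_index=1, session_token=token))
```

Without the token, the read on replica 1 would have returned nothing, because that replica had not yet applied the write.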
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
description: Learn how to diagnose and fix slow requests when using Azure Cosmos
Previously updated : 02/02/2022 Last updated : 02/17/2022
Consider the following when developing your application:
* Use Direct + TCP connectivity mode.
* Avoid high CPU. Make sure to look at max CPU and not average, which is the default for most logging systems. Anything above roughly 40% can increase the latency.
+## Metadata operations
+
+Don't verify that a database or container exists by calling `Create...IfNotExistsAsync` or `Read...Async` in the hot path or before doing an item operation. The validation should only be done on application startup, and only if you expect the database or container to be deleted (otherwise it's not needed). These metadata operations generate extra end-to-end latency, have no SLA, and have their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429) that don't scale like data operations.
## <a name="capture-diagnostics"></a>Capture the diagnostics
cosmos-db Tutorial Springboot Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-springboot-azure-kubernetes-service.md
description: This tutorial demonstrates how to deploy a Spring Boot application
+ms.devlang: java
Last updated 10/01/2021
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-dotnet.md
description: This quickstart shows how to access the Azure Cosmos DB Table API f
+ms.devlang: csharp
Last updated 09/26/2021
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
Title: Quickstart - Explore Azure costs with cost analysis
description: This quickstart helps you use cost analysis to explore and analyze your Azure organizational costs. Previously updated : 12/03/2021 Last updated : 02/17/2022
Here's a view of Azure service costs for the current month, grouped by Service n
![Grouped daily accumulated view showing example Azure service costs for last month](./media/quick-acm-cost-analysis/grouped-daily-accum-view.png)
The following image shows resource group names. You can group by tag to view total costs per tag or use the **Cost by resource** view to see all tags for a particular resource.
![Full data for current view showing resource group names](./media/quick-acm-cost-analysis/full-data-set.png)
By default, cost analysis shows all usage and purchase costs as they're accrued
![Change between actual and amortized cost to see reservation purchases spread across the term and allocated to the resources that used the reservation](./media/quick-acm-cost-analysis/metric-picker.png)
-Amortized cost breaks down reservation purchases into daily chunks and spreads them over the duration of the reservation term. For example, instead of seeing a $365 purchase on January 1, you'll see a $1.00 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated by using the specific resources that used the reservation. For example, if that $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of `UnusedReservation`. Unused reservation costs can be seen only when viewing amortized cost.
+Amortized cost breaks down reservation purchases into daily chunks and spreads them over the duration of the reservation term. Most reservation terms are one or three years. Let's look at a one-year reservation example. Instead of seeing a $365 purchase on January 1, you'll see a $1.00 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated by using the specific resources that used the reservation. For example, if that $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of `UnusedReservation`. Unused reservation costs can be seen only when viewing amortized cost.
+
+If you buy a one-year reservation on May 26 with an upfront payment, the amortized cost is divided by 365 (assuming it's not a leap year) and spread from May 26 through May 25 of the next year. If you pay monthly, the monthly fee is divided by the number of days in that month and spread evenly across May 26 through June 25, with the next month's fee spread across June 26 through July 25.
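The amortization arithmetic above can be sketched as a short calculation (illustrative only; real invoices also handle leap years, partial days, and exact purchase timestamps):

```python
from calendar import monthrange

def daily_amortized_upfront(total_cost, term_days=365):
    """Spread an upfront reservation purchase evenly across the term."""
    return round(total_cost / term_days, 2)

def daily_amortized_monthly(monthly_fee, year, month):
    """Spread one monthly payment across the days of that billing month."""
    days_in_month = monthrange(year, month)[1]
    return round(monthly_fee / days_in_month, 2)

# A $365 one-year upfront purchase shows up as $1.00 per day.
print(daily_amortized_upfront(365))          # 1.0

# A $62 monthly fee for the billing month starting in May (31 days)
# shows up as $2.00 per day across May 26 through June 25.
print(daily_amortized_monthly(62, 2022, 5))  # 2.0
```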
Because of the change in how costs are represented, it's important to note that actual cost and amortized cost views will show different total numbers. In general, the total cost of months with a reservation purchase will decrease when viewing amortized costs, and months following a reservation purchase will increase. Amortization is available only for reservation purchases and doesn't apply to Azure Marketplace purchases at this time.
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
tags: billing
Previously updated : 01/06/2022 Last updated : 02/17/2022
If you have a Microsoft Customer Agreement, you must be a billing profile Owner,
## Download your Azure invoices (.pdf)
-For most subscriptions, you can download your invoice from the Azure portal. If you have a Microsoft Customer Agreement, see Download invoices for a billing profile.
+For most subscriptions, you can download your invoice from the Azure portal. If you have a Microsoft Customer Agreement, see [Download invoices for a Microsoft Customer Agreement](#download-invoices-for-a-microsoft-customer-agreement).
### Download invoices for an individual subscription
Invoices are generated for each [billing profile](../understand/mca-overview.md#
5. Click on the download button at the end of the row.
6. In the download context menu, select **Invoice**.
-If you don't see an invoice for the last billing period, see **Additional information**. <!-- Fix this -->
+If you don't see an invoice for the last billing period, see the following section.
+
### <a name="noinvoice"></a> Why don't I see an invoice for the last billing period?
There could be several reasons that you don't see an invoice:
If you have a Microsoft Customer Agreement, you can opt in to get your invoice i
You can opt out of getting your invoice by email by following the steps above and clicking **Opt out**. All Owners, Contributors, Readers, and Invoice managers will be opted out of getting the invoice by email, too. If you are a Reader, you cannot change the email invoice preference.
+## Azure Government support for invoices
+
+Azure Government users use the same agreement types as other Azure users.
+
+Azure Government billing owners can opt in to receive invoices by email. However, they can't allow others to get invoices by email.
+
+To download your invoice, follow the steps above at [Download invoices for an individual subscription](#download-invoices-for-an-individual-subscription).
+
## Next steps
To learn more about your invoice and charges, see:
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
tags: billing
Previously updated : 05/17/2021 Last updated : 02/17/2022
You must have an account admin role on a subscription or a support plan to opt i
## Share subscription and support plan invoice
-You may want to share the invoice for your subscription and support plan every month with your accounting team or send them to one of your other email addresses.
+You may want to share the invoice for your subscription and support plan every month with your accounting team or send them to one of your other email addresses.
1. Follow the steps in [Get your subscription's and support plan's invoices in email](#get-mosp-subscription-invoice-in-email) and select **Configure recipients**. [![Screenshot that shows a user selecting configure recipients](./media/download-azure-invoice/invoice-article-step03.png)](./media/download-azure-invoice/invoice-article-step03-zoomed.png#lightbox)
You may want to share your invoice every month with your accounting team or send
[![Screenshot that shows additional recipients for the invoice email](./media/download-azure-invoice/mca-billing-profile-add-invoice-recipients.png)](./media/download-azure-invoice/mca-billing-profile-add-invoice-recipients-zoomed.png#lightbox) 1. Select **Save**.
+## Azure Government support for invoices
+
+Azure Government users use the same agreement types as other Azure users.
+
+Azure Government billing owners can opt in to receive invoices by email. However, they can't allow others to get invoices by email.
+
+To download your invoice, follow the steps above at [Download your MOSP Azure subscription invoice](#download-your-mosp-azure-subscription-invoice).
+ ## Why you might not see an invoice <a name="noinvoice"></a>
data-catalog Data Catalog Adopting Data Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-adopting-data-catalog.md
Title: Approach and process for adopting Azure Data Catalog description: This article presents an approach and process for organizations considering adopting Azure Data Catalog, including defining a vision, identifying key business use cases, and choosing a pilot project.--++ Previously updated : 08/01/2019 Last updated : 02/17/2022 # Approach and process for adopting Azure Data Catalog [!INCLUDE [Azure Purview redirect](../../includes/data-catalog-use-purview.md)]
-This article helps you get started adopting **Azure Data Catalog** in your organization. To successfully adopt **Azure Data Catalog**, you focus on three key items: define your vision, identify key business use cases within your organization, and choose a pilot project.
+This article helps you get started adopting **Azure Data Catalog** in your organization. To successfully adopt **Azure Data Catalog**, focus on three key items: define your vision, identify key business use cases within your organization, and choose a pilot project.
## Introducing the Azure Data Catalog
This article presents an approach to getting started using **Azure Data Catalog*
## Azure Data Catalog adoption plan
-An **Azure Data Catalog** adoption plan describes how the benefits of using the service are communicated to stakeholders and users, and what kind of training you provide to users. One key success driver to adopt Data Catalog is how effectively you communicate the value of the service to users and stakeholders. The primary audiences in an initial adoption plan are the users of the service. No matter how much buy-in you get from stakeholders, if the users, or customers, of your Data Catalog offering do not incorporate it into their usage, the adoption will not be successful. Therefore, this article assumes you have stakeholder buy-in, and focuses on creating a plan for user adoption of Data Catalog.
+An **Azure Data Catalog** adoption plan describes how the benefits of using the service are communicated to stakeholders and users, and what kind of training you provide to users. One key success driver to adopt Data Catalog is how effectively you communicate the value of the service to users and stakeholders. The primary audiences in an initial adoption plan are the users of the service. No matter how much buy-in you get from stakeholders, if the users, or customers, of your Data Catalog offering don't incorporate it into their usage, the adoption won't be successful. Therefore, this article assumes you have stakeholder buy-in, and focuses on creating a plan for user adoption of Data Catalog.
An effective adoption plan successfully engages people in what is possible with Data Catalog and gives them the information and guidance to achieve it. Users need to understand the value that Data Catalog provides to help them succeed in their jobs. When people see how Data Catalog can help them achieve more results with data, the value of adopting Data Catalog becomes clear. Change is hard, so an effective plan needs to take the challenges of change into account. An adoption plan helps you communicate what is critical for people to succeed and achieve their goals. A typical plan explains how Data Catalog is going to make users' lives easier, and includes the following parts:
-* **Vision Statement** - It helps you concisely discuss the adoption plan with users, and stakeholders. It's your elevator pitch.
+* **Vision Statement** - It helps you concisely discuss the adoption plan with users and stakeholders. It's your elevator pitch.
* **Pilot team and Influencers** - Learning from a pilot team and influencers help you refine how to introduce teams and users to Data Catalog. Influencers can peer coach fellow users. It also helps you identify blockers and drivers to adoption. * **Plan for Communications and Buzz** - It helps users to understand how Data Catalog can help them, and can foster organic adoption within teams, and ultimately the entire organization. * **Training Plan** - Comprehensive training generally leads to adoption success and favorable results.
Here are some tips to define an **Azure Data Catalog** adoption plan.
The first step to define an **Azure Data Catalog** adoption plan is to write an aspirational description of what you are trying to accomplish. It's best to keep the vision statement fairly broad, yet concise enough to define specific short-term, and long-term goals.
-Here are some tips to help you define you vision:
+Here are some tips to help you define your vision:
* **Identify the key deployment driver** - Think about the specific data source management needs from the business that can be addressed with Data Catalog. It helps you state the top advantages of using Data Catalog. For example, there may be common data sources that all new employees need to learn about and use, or important and complex data sources that only a few key people deeply understand. **Azure Data Catalog** can help make these data sources easy to discover and understand, so that these well-known pain points can be addressed directly and early in the adoption of the service. * **Be crisp and clear** - A clear understanding of the vision gets everyone on the same page about the value Data Catalog brings to the organization, and how the vision supports organizational goals. * **Inspire people to want to use Data Catalog** - Your vision, and communication plan should inspire folks to recognize that Data Catalog can benefit them to find and connect to data sources to achieve more with data. * **Specify goals and timeline** - It ensures your adoption plan has specific, achievable deliverables. A timeline keeps everyone focused, and allows for checkpoints to measure success.
-Here is an example vision statement for a Data Catalog adoption plan for the fictitious company called Adventure Works:
+Here's an example vision statement for a Data Catalog adoption plan for the fictitious company called Adventure Works:
**Azure Data Catalog** empowers the Adventure Works Finance team to collaborate on key data sources, so every team member can easily find and use the data they need and can share their knowledge with the team as a whole.
Once you have a crisp vision statement, you should identify a suitable pilot pro
To identify use cases that are relevant to Data Catalog, engage with experts from various business units to identify relevant use cases and business issues to solve. Review existing challenges people have identifying and understanding data assets. For example, do teams learn about data assets only after asking several people in the organization who has relevant data sources?
-It is best to choose use cases that represent low hanging fruit: cases that are important yet have a high likelihood of success if solved with Data Catalog.
+It's best to choose use cases that represent low hanging fruit: cases that are important yet have a high likelihood of success if solved with Data Catalog.
Here are some tips to identify use cases: * **Define the goals of the team** - How does the team achieve their goals? Don't focus on Data Catalog yet since you want to be objective at this stage. Remember it's about the business results, not about the technology. * **Define the business problem** - What are the issues faced by the team regarding finding and learning about data assets? For example, information about important data sources may be found in Excel workbooks in a network folder, and the team may spend much time locating the workbooks.
-* **Understand team culture related to change** - Many adoption challenges relate to resistance to change rather than the implementation of a new tool. How a team responds to change is important when identifying use cases since the existing process could be in place because "this is how we've always done it" or "if it ain't broke, why fix it?". Adopting any new tool or process is always easiest when the people affected understand the value to be realized from the change, and appreciate the importance of the problems to be solved.
-* **Keep focus related to data assets** - When discussing the business problems a team faces, you need to "cut through the weeds", and focus on what's relevant to leveraging enterprise data assets more effectively.
+* **Understand team culture related to change** - Many adoption challenges relate to resistance to change rather than the implementation of a new tool. How a team responds to change is important when identifying use cases since the existing process could be in place because "this is how we've always done it" or "if it isn't broken, why fix it?". Adopting any new tool or process is always easiest when the people affected understand the value to be realized from the change, and appreciate the importance of the problems to be solved.
+* **Keep focus related to data assets** - When discussing the business problems a team faces, you need to "cut through the weeds", and focus on what's relevant to using enterprise data assets more effectively.
Here are some example use cases related to Data Catalog:
Once you identify some use cases for Data Catalog, common scenarios should emerg
## Choose a Data Catalog pilot project
-A key success factor is to simplify, and start small. A well-defined pilot with a constrained scope helps keep the project moving forward without getting bogged down with a project that is too complex, or which has too many participants. But it is also important to include a mix of users, from early adopters to skeptics. Users who embrace the solution help you refine your future communication and buzz plan. Skeptics help you identify and address blocking issues. As skeptics become champions, you can use their feedback to identify success drivers.
+A key success factor is to simplify, and start small. A well-defined pilot with a constrained scope helps keep the project moving forward without getting bogged down with a project that is too complex, or which has too many participants. But it's also important to include a mix of users, from early adopters to skeptics. Users who embrace the solution help you refine your future communication and buzz plan. Skeptics help you identify and address blocking issues. As skeptics become champions, you can use their feedback to identify success drivers.
Your pilot plan should phase in business goals that you want to achieve with Data Catalog. As you learn from the initial pilot, you can expand your user base. An initial closed pilot is good to establish measurable success, but the ultimate goal is for organic or viral growth. With organic growth of Data Catalog, users are in control of their own data usage, and can influence and encourage others to adopt and contribute to the catalog.
Your first pilot project should have a few individuals who produce data and cons
**Data Consumers** are people with expertise on the use of the data to solve business problems. For example, Nancy is a business analyst who uses Adventure Works SQL Server data sources to analyze data.
-One of the business problems that **Azure Data Catalog** solves is to connect **Data Producers** to **Data Consumers**. It does so by serving as a central repository for information about enterprise data sources. Using Data Catalog, David registers Adventure Works and SQL Server data sources. Using crowdsourcing any user who discovers this data source can share her opinions on the data, in addition to using the data they have discovered. For example, Nancy discovers the data sources by searching the catalog, and shares her specialized knowledge about the data. Now, others in the organization benefit from shared knowledge by searching the data catalog.
+One of the business problems that **Azure Data Catalog** solves is to connect **Data Producers** to **Data Consumers**. It does so by serving as a central repository for information about enterprise data sources. David registers Adventure Works and SQL Server data sources in Data Catalog. Using crowdsourcing, any user who discovers a data source can share their opinions on the data, in addition to using the data they've discovered. For example, Nancy discovers the data sources by searching the catalog, and shares her specialized knowledge about the data. Now, others in the organization benefit from shared knowledge by searching the data catalog.
* To learn more about registering data sources, see [Register data sources](data-catalog-get-started.md). * To learn more about discovering data sources, see [Search data sources](data-catalog-get-started.md).
One of the business problems that **Azure Data Catalog** solves is to connect **
For most enterprise pilot projects, you should seed the catalog with high-value data sources so that business users can quickly see the value of Data Catalog. IT is a good place to start identifying common data sources that would be of interest to your pilot team. For supported data sources, such as SQL Server, we recommend using the **Azure Data Catalog** data source registration tool. With the data source registration tool, you can register a wide range of data sources including SQL Server and Oracle databases, and SQL Server Reporting Services reports. For a complete list of current data sources, see [Azure Data Catalog supported data sources](data-catalog-dsr.md).
-Once you have identified and registered key data sources, it is possible to also import data source descriptions stored in other locations. The Data Catalog API allows developers to load descriptions and annotations from another location, such as the Excel Workbook that David created and maintains.
+Once you have identified and registered key data sources, it's possible to also import data source descriptions stored in other locations. The Data Catalog API allows developers to load descriptions and annotations from another location, such as the Excel Workbook that David created and maintains.
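As a sketch of what such an import might look like, the snippet below turns workbook rows into Data Catalog annotation payloads. The payload fields and the endpoint shown in the comment follow the public Azure Data Catalog REST API as an assumption; verify them against the current API reference before use:

```python
# Illustrative sketch only: the payload shape and endpoint are assumptions
# modeled on the Azure Data Catalog REST API; check the API reference.
import json

def build_annotation(description_row):
    """Turn one row from David's workbook into an annotation payload."""
    return {
        "properties": {
            "key": "descriptions",
            "fromSourceSystem": False,
            "description": description_row["description"],
        }
    }

rows = [
    {"asset": "AdventureWorks.Sales.Orders",
     "description": "Orders placed by customers."},
]

payloads = [build_annotation(r) for r in rows]
print(json.dumps(payloads[0], indent=2))

# To upload, you would POST each payload to the asset's annotation URL, e.g.:
#   POST https://api.azuredatacatalog.com/catalogs/{catalog}/views/tables/{id}/descriptions?api-version=2016-03-30
# with an Azure AD bearer token (omitted here).
```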
The next section describes an example project from the Adventure Works company.
After the pilot project is in place, it's time to execute your Data Catalog adop
### Execute
-At this point you have identified use cases for Data Catalog, and you have identified your first project. In addition, you have registered the key Adventure Works data sources and have added information from the existing Excel workbook using the tool that IT built. Now it's time to work with the pilot team to start the Data Catalog adoption process.
+At this point you have identified use cases for Data Catalog, and you've identified your first project. In addition, you've registered the key Adventure Works data sources and have added information from the existing Excel workbook using the tool that IT built. Now it's time to work with the pilot team to start the Data Catalog adoption process.
Here are some tips to get you started: * **Create excitement** - Business users get excited if they believe that **Azure Data Catalog** makes their lives easier. Try to make the conversation around the solution and the benefits it provides, not the technology. * **Facilitate change** - Start small and communicate the plan to business users. To be successful, it's crucial to involve users from the beginning so that they influence the outcome and develop a sense of ownership about the solution. * **Groom early adopters** - Early adopters are business users that are passionate about what they do, and excited to evangelize the benefits of **Azure Data Catalog** to their peers.
-* **Target training** - Business users do not need to know everything about Data Catalog, so target training to address specific team goals. Focus on what users do, and how some of their tasks might change, to incorporate **Azure Data Catalog** into their daily routine.
+* **Target training** - Business users don't need to know everything about Data Catalog, so target training to address specific team goals. Focus on what users do, and how some of their tasks might change, to incorporate **Azure Data Catalog** into their daily routine.
* **Be willing to fail** - If the pilot isn't achieving the desired results, reevaluate, and identify areas to change - fix problems in the pilot before moving on to a larger scope. Before your pilot team jumps into using Data Catalog, schedule a kick-off meeting to discuss expectations for the pilot project, and provide initial training. ### Set expectations
-Setting expectations and goals helps business users focus on specific deliverables. To keep the project on track, assign regular (for example: daily or weekly based on the scope and duration of the pilot) homework assignments. One of the most valuable capabilities of Data Catalog is crowdsourcing data assets so that business users can benefit from knowledge of enterprise data. A great homework assignment is for each pilot team member to register or annotate at least one data source they have used. See [Register a data source](data-catalog-get-started.md) and [How to annotate data sources](data-catalog-get-started.md).
+Setting expectations and goals helps business users focus on specific deliverables. To keep the project on track, assign regular (for example: daily or weekly based on the scope and duration of the pilot) homework assignments. One of the most valuable capabilities of Data Catalog is crowdsourcing data assets so that business users can benefit from knowledge of enterprise data. A great homework assignment is for each pilot team member to register or annotate at least one data source they've used. See [Register a data source](data-catalog-get-started.md) and [How to annotate data sources](data-catalog-get-started.md).
Meet with the team on a regular schedule to review some of the annotations. Good annotations about data sources are at the heart of a successful Data Catalog adoption because they provide meaningful data source insights in a central location. Without good annotations, knowledge about data sources remains scattered throughout the enterprise. See [How to annotate data sources](data-catalog-get-started.md).
-And, the ultimate test of the project is whether users can discover and understand the data sources they need to use. Pilot users should regularly test the catalog to ensure that the data sources they use for their day to day work are relevant. When a required data source is missing or not properly annotated, this should serve as a reminder to register additional data sources or to provide additional annotations. This practice does not only add value to the pilot effort but also builds effective habits that carry over to other teams after the pilot is complete.
+And, the ultimate test of the project is whether users can discover and understand the data sources they need to use. Pilot users should regularly test the catalog to ensure that the data sources they use for their day-to-day work are relevant. When a required data source is missing or not properly annotated, this should serve as a reminder to register more data sources or to provide more annotations. This practice not only adds value to the pilot effort but also builds effective habits that carry over to other teams after the pilot is complete.
### Provide training
Training should be enough to get the users started, and tailored to the specific
## Conclusion
-Once your pilot team is running fairly smoothly and you have achieved your initial goals, you should expand Data Catalog adoption to more teams. Apply and refine what you learned from your pilot project to expand Data Catalog throughout your organization.
+Once your pilot team is running fairly smoothly and you've achieved your initial goals, you should expand Data Catalog adoption to more teams. Apply and refine what you learned from your pilot project to expand Data Catalog throughout your organization.
-The early adopters who participated in the pilot can be helpful to get the word out about the benefits of adopting Data Catalog. They can share with other teams how Data Catalog helped their team solve business problems, discover data sources more easily, and share insights about the data sources they use. For example, early adopters on the Adventure Works pilot team could show others how easy it is to find information about Adventure Works data assets that were once hard to find and understand.
+The early adopters who participated in the pilot can be helpful in communicating the benefits of adopting Data Catalog. They can share with other teams how Data Catalog helped their team solve business problems, discover data sources more easily, and share insights about the data sources they use. For example, early adopters on the Adventure Works pilot team could show others how easy it is to find information about Adventure Works data assets that were once hard to find and understand.
This article was about getting started with **Azure Data Catalog** in your organization. We hope you were able to start a Data Catalog pilot project, and expand Data Catalog throughout your organization.
data-catalog Data Catalog Developer Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-developer-concepts.md
Title: Azure Data Catalog developer concepts description: Introduction to the key concepts in Azure Data Catalog conceptual model, as exposed through the Catalog REST API.--++ Previously updated : 08/01/2019 Last updated : 02/16/2022 # Azure Data Catalog developer concepts
Last updated 08/01/2019
Microsoft **Azure Data Catalog** is a fully managed cloud service that provides capabilities for data source discovery and for crowdsourcing data source metadata. Developers can use the service via its REST APIs. Understanding the concepts implemented in the service is important for developers to successfully integrate with **Azure Data Catalog**.
-## Key concepts
+## Key concepts
+ The **Azure Data Catalog** conceptual model is based on four key concepts: The **Catalog**, **Users**, **Assets**, and **Annotations**.
-![Azure Data Catalog conceptual model illustration](./media/data-catalog-developer-concepts/concept2.png)
### Catalog
-A **Catalog** is the top-level container for all the metadata that an organization stores. There is one **Catalog** allowed per Azure Account. Catalogs are tied to an Azure subscription, but only one **Catalog** can be created for any given Azure account, even though an account can have multiple subscriptions.
+
+A **Catalog** is the top-level container for all the metadata that an organization stores. There's one **Catalog** allowed per Azure Account. Catalogs are tied to an Azure subscription, but only one **Catalog** can be created for any given Azure account, even though an account can have multiple subscriptions.
A catalog contains **Users** and **Assets**.
### Users
-Users are security principals that have permissions to perform actions (search the catalog, add, edit or remove items, etc.) in the Catalog.
+
+**Users** are security principals that have permissions to perform actions (search the catalog, add, edit or remove items, etc.) in the Catalog.
There are several different roles a user can have. For information on roles, see the section Roles and Authorization.
Individual users and security groups can be added.
Azure Data Catalog uses Azure Active Directory for identity and access management. Each Catalog user must be a member of the Active Directory for the account.
### Assets

A **Catalog** contains data assets. **Assets** are the unit of granularity managed by the catalog. The granularity of an asset varies by data source. For SQL Server or Oracle Database, an asset can be a Table or a View. For SQL Server Analysis Services, an asset can be a Measure, a Dimension, or a Key Performance Indicator (KPI). For SQL Server Reporting Services, an asset is a Report.
-An **Asset** is the thing you add or remove from a Catalog. It is the unit of result you get back from **Search**.
+An **Asset** is the thing you add or remove from a Catalog. It's the unit of result you get back from **Search**.
An **Asset** is made up of its name, location, type, and annotations that further describe it.
### Annotations
-Annotations are items that represent metadata about Assets.
+
+**Annotations** are items that represent metadata about Assets.
Examples of annotations are description, tags, schema, documentation, etc. See the [Asset Object model section](#asset-object-model) for a full list of the asset types and annotation types.
## Crowdsourcing annotations and user perspective (multiplicity of opinion)
-A key aspect of Azure Data Catalog is how it supports the crowdsourcing of metadata in the system. As opposed to a wiki approach ΓÇô where there is only one opinion and the last writer wins ΓÇô the Azure Data Catalog model allows multiple opinions to live side by side in the system.
+
+A key aspect of Azure Data Catalog is how it supports the crowdsourcing of metadata in the system. As opposed to a wiki approach, where there's only one opinion and the last writer wins, the Azure Data Catalog model allows multiple opinions to live side by side in the system.
This approach reflects the real world of enterprise data where different users can have different perspectives on a given asset:
The UX can then choose how to display the combination. There are three different
* A third pattern is "last writer wins". In this pattern, only the most recent value typed in is shown. friendlyName is an example of this pattern.
## Asset object model

As introduced in the Key Concepts section, the **Azure Data Catalog** object model includes items, which can be assets or annotations. Items have properties, which can be optional or required. Some properties apply to all items. Some properties apply to all assets. Some properties apply only to specific asset types.
### System properties
-<table><tr><td><b>Property Name</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr><tr><td>timestamp</td><td>DateTime</td><td>The last time the item was modified. This field is generated by the server when an item is inserted and every time an item is updated. The value of this property is ignored on input of publish operations.</td></tr><tr><td>ID</td><td>Uri</td><td>Absolute url of the item (read-only). It is the unique addressable URI for the item. The value of this property is ignored on input of publish operations.</td></tr><tr><td>type</td><td>String</td><td>The type of the asset (read-only).</td></tr><tr><td>etag</td><td>String</td><td>A string corresponding to the version of the item that can be used for optimistic concurrency control when performing operations that update items in the catalog. "*" can be used to match any value.</td></tr></table>
+
+|Property name |Data type |Comments|
+|-|--||
+|timestamp |DateTime |The last time the item was modified. This field is generated by the server when an item is inserted and every time an item is updated. The value of this property is ignored on input of publish operations. |
+|ID|Uri |Absolute url of the item (read-only). It's the unique addressable URI for the item. The value of this property is ignored on input of publish operations.|
+|type|String |The type of the asset (read-only).|
+|etag|String |A string corresponding to the version of the item that can be used for optimistic concurrency control when performing operations that update items in the catalog. "*" can be used to match any value.|
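+
+As an illustration of how a client might use the etag for optimistic concurrency, here's a minimal Python sketch. The helper name and header-quoting details are assumptions for illustration, not part of the documented API surface:
+
+```python
+def concurrency_headers(etag):
+    """Build HTTP headers for an optimistic-concurrency update.
+
+    The If-Match header carries the etag read from the item earlier;
+    the service can reject the update if the item has changed since.
+    "*" matches any version (unconditional update).
+    """
+    value = "*" if etag == "*" else f'"{etag}"'
+    return {"Content-Type": "application/json", "If-Match": value}
+
+# Update only if the item still has the version we read earlier:
+headers = concurrency_headers("abc123")
+# Unconditional update:
+any_version = concurrency_headers("*")
+```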
### Common properties

These properties apply to all root asset types and all annotation types.
-<table>
-<tr><td><b>Property Name</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr>
-<tr><td>fromSourceSystem</td><td>Boolean</td><td>Indicates whether item's data is derived from a source system (like SQL Server Database, Oracle Database) or authored by a user.</td></tr>
-</table>
+|Property name |Data type |Comments|
+|-|--||
+|fromSourceSystem |Boolean |Indicates whether the item's data is derived from a source system (like SQL Server Database, Oracle Database) or authored by a user. |
### Common root properties
-<p>
+ These properties apply to all root asset types.
-<table><tr><td><b>Property Name</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr><tr><td>name</td><td>String</td><td>A name derived from the data source location information</td></tr><tr><td>dsl</td><td>DataSourceLocation</td><td>Uniquely describes the data source and is one of the identifiers for the asset. (See dual identity section). The structure of the dsl varies by the protocol and source type.</td></tr><tr><td>dataSource</td><td>DataSourceInfo</td><td>More detail on the type of asset.</td></tr><tr><td>lastRegisteredBy</td><td>SecurityPrincipal</td><td>Describes the user who most recently registered this asset. Contains both the unique ID for the user (the upn) and a display name (lastName and firstName).</td></tr><tr><td>containerID</td><td>String</td><td>ID of the container asset for the data source. This property is not supported for the Container type.</td></tr></table>
+|Property name |Data type |Comments|
+|-|--||
+|name |String |A name derived from the data source location information |
+|dsl |DataSourceLocation |Uniquely describes the data source and is one of the identifiers for the asset. (See dual identity section). The structure of the dsl varies by the protocol and source type. |
+|dataSource |DataSourceInfo |More detail on the type of asset. |
+|lastRegisteredBy |SecurityPrincipal |Describes the user who most recently registered this asset. Contains both the unique ID for the user (the upn) and a display name (lastName and firstName). |
+|containerID |String |ID of the container asset for the data source. This property isn't supported for the Container type. |
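+
+Put together, a minimal root asset body might look like the following Python sketch. All concrete values (server, database, upn) are hypothetical examples, and the exact `dsl` address structure depends on the protocol:
+
+```python
+# A minimal root asset expressed as the JSON body a client might publish.
+# All concrete values (names, server, upn) are hypothetical examples.
+asset = {
+    "properties": {
+        "fromSourceSystem": True,   # derived from the source system
+        "name": "Customer",         # derived from the location information
+        "dsl": {                    # data source location (one of the identifiers)
+            "protocol": "tds",      # SQL Server
+            "address": {"server": "example.contoso.com", "database": "Sales"},
+        },
+        "dataSource": {"sourceType": "SQL Server", "objectType": "Table"},
+        "lastRegisteredBy": {"upn": "user@contoso.com"},
+    },
+}
+```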
### Common non-singleton annotation properties

These properties apply to all non-singleton annotation types (annotations of which multiple instances are allowed per asset).
-<table>
-<tr><td><b>Property Name</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr>
-<tr><td>key</td><td>String</td><td>A user specified key, which uniquely identifies the annotation in the current collection. The key length cannot exceed 256 characters.</td></tr>
-</table>
+|Property name |Data type |Comments|
+|-|--||
+|key |String |A user-specified key, which uniquely identifies the annotation in the current collection. The key length can't exceed 256 characters. |
### Root asset types
-Root asset types are those types that represent the various types of data assets that can be registered in the catalog. For each root type, there is a view, which describes asset and annotations included in the view. View name should be used in the corresponding {view_name} url segment when publishing an asset using REST API.
-<table><tr><td><b>Asset Type (View name)</b></td><td><b>Additional Properties</b></td><td><b>Data Type</b></td><td><b>Allowed Annotations</b></td><td><b>Comments</b></td></tr><tr><td>Table ("tables")</td><td></td><td></td><td>Description<p>FriendlyName<p>Tag<p>Schema<p>ColumnDescription<p>ColumnTag<p> Expert<p>Preview<p>AccessInstruction<p>TableDataProfile<p>ColumnDataProfile<p>ColumnDataClassification<p>Documentation<p></td><td>A Table represents any tabular data. For example: SQL Table, SQL View, Analysis Services Tabular Table, Analysis Services Multidimensional dimension, Oracle Table, etc. </td></tr><tr><td>Measure ("measures")</td><td></td><td></td><td>Description<p>FriendlyName<p>Tag<p>Expert<p>AccessInstruction<p>Documentation<p></td><td>This type represents an Analysis Services measure.</td></tr><tr><td></td><td>measure</td><td>Column</td><td></td><td>Metadata describing the measure</td></tr><tr><td></td><td>isCalculated </td><td>Boolean</td><td></td><td>Specifies if the measure is calculated or not.</td></tr><tr><td></td><td>measureGroup</td><td>String</td><td></td><td>Physical container for measure</td></tr><td>KPI ("kpis")</td><td></td><td></td><td>Description<p>FriendlyName<p>Tag<p>Expert<p>AccessInstruction<p>Documentation</td><td></td></tr><tr><td></td><td>measureGroup</td><td>String</td><td></td><td>Physical container for measure</td></tr><tr><td></td><td>goalExpression</td><td>String</td><td></td><td>An MDX numeric expression or a calculation that returns the target value of the KPI.</td></tr><tr><td></td><td>valueExpression</td><td>String</td><td></td><td>An MDX numeric expression that returns the actual value of the KPI.</td></tr><tr><td></td><td>statusExpression</td><td>String</td><td></td><td>An MDX expression that represents the state of the KPI at a specified point in time.</td></tr><tr><td></td><td>trendExpression</td><td>String</td><td></td><td>An MDX expression that evaluates the value of the KPI over time. 
The trend can be any time-based criterion that is useful in a specific business context.</td>
-<tr><td>Report ("reports")</td><td></td><td></td><td>Description<p>FriendlyName<p>Tag<p>Expert<p>AccessInstruction<p>Documentation<p></td><td>This type represents a SQL Server Reporting Services report </td></tr><tr><td></td><td>assetCreatedDate</td><td>String</td><td></td><td></td></tr><tr><td></td><td>assetCreatedBy</td><td>String</td><td></td><td></td></tr><tr><td></td><td>assetModifiedDate</td><td>String</td><td></td><td></td></tr><tr><td></td><td>assetModifiedBy</td><td>String</td><td></td><td></td></tr><tr><td>Container ("containers")</td><td></td><td></td><td>Description<p>FriendlyName<p>Tag<p>Expert<p>AccessInstruction<p>Documentation<p></td><td>This type represents a container of other assets such as a SQL database, an Azure Blobs container, or an Analysis Services model.</td></tr></table>
+Root asset types are those types that represent the various types of data assets that can be registered in the catalog. For each root type, there's a view that describes the asset and the annotations included in the view. The view name should be used in the corresponding {view_name} URL segment when publishing an asset by using the REST API.
+
+|Asset type (view name) |Additional properties |Data type|Allowed annotations|Comments|
+|-|--||||
+|Table ("tables") | ||Description|A Table represents any tabular data. For example: SQL Table, SQL View, Analysis Services Tabular Table, Analysis Services Multidimensional dimension, Oracle Table, etc.|
+||||FriendlyName||
+||||Tag||
+||||Schema||
+||||ColumnDescription||
+||||ColumnTag||
+||||Expert||
+||||Preview||
+||||AccessInstruction||
+||||TableDataProfile||
+||||ColumnDataProfile||
+||||ColumnDataClassification||
+||||Documentation||
+|Measure ("measures") |||Description|This type represents an Analysis Services measure.|
+||||FriendlyName||
+||||Tag||
+||||Expert||
+||||AccessInstruction||
+||||Documentation||
+||measure|Column||Metadata describing the measure.|
+||isCalculated |Boolean||Specifies if the measure is calculated or not.|
+||measureGroup |String||Physical container for measure.|
+|KPI ("kpis") |||Description||
+||||FriendlyName||
+||||Tag||
+||||Expert||
+||||AccessInstruction||
+||||Documentation||
+||measureGroup|String||Physical container for measure.|
+||goalExpression|String||An MDX numeric expression or a calculation that returns the target value of the KPI.|
+||valueExpression|String||An MDX numeric expression that returns the actual value of the KPI.|
+||statusExpression|String||An MDX expression that represents the state of the KPI at a specified point in time.|
+||trendExpression|String||An MDX expression that evaluates the value of the KPI over time. The trend can be any time-based criterion that is useful in a specific business context.|
+|Report ("reports") |||Description|This type represents a SQL Server Reporting Services report.|
+||||FriendlyName||
+||||Tag||
+||||Expert||
+||||AccessInstruction||
+||||Documentation||
+||assetCreatedDate|String|||
+||assetCreatedBy|String|||
+||assetModifiedDate|String|||
+||assetModifiedBy|String|||
+|Container ("containers") |||Description|This type represents a container of other assets such as a SQL database, an Azure Blobs container, or an Analysis Services model.|
+||||FriendlyName||
+||||Tag||
+||||Expert||
+||||AccessInstruction||
+||||Documentation||
### Annotation types
-Annotation types represent types of metadata that can be assigned to other types within the catalog.
-
-<table>
-<tr><td><b>Annotation Type (Nested view name)</b></td><td><b>Additional Properties</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr>
-
-<tr><td>Description ("descriptions")</td><td></td><td></td><td>This property contains a description for an asset. Each user of the system can add their own description. Only that user can edit the Description object. (Admins and Asset owners can delete the Description object but not edit it). The system maintains users' descriptions separately. Thus there is an array of descriptions on each asset (one for each user who has contributed their knowledge about the asset, in addition to possibly one that contains information derived from the data source).</td></tr>
-<tr><td></td><td>description</td><td>string</td><td>A short description (2-3 lines) of the asset</td></tr>
-
-<tr><td>Tag ("tags")</td><td></td><td></td><td>This property defines a tag for an asset. Each user of the system can add multiple tags for an asset. Only the user who created Tag objects can edit them. (Admins and Asset owners can delete the Tag object but not edit it). The system maintains users' tags separately. Thus there is an array of Tag objects on each asset.</td></tr>
-<tr><td></td><td>tag</td><td>string</td><td>A tag describing the asset.</td></tr>
-
-<tr><td>FriendlyName ("friendlyName")</td><td></td><td></td><td>This property contains a friendly name for an asset. FriendlyName is a singleton annotation - only one FriendlyName can be added to an asset. Only the user who created FriendlyName object can edit it. (Admins and Asset owners can delete the FriendlyName object but not edit it). The system maintains users' friendly names separately.</td></tr>
-<tr><td></td><td>friendlyName</td><td>string</td><td>A friendly name of the asset.</td></tr>
-
-<tr><td>Schema ("schema")</td><td></td><td></td><td>The Schema describes the structure of the data. It lists the attribute (column, attribute, field, etc.) names, types as well other metadata. This information is all derived from the data source. Schema is a singleton annotation - only one Schema can be added for an asset.</td></tr>
-<tr><td></td><td>columns</td><td>Column[]</td><td>An array of column objects. They describe the column with information derived from the data source.</td></tr>
-
-<tr><td>ColumnDescription ("columnDescriptions")</td><td></td><td></td><td>This property contains a description for a column. Each user of the system can add their own descriptions for multiple columns (at most one per column). Only the user who created ColumnDescription objects can edit them. (Admins and Asset owners can delete the ColumnDescription object but not edit it). The system maintains these user's column descriptions separately. Thus there is an array of ColumnDescription objects on each asset (one per column for each user who has contributed their knowledge about the column in addition to possibly one that contains information derived from the data source). The ColumnDescription is loosely bound to the Schema so it can get out of sync. The ColumnDescription might describe a column that no longer exists in the schema. It is up to the writer to keep description and schema in sync. The data source may also have columns description information and they are additional ColumnDescription objects that would be created when running the tool.</td></tr>
-<tr><td></td><td>columnName</td><td>String</td><td>The name of the column this description refers to.</td></tr>
-<tr><td></td><td>description</td><td>String</td><td>a short description (2-3 lines) of the column.</td></tr>
-
-<tr><td>ColumnTag ("columnTags")</td><td></td><td></td><td>This property contains a tag for a column. Each user of the system can add multiple tags for a given column and can add tags for multiple columns. Only the user who created ColumnTag objects can edit them. (Admins and Asset owners can delete the ColumnTag object but not edit it). The system maintains these users'
-column tags separately. Thus there is an array of ColumnTag objects on each asset. The ColumnTag is loosely bound to the schema so it can get out of sync. The ColumnTag might describe a column that no longer exists in the schema. It is up to the writer to keep column tag and schema in sync.</td></tr>
-<tr><td></td><td>columnName</td><td>String</td><td>The name of the column this tag refers to.</td></tr>
-<tr><td></td><td>tag</td><td>String</td><td>A tag describing the column.</td></tr>
-
-<tr><td>Expert ("experts")</td><td></td><td></td><td>This property contains a user who is considered an expert in the data set. The expertsΓÇÖ opinions(descriptions) bubble to the top of the UX when listing descriptions. Each user can specify their own experts. Only that user can edit the experts' object. (Admins and Asset owners can delete the Expert objects but not edit it).</td></tr>
-<tr><td></td><td>expert</td><td>SecurityPrincipal</td><td></td></tr>
-<tr><td>Preview ("previews")</td><td></td><td></td><td>The preview contains a snapshot of the top 20 rows of data for the asset. Preview only make sense for some types of assets (it makes sense for Table but not for Measure).</td></tr>
-<tr><td></td><td>preview</td><td>object[]</td><td>Array of objects that represent a column. Each object has a property mapping to a column with a value for that column for the row.</td></tr>
-
-<tr><td>AccessInstruction ("accessInstructions")</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>mimeType</td><td>string</td><td>The mime type of the content.</td></tr>
-<tr><td></td><td>content</td><td>string</td><td>The instructions for how to get access to this data asset. The content could be a URL, an email address, or a set of instructions.</td></tr>
-
-<tr><td>TableDataProfile ("tableDataProfiles")</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>numberOfRows</td></td><td>int</td><td>The number of rows in the data set</td></tr>
-<tr><td></td><td>size</td><td>long</td><td>The size in bytes of the data set. </td></tr>
-<tr><td></td><td>schemaModifiedTime</td><td>string</td><td>The last time the schema was modified</td></tr>
-<tr><td></td><td>dataModifiedTime</td><td>string</td><td>The last time the data set was modified (data was added, modified, or delete)</td></tr>
-
-<tr><td>ColumnsDataProfile ("columnsDataProfiles")</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>columns</td></td><td>ColumnDataProfile[]</td><td>An array of column data profiles.</td></tr>
-
-<tr><td>ColumnDataClassification ("columnDataClassifications")</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>columnName</td><td>String</td><td>The name of the column this classification refers to.</td></tr>
-<tr><td></td><td>classification</td><td>String</td><td>The classification of the data in this column.</td></tr>
-
-<tr><td>Documentation ("documentation")</td><td></td><td></td><td>A given asset can have only one documentation associated with it.</td></tr>
-<tr><td></td><td>mimeType</td><td>string</td><td>The mime type of the content.</td></tr>
-<tr><td></td><td>content</td><td>string</td><td>The documentation content.</td></tr>
+Annotation types represent types of metadata that can be assigned to other types within the catalog.
-</table>
+|Annotation type (nested view name) |Additional properties |Data type|Comments|
+|-|--|||
+|Description ("descriptions") |||This property contains a description for an asset. Each user of the system can add their own description. Only that user can edit the Description object. (Admins and Asset owners can delete the Description object but not edit it). The system maintains users' descriptions separately. Thus there's an array of descriptions on each asset (one for each user who has contributed their knowledge about the asset, in addition to possibly one that contains information derived from the data source).|
+||description|string|A short description (2-3 lines) of the asset.|
+|Tag ("tags") |||This property defines a tag for an asset. Each user of the system can add multiple tags for an asset. Only the user who created Tag objects can edit them. (Admins and Asset owners can delete the Tag object but not edit it). The system maintains users' tags separately. Thus there's an array of Tag objects on each asset.|
+||tag|string|A tag describing the asset.|
+|FriendlyName ("friendlyName") |||This property contains a friendly name for an asset. FriendlyName is a singleton annotation - only one FriendlyName can be added to an asset. Only the user who created FriendlyName object can edit it. (Admins and Asset owners can delete the FriendlyName object but not edit it). The system maintains users' friendly names separately.|
+||friendlyName|string|A friendly name of the asset.|
+|Schema ("schema") |||The Schema describes the structure of the data. It lists the attribute (column, attribute, field, etc.) names and types, as well as other metadata. This information is all derived from the data source. Schema is a singleton annotation - only one Schema can be added for an asset.|
+||columns|Column[]|An array of column objects. They describe the column with information derived from the data source.|
+|ColumnDescription ("columnDescriptions") |||This property contains a description for a column. Each user of the system can add their own descriptions for multiple columns (at most one per column). Only the user who created ColumnDescription objects can edit them. (Admins and Asset owners can delete the ColumnDescription object but not edit it). The system maintains these users' column descriptions separately. Thus there's an array of ColumnDescription objects on each asset (one per column for each user who has contributed their knowledge about the column, in addition to possibly one that contains information derived from the data source). The ColumnDescription is loosely bound to the Schema so it can get out of sync. The ColumnDescription might describe a column that no longer exists in the schema. It's up to the writer to keep description and schema in sync. The data source may also have column description information, which appears as additional ColumnDescription objects created when running the tool.|
+||columnName|String|The name of the column this description refers to.|
+||description|String|A short description (2-3 lines) of the column.|
+|ColumnTag ("columnTags") |||This property contains a tag for a column. Each user of the system can add multiple tags for a given column and can add tags for multiple columns. Only the user who created ColumnTag objects can edit them. (Admins and Asset owners can delete the ColumnTag object but not edit it). The system maintains these users' column tags separately. Thus there's an array of ColumnTag objects on each asset. The ColumnTag is loosely bound to the schema so it can get out of sync. The ColumnTag might describe a column that no longer exists in the schema. It's up to the writer to keep column tag and schema in sync.|
+||columnName|String|The name of the column this tag refers to.|
+||tag|String|A tag describing the column.|
+|Expert ("experts") |||This property contains a user who is considered an expert in the data set. The experts' opinions (descriptions) bubble to the top of the UX when listing descriptions. Each user can specify their own experts. Only that user can edit the experts' object. (Admins and Asset owners can delete the Expert objects but not edit it).|
+||expert|SecurityPrincipal||
+|Preview ("previews") |||The preview contains a snapshot of the top 20 rows of data for the asset. Preview only makes sense for some types of assets (it makes sense for Table but not for Measure).|
+||preview|object[]|Array of objects that represent a column. Each object has a property mapping to a column with a value for that column for the row.|
+|AccessInstruction ("accessInstructions") ||||
+||mimeType|string|The mime type of the content.|
+||content|string|The instructions for how to get access to this data asset. The content could be a URL, an email address, or a set of instructions.|
+|TableDataProfile ("tableDataProfiles") ||||
+||numberOfRows|int|The number of rows in the data set.|
+||size|long|The size in bytes of the data set.|
+||schemaModifiedTime|string|The last time the schema was modified.|
+||dataModifiedTime|string|The last time the data set was modified (data was added, modified, or deleted).|
+|ColumnsDataProfile ("columnsDataProfiles") ||||
+||columns|ColumnDataProfile[]|An array of column data profiles.|
+|ColumnDataClassification ("columnDataClassifications") ||||
+||columnName|String|The name of the column this classification refers to.|
+||classification|String|The classification of the data in this column.|
+|Documentation ("documentation") |||A given asset can have only one documentation associated with it.|
+||mimeType|string|The mime type of the content.|
+||content|string|The documentation content.|
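+
+As an illustration, a Description annotation as a client might submit it can be sketched in Python. The key and description text are hypothetical; whether a given annotation type requires a key depends on whether it's a singleton:
+
+```python
+# A hypothetical Description annotation body. The "key" property applies
+# to non-singleton annotations; it must be unique in the collection and
+# at most 256 characters long.
+description = {
+    "properties": {
+        "key": "nightly-refresh-note",
+        "fromSourceSystem": False,  # authored by a user, not derived
+        "description": "Main customer table, refreshed nightly.",
+    },
+}
+```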
### Common types
-Common types can be used as the types for properties, but are not Items.
-
-<table>
-<tr><td><b>Common Type</b></td><td><b>Properties</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr>
-<tr><td>DataSourceInfo</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>sourceType</td><td>string</td><td>Describes the type of data source. For example: SQL Server, Oracle Database, etc. </td></tr>
-<tr><td></td><td>objectType</td><td>string</td><td>Describes the type of object in the data source. For example: Table, View for SQL Server.</td></tr>
-
-<tr><td>DataSourceLocation</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>protocol</td><td>string</td><td>Required. Describes a protocol used to communicate with the data source. For example: `tds` for SQL Server, `oracle` for Oracle, etc. Refer to [Data source reference specification - DSL Structure](data-catalog-dsr.md) for the list of currently supported protocols.</td></tr>
-<tr><td></td><td>address</td><td>Dictionary&lt;string, object&gt;</td><td>Required. Address is a set of data specific to the protocol that is used to identify the data source being referenced. The address data scoped to a particular protocol, meaning it is meaningless without knowing the protocol.</td></tr>
-<tr><td></td><td>authentication</td><td>string</td><td>Optional. The authentication scheme used to communicate with the data source. For example: windows, oauth, etc.</td></tr>
-<tr><td></td><td>connectionProperties</td><td>Dictionary&lt;string, object&gt;</td><td>Optional. Additional information on how to connect to a data source.</td></tr>
-
-<tr><td>SecurityPrincipal</td><td></td><td></td><td>The backend does not perform any validation of provided properties against Azure Active Directory during publishing.</td></tr>
-<tr><td></td><td>upn</td><td>string</td><td>Unique email address of user. Must be specified if objectId is not provided or in the context of "lastRegisteredBy" property, otherwise optional.</td></tr>
-<tr><td></td><td>objectId</td><td>Guid</td><td>User or security group Azure Active Directory identity. Optional. Must be specified if upn is not provided, otherwise optional.</td></tr>
-<tr><td></td><td>firstName</td><td>string</td><td>First name of user (for display purposes). Optional. Only valid in the context of "lastRegisteredBy" property. Cannot be specified when providing security principal for "roles", "permissions" and "experts".</td></tr>
-<tr><td></td><td>lastName</td><td>string</td><td>Last name of user (for display purposes). Optional. Only valid in the context of "lastRegisteredBy" property. Cannot be specified when providing security principal for "roles", "permissions" and "experts".</td></tr>
-
-<tr><td>Column</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>name</td><td>string</td><td>Name of the column or attribute.</td></tr>
-<tr><td></td><td>type</td><td>string</td><td>data type of the column or attribute. The Allowable types depend on data sourceType of the asset. Only a subset of types is supported.</td></tr>
-<tr><td></td><td>maxLength</td><td>int</td><td>The maximum length allowed for the column or attribute. Derived from data source. Only applicable to some source types.</td></tr>
-<tr><td></td><td>precision</td><td>byte</td><td>The precision for the column or attribute. Derived from data source. Only applicable to some source types.</td></tr>
-<tr><td></td><td>isNullable</td><td>Boolean</td><td>Whether the column is allowed to have a null value or not. Derived from data source. Only applicable to some source types.</td></tr>
-<tr><td></td><td>expression</td><td>string</td><td>If the value is a calculated column, this field contains the expression that expresses the value. Derived from data source. Only applicable to some source types.</td></tr>
-
-<tr><td>ColumnDataProfile</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>columnName </td><td>string</td><td>The name of the column</td></tr>
-<tr><td></td><td>type </td><td>string</td><td>The type of the column</td></tr>
-<tr><td></td><td>min </td><td>string</td><td>The minimum value in the data set</td></tr>
-<tr><td></td><td>max </td><td>string</td><td>The maximum value in the data set</td></tr>
-<tr><td></td><td>avg </td><td>double</td><td>The average value in the data set</td></tr>
-<tr><td></td><td>stdev </td><td>double</td><td>The standard deviation for the data set</td></tr>
-<tr><td></td><td>nullCount </td><td>int</td><td>The count of null values in the data set</td></tr>
-<tr><td></td><td>distinctCount </td><td>int</td><td>The count of distinct values in the data set</td></tr>
-</table>
+
+Common types can be used as the types for properties, but aren't Items.
+
+|Common type |Properties |Data type|Comments|
+|--|--|--|--|
+|DataSourceInfo|sourceType|string|Describes the type of data source. For example: SQL Server, Oracle Database, etc. |
+||objectType|string|Describes the type of object in the data source. For example: Table, View for SQL Server.|
+|DataSourceLocation|protocol|string|Required. Describes a protocol used to communicate with the data source. For example: `tds` for SQL Server, `oracle` for Oracle, etc. Refer to [Data source reference specification - DSL Structure](data-catalog-dsr.md) for the list of currently supported protocols.|
+||address|Dictionary\<string,object\>|Required. Address is a set of data specific to the protocol that is used to identify the data source being referenced. The address data is scoped to a particular protocol, meaning it's meaningless without knowing the protocol.|
+||authentication|string|Optional. The authentication scheme used to communicate with the data source. For example: windows, oauth, etc.|
+||connectionProperties|Dictionary\<string,object\>|Optional. Additional information on how to connect to a data source.|
+|SecurityPrincipal|||The backend doesn't perform any validation of provided properties against Azure Active Directory during publishing.|
+||upn|string|Unique email address of user. Must be specified if objectId isn't provided or in the context of the "lastRegisteredBy" property; otherwise optional.|
+||objectId|Guid|User or security group Azure Active Directory identity. Must be specified if upn isn't provided; otherwise optional.|
+||firstName|string|First name of user (for display purposes). Optional. Only valid in the context of the "lastRegisteredBy" property. Can't be specified when providing a security principal for "roles", "permissions" and "experts".|
+||lastName|string|Last name of user (for display purposes). Optional. Only valid in the context of the "lastRegisteredBy" property. Can't be specified when providing a security principal for "roles", "permissions" and "experts".|
+|Column|name|string|Name of the column or attribute.|
+||type|string|Data type of the column or attribute. The allowable types depend on the data sourceType of the asset. Only a subset of types is supported.|
+||maxLength|int|The maximum length allowed for the column or attribute. Derived from data source. Only applicable to some source types.|
+||precision|int|The precision for the column or attribute. Derived from data source. Only applicable to some source types.|
+||isNullable|Boolean|Whether the column is allowed to have a null value or not. Derived from data source. Only applicable to some source types.|
+||expression|string|If the value is a calculated column, this field contains the expression that expresses the value. Derived from data source. Only applicable to some source types.|
+|ColumnDataProfile|columnName|string|Name of the column.|
+||type|string|The type of the column.|
+||min|string|The minimum value in the data set.|
+||max|string|The maximum value in the data set.|
+||avg|double|The average value in the data set.|
+||stdev|double|The standard deviation for the data set.|
+||nullCount|int|The count of null values in the data set.|
+||distinctCount|int|The count of distinct values in the data set.|
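To make the Column and ColumnDataProfile shapes concrete, here is a minimal sketch of how they might appear in an asset payload. The property names come from the tables above; the surrounding object structure and the values are illustrative assumptions, not an excerpt from the service.

```json
{
  "columns": [
    {
      "name": "CustomerId",
      "type": "int",
      "maxLength": 4,
      "precision": 10,
      "isNullable": false
    }
  ],
  "dataProfiles": [
    {
      "columnName": "CustomerId",
      "type": "int",
      "min": "1",
      "max": "98765",
      "avg": 49383.2,
      "stdev": 28510.4,
      "nullCount": 0,
      "distinctCount": 98765
    }
  ]
}
```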
## Asset identity

Azure Data Catalog uses the "protocol" and identity properties from the "address" property bag of the DataSourceLocation "dsl" property to generate the identity of the asset, which is used to address the asset inside the Catalog. For example, the Tabular Data Stream (TDS) protocol has identity properties "server", "database", "schema", and "object". The combinations of the protocol and the identity properties are used to generate the identity of the SQL Server Table asset. Azure Data Catalog provides several built-in data source protocols, which are listed at [Data source reference specification - DSL Structure](data-catalog-dsr.md). The set of supported protocols can be extended programmatically (refer to the Data Catalog REST API reference). Administrators of the Catalog can register custom data source protocols. The following table describes the properties needed to register a custom protocol.

### Custom data source protocol specification
-<table>
-<tr><td><b>Type</b></td><td><b>Properties</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr>
-<tr><td>DataSourceProtocol</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>namespace</td><td>string</td><td>The namespace of the protocol. Namespace must be from 1 to 255 characters long, contain one or more non-empty parts separated by dot (.). Each part must be from 1 to 255 characters long, start with a letter and contain only letters and numbers.</td></tr>
-<tr><td></td><td>name</td><td>string</td><td>The name of the protocol. Name must be from 1 to 255 characters long, start with a letter and contain only letters, numbers, and the dash (-) character.</td></tr>
-<tr><td></td><td>identityProperties</td><td>DataSourceProtocolIdentityProperty[]</td><td>List of identity properties, must contain at least one, but no more than 20 properties. For example: "server", "database", "schema", "object" are identity properties of the "tds" protocol.</td></tr>
-<tr><td></td><td>identitySets</td><td>DataSourceProtocolIdentitySet[]</td><td>List of identity sets. Defines sets of identity properties, which represent valid asset's identity. Must contain at least one, but no more than 20 sets. For example: {"server", "database", "schema" and "object"} is an identity set for the TDS protocol, which defines identity of SQL Server Table asset.</td></tr>
+There are three types of data source protocol specifications. Each type is listed below, followed by a table of its properties.
+
+#### DataSourceProtocol
+
+|Properties |Data type|Comments|
+|--|--|--|
+|namespace|string|The namespace of the protocol. Namespace must be from 1 to 255 characters long, contain one or more non-empty parts separated by dot (.). Each part must be from 1 to 255 characters long, start with a letter and contain only letters and numbers. |
+|name|string|The name of the protocol. Name must be from 1 to 255 characters long, start with a letter and contain only letters, numbers, and the dash (-) character.|
+|identityProperties|DataSourceProtocolIdentityProperty[]|List of identity properties, must contain at least one, but no more than 20 properties. For example: "server", "database", "schema", "object" are identity properties of the "tds" protocol.|
+|identitySets|DataSourceProtocolIdentitySet[]|List of identity sets. Defines sets of identity properties, which represent valid asset's identity. Must contain at least one, but no more than 20 sets. For example: {"server", "database", "schema" and "object"} is an identity set for the TDS protocol, which defines identity of SQL Server Table asset.|
-<tr><td>DataSourceProtocolIdentityProperty</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>name</td><td>string</td><td>The name of the property. Name must be from 1 to 100 characters long, start with a letter and can contain only letters and numbers.</td></tr>
-<tr><td></td><td>type</td><td>string</td><td>The type of the property. Supported values: "bool", boolean", "byte", "guid", "int", "integer", "long", "string", "url"</td></tr>
-<tr><td></td><td>ignoreCase</td><td>bool</td><td>Indicates whether case should be ignored when using property's value. Can only be specified for properties with "string" type. Default value is false.</td></tr>
-<tr><td></td><td>urlPathSegmentsIgnoreCase</td><td>bool[]</td><td>Indicates whether case should be ignored for each segment of the url's path. Can only be specified for properties with "url" type. Default value is [false].</td></tr>
+#### DataSourceProtocolIdentityProperty
-<tr><td>DataSourceProtocolIdentitySet</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>name</td><td>string</td><td>The name of the identity set.</td></tr>
-<tr><td></td><td>properties</td><td>string[]</td><td>The list of identity properties included into this identity set. It cannot contain duplicates. Each property referenced by identity set must be defined in the list of "identityProperties" of the protocol.</td></tr>
+|Properties |Data type|Comments|
+|--|--|--|
+|name|string|The name of the property. Name must be from 1 to 100 characters long, start with a letter and can contain only letters and numbers.|
+|type|string|The type of the property. Supported values: "bool", "boolean", "byte", "guid", "int", "integer", "long", "string", "url".|
+|ignoreCase|bool|Indicates whether case should be ignored when using property's value. Can only be specified for properties with "string" type. Default value is false.|
+|urlPathSegmentsIgnoreCase|bool[]|Indicates whether case should be ignored for each segment of the url's path. Can only be specified for properties with "url" type. Default value is [false].|
-</table>
+#### DataSourceProtocolIdentitySet
+
+|Properties |Data type|Comments|
+|--|--|--|
+|name|string|The name of the identity set.|
+|properties|string[]|The list of identity properties included in this identity set. It can't contain duplicates. Each property referenced by an identity set must be defined in the list of "identityProperties" of the protocol.|
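As an illustrative sketch of the three types working together, a hypothetical custom protocol registration might look like this. The `contoso-store` protocol, its namespace, and the property values are invented for illustration; only the field names are taken from the tables above.

```json
{
  "namespace": "com.contoso.protocols",
  "name": "contoso-store",
  "identityProperties": [
    { "name": "server", "type": "string", "ignoreCase": true },
    { "name": "database", "type": "string", "ignoreCase": true },
    { "name": "object", "type": "string", "ignoreCase": false }
  ],
  "identitySets": [
    { "name": "table", "properties": [ "server", "database", "object" ] }
  ]
}
```

For comparison, an asset registered under the built-in `tds` protocol carries its identity properties in the "address" bag of the "dsl" property, as described under "Asset identity". The address values below are invented for illustration:

```json
{
  "dsl": {
    "protocol": "tds",
    "authentication": "windows",
    "address": {
      "server": "sql01.contoso.com",
      "database": "Sales",
      "schema": "dbo",
      "object": "Customers"
    }
  }
}
```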
## Roles and authorization

Microsoft Azure Data Catalog provides authorization capabilities for CRUD operations on assets and annotations.

The Azure Data Catalog uses two authorization mechanisms:

* Role-based authorization
* Permission-based authorization

### Roles
-There are three roles: **Administrator**, **Owner**, and **Contributor**. Each role has its scope and rights, which are summarized in the following table.
-<table><tr><td><b>Role</b></td><td><b>Scope</b></td><td><b>Rights</b></td></tr><tr><td>Administrator</td><td>Catalog (all assets/annotations in the Catalog)</td><td>Read
-Delete
-ViewRoles
+There are three roles: **Administrator**, **Owner**, and **Contributor**. Each role has its scope and rights, which are summarized in the following table.
-ChangeOwnership
-ChangeVisibility
-ViewPermissions</td></tr><tr><td>Owner</td><td>Each asset (root item)</td><td>Read
-Delete
-ViewRoles
+|Role |Scope |Rights|
+|--|--|--|
+|Administrator|Catalog (all assets/annotations in the Catalog)|Read, Delete, ViewRoles, ChangeOwnership, ChangeVisibility, ViewPermissions|
+|Owner|Each asset (root item)|Read, Delete, ViewRoles, ChangeOwnership, ChangeVisibility, ViewPermissions|
+|Contributor|Each individual asset and annotation|Read*, Update, Delete, ViewRoles|
-ChangeOwnership
-ChangeVisibility
-ViewPermissions</td></tr><tr><td>Contributor</td><td>Each individual asset and annotation</td><td>Read
-Update
-Delete
-ViewRoles
-Note: all the rights are revoked if the Read right on the item is revoked from the Contributor</td></tr></table>
+> [!NOTE]
+> *All the rights are revoked if the Read right on the item is revoked from the Contributor.
> [!NOTE]
> **Read**, **Update**, **Delete**, **ViewRoles** rights are applicable to any item (asset or annotation) while **TakeOwnership**, **ChangeOwnership**, **ChangeVisibility**, **ViewPermissions** are only applicable to the root asset.
->
> **Delete** right applies to an item and any subitems or single item underneath it. For example, deleting an asset also deletes any annotations for that asset.
->
### Permissions

Permission is a list of access control entries. Each access control entry assigns a set of rights to a security principal. Permissions can only be specified on an asset (that is, a root item) and apply to the asset and any subitems. During the **Azure Data Catalog** preview, only the **Read** right is supported in the permissions list, to enable the scenario of restricting the visibility of an asset.
By default, any authenticated user has the **Read** right for any item in the catalog, unless visibility is restricted to the set of principals in the permissions.

## REST API

**PUT** and **POST** view item requests can be used to control roles and permissions: in addition to the item payload, two system properties can be specified, **roles** and **permissions**.

> [!NOTE]
> **permissions** is only applicable to a root item.
->
> The **Owner** role is only applicable to a root item.
->
> By default, when an item is created in the catalog, its **Contributor** is set to the currently authenticated user. If the item should be updatable by everyone, **Contributor** should be set to the &lt;Everyone&gt; special security principal in the **roles** property when the item is first published (refer to the following example). **Contributor** cannot be changed and stays the same during the lifetime of an item (even an **Administrator** or **Owner** doesn't have the right to change the **Contributor**). The only value supported for the explicit setting of the **Contributor** is &lt;Everyone&gt;: **Contributor** can only be a user who created an item or &lt;Everyone&gt;.
->
### Examples

**Set Contributor to &lt;Everyone&gt; when publishing an item.**

The special security principal &lt;Everyone&gt; has objectId "00000000-0000-0000-0000-000000000201".

**POST** https:\//api.azuredatacatalog.com/catalogs/default/views/tables/?api-version=2016-03-30

> [!NOTE]
> Some HTTP client implementations may automatically reissue requests in response to a 302 from the server, but typically strip Authorization headers from the request. Since the Authorization header is required to make requests to Azure Data Catalog, you must ensure the Authorization header is still provided when reissuing a request to a redirect location specified by Azure Data Catalog. The following sample code demonstrates it using the .NET HttpWebRequest object.
->
-**Body**
+#### Body
+ ```json { "roles": [
> [!NOTE]
> In PUT it's not required to specify an item payload in the body: PUT can be used to update just roles and/or permissions.
->
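For instance, a **roles** property that sets **Contributor** to the &lt;Everyone&gt; special security principal might be shaped as follows. This is a hedged sketch: the `members`/`objectId` layout is an assumption about the payload shape, and the objectId is the &lt;Everyone&gt; value given earlier.

```json
{
  "roles": [
    {
      "role": "Contributor",
      "members": [
        { "objectId": "00000000-0000-0000-0000-000000000201" }
      ]
    }
  ]
}
```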
## Next steps

[Azure Data Catalog REST API reference](/rest/api/datacatalog/)
data-catalog Data Catalog How To Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-documentation.md
Title: How to document data sources in Azure Data Catalog description: How-to article highlighting how to document data assets in Azure Data Catalog.--++ Previously updated : 08/01/2019 Last updated : 02/17/2022

# How to document data sources in Azure Data Catalog

[!INCLUDE [Azure Purview redirect](../../includes/data-catalog-use-purview.md)]

## Introduction

**Microsoft Azure Data Catalog** is a fully managed cloud service that serves as a system of registration and system of discovery for enterprise data sources. In other words, **Azure Data Catalog** is all about helping people discover, *understand*, and use data sources, and helping organizations to get more value from their existing data. When a data source is registered with **Azure Data Catalog**, its metadata is copied and indexed by the service, but the story doesn't end there. **Azure Data Catalog** also allows users to provide their own complete documentation that can describe the usage and common scenarios for the data source.
In [How to annotate data sources](data-catalog-how-to-annotate.md), you learn th
Tags and descriptions are great for simple annotations. However, to help data consumers better understand the use of a data source, and business scenarios for a data source, an expert can provide complete, detailed documentation. It's easy to document a data source. Select a data asset or container, and choose **Documentation**.
-![Documentation tab in a Data Catalog](media/data-catalog-documentation/data-catalog-documentation.png)
## Documenting data assets

**Azure Data Catalog** documentation allows you to use your Data Catalog as a content repository to create a complete narrative of your data assets. You can explore detailed content that describes containers and tables. If you already have content in another content repository, such as SharePoint or a file share, you can add links to the asset documentation that reference this existing content. This feature makes your existing documents more discoverable.

> [!NOTE]
> Documentation is not included in the search index.
->
-![Documentation tab and hyperlink to web link](media/data-catalog-documentation/data-catalog-documentation2.png)
The level of documentation can range from describing the characteristics and value of a data asset container to a detailed description of table schema within a container. The level of documentation provided should be driven by your business needs. But in general, here are a few pros and cons of documenting data assets:
* Document containers and tables: Most comprehensive approach, but might introduce more maintenance of the documents.

## Summary

Documenting data sources with **Azure Data Catalog** can create a narrative about your data assets in as much detail as you need. By using links, you can link to content stored in an existing content repository, which brings your existing docs and data assets together. Once your users discover appropriate data assets, they can have a complete set of documentation.
data-catalog Data Catalog Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-samples.md
Title: Azure Data Catalog developer samples description: This article provides an overview of the available developer samples for the Data Catalog REST API. --++ Previously updated : 08/01/2019 Last updated : 02/16/2022 # Azure Data Catalog developer samples
Get started developing Azure Data Catalog apps using the Data Catalog REST API.
* [Get started with Azure Data Catalog](https://github.com/Azure-Samples/data-catalog-dotnet-get-started/) The get started sample shows you how to authenticate with Azure AD to Register, Search, and Delete a data asset using the Data Catalog REST API.
-
+ * [Get started with Azure Data Catalog using Service Principal](https://github.com/Azure-Samples/data-catalog-dotnet-service-principal-get-started/) This sample shows you how to register, search, and delete a data asset using the Data Catalog REST API. This sample uses the Service Principal authentication.
Get started developing Azure Data Catalog apps using the Data Catalog REST API.
* [Publish relationships into Azure Data Catalog](https://github.com/Azure-Samples/data-catalog-dotnet-publish-relationships/) This sample shows how you can programmatically publish relationship information to a data catalog.
-
+
## Next steps

[Azure Data Catalog REST API reference](/rest/api/datacatalog/)
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-overview.md
Mapping data flows are available in the following regions in ADF:
| Germany Non-Regional (Sovereign) | |
| Germany North (Public) | |
| Germany Northeast (Sovereign) | |
-| Germany West Central (Public) | |
+| Germany West Central (Public) | ✓ |
| Japan East | ✓ |
-| Japan West | |
+| Japan West | ✓ |
| Korea Central | ✓ |
| Korea South | |
| North Central US | ✓ |
Mapping data flows are available in the following regions in ADF:
| South Africa North | ✓ |
| South Africa West | |
| South Central US | |
-| South India | |
+| South India | ✓ |
| Southeast Asia | ✓ |
| Switzerland North | ✓ |
| Switzerland West | |
Mapping data flows are available in the following regions in ADF:
| US Gov Virginia | ✓ |
| West Central US | |
| West Europe | ✓ |
-| West India | |
+| West India | ✓ |
| West US | ✓ |
| West US 2 | ✓ |
+| West US 3 | ✓ |
## Next steps
data-factory Connector Troubleshoot Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-sharepoint-online-list.md
Previously updated : 10/01/2021 Last updated : 01/25/2022
This article provides suggestions to troubleshoot common problems with the Share
- **Recommendation**: Check your registered application (service principal ID) and key to see whether they're set correctly.
+## Connection failed after granting permission in SharePoint Online List
+
+### Symptoms
+
+You granted permission to your data factory in SharePoint Online List, but the connection still fails with the following error message:
+
+`Failed to get metadata of odata service, please check if service url and credential is correct and your application has permission to the resource. Expected status code: 200, actual status code: Unauthorized, response is : {"error":"invalid_request","error_description":"Token type is not allowed."}.`
+
+### Cause
+
+The SharePoint Online List connector uses ACS to acquire the access token that grants access to other applications. However, for tenants created after November 7, 2018, ACS is disabled by default.
+
+### Recommendation
+
+You need to enable ACS to acquire the access token. Take the following steps:
+
+1. Download [SharePoint Online Management Shell](https://www.microsoft.com/download/details.aspx?id=35588), and ensure that you have a tenant admin account.
+1. Run the following command in the SharePoint Online Management Shell. Replace `<tenant name>` with your tenant name and add `-admin` after it.
+
+ ```powershell
+ Connect-SPOService -Url https://<tenant name>-admin.sharepoint.com/
+ ```
+1. Enter your tenant admin information in the pop-up authentication window.
+1. Run the following command:
+
+ ```powershell
+ Set-SPOTenant -DisableCustomAppAuthentication $false
+ ```
+ :::image type="content" source="./media/connector-troubleshoot-guide/sharepoint-online-management-shell-command.png" alt-text="Screenshot of the Set-SPOTenant command in the SharePoint Online Management Shell.":::
+
+1. Use ACS to get the access token.
++
## Next steps

For more troubleshooting help, try these resources:
databox-online Azure Stack Edge Gpu 2202 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2202-release-notes.md
This article applies to the **Azure Stack Edge 2202** release, which maps to sof
## What's new
-The 2202 release introduces clustering for Azure Stack Edge. You can now deploy a two-node device cluster in addition to a single node device. The clustering feature is in preview and is available only for the Azure Stack Edge Pro GPU devices.
+The 2202 release has the following features and enhancements:
-For more information, see [What is clustering on Azure Stack Edge?](azure-stack-edge-gpu-clustering-overview.md).
+- **Clustering support** - This release introduces clustering support for Azure Stack Edge. You can now deploy a two-node device cluster in addition to a single node device. The clustering feature is in preview and is available only for the Azure Stack Edge Pro GPU devices.
+ For more information, see [What is clustering on Azure Stack Edge?](azure-stack-edge-gpu-clustering-overview.md).
-<!--## Issues fixed in 2202 release
+- **Password reset extension** - Starting this release, password reset extension for both Windows and Linux virtual machines (VMs) are enabled.
+- **VM improvements** - A new VM size F12 was added in this release.
+- **Multi-Access Edge Computing (MEC) and Virtual Network Functions (VNF) improvements**:
+ - In this release, VM create and delete operations within VNF create and delete were parallelized. This has significantly reduced the creation time for VNFs that contain multiple VMs.
+ - The VHD ingestion job resource cleanup was moved out of VNF create and delete. This reduced the VNF creation and deletion times.
+- **Updates for Azure Arc and Edge container registry** - Azure Arc and Edge container registry versions were updated. For more information, see [About updates](azure-stack-edge-gpu-install-update.md#about-latest-update).
+- **Security fixes** - Starting this release, a pod security policy is set up on the Kubernetes cluster on your Azure Stack Edge device. If you are using root privileges in your containerized solution, you may experience some change in the behavior. No action is required on your part.
+++
+## Issues fixed in 2202 release
The following table lists the issues that were release noted in previous releases and fixed in the current release.

| No. | Feature | Issue |
| --- | --- | --- |
-|**1.**|Multi-Access Edge Compute | In previous releases, the Azure Stack Edge device did not send VNF operation results back to the Azure Network Function Manager, owing to the MEC Operation Manager (a component of MEC agent) being reset. |-->
+|**1.**|Azure Arc | In the previous releases, there was a bug in the proxy implementation that resulted in Azure Arc not functioning properly. In this version, a web proxy bypass list was added to the Azure Arc *no_proxy* list. |
## Known issues in 2202 release
databox-online Azure Stack Edge Gpu Deploy Arc Kubernetes Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md
Previously updated : 10/05/2021 Last updated : 02/17/2022
Before you can enable Azure Arc on Kubernetes cluster, make sure that you have c
- You can have any other client with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device) as well. This article describes the procedure when using a Windows client.
+
1. You have completed the procedure described in [Access the Kubernetes cluster on Azure Stack Edge Pro device](azure-stack-edge-gpu-create-kubernetes-cluster.md). You have:

    - Installed `kubectl` on the client.
You can also register resource providers via the `az cli`. For more information,
`az ad sp create-for-rbac --name "<Informative name for service principal>"`
- For information on how to log into the `az cli`, [Start Cloud Shell in Azure portal](../cloud-shell/quickstart-powershell.md#start-cloud-shell)
+ For information on how to sign in to the `az cli`, see [Start Cloud Shell in Azure portal](../cloud-shell/quickstart-powershell.md#start-cloud-shell). If using `az cli` on a local client to create the service principal, make sure that you are running version 2.25 or later.
Here is an example.
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
The current update is Update 2202. This update installs two updates, the device
For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2202-release-notes.md).
-**To apply 2202 update, your device must be running 2106.**
+**To apply 2202 update, your device must be running 2106 or later.**
- If you are not running the minimal supported version, you'll see this error: *Update package cannot be installed as its dependencies are not met*.
- You can update to 2106 from an older version and then install 2202.
The procedure to update an Azure Stack Edge is the same whether it is a single-n
- **Single node** - For a single node device, installing an update or hotfix is disruptive and will restart your device. Your device will experience a downtime for the entire duration of the update. -- **Two-node** - For a two-node cluster, this is an optimized update. The two-node cluster may experience short, intermittent disruptions while the update is in progress. We recommend that you shouldn't perform any operations on the other node when update is in progress on the first node of the cluster.
+- **Two-node** - For a two-node cluster, this is an optimized update. The two-node cluster may experience short, intermittent disruptions while the update is in progress. We recommend that you don't perform any operations on the device node while the update is in progress.
The Kubernetes worker VMs will go down when a node goes down. The Kubernetes master VM will fail over to the other node. Workloads will continue to run. For more information, see [Kubernetes failover scenarios for Azure Stack Edge](azure-stack-edge-gpu-kubernetes-failover-scenarios.md).
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 02/14/2022 Last updated : 02/16/2022 # Overview of Microsoft Defender for Containers
When reviewing the outstanding recommendations for your container-related resour
-### Workload protection best-practices using Kubernetes admission control
+### Environment hardening
For a bundle of recommendations to protect the workloads of your Kubernetes containers, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma). By default, auto provisioning is enabled when you enable Defender for Containers.
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Workload protections for your Kubernetes workloads description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes workload protection security recommendations Previously updated : 02/15/2022 Last updated : 02/16/2022 # Protect your Kubernetes workloads
If you disabled any of the default protections when you enabled Microsoft Defend
## Deploy the add-on to specified clusters
-You can manually configure the Kubernetes workload add-on, or extension protection through the Recommendations page. This can be accomplished by remediating the `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters` recommendation.
+You can manually configure the Kubernetes workload add-on or extension protection through the Recommendations page. This can be accomplished by remediating either the `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters` recommendation or the `Azure policy extension for Kubernetes should be installed and enabled on your clusters` recommendation.
**To deploy the add-on to specified clusters**:
-1. From the recommendations page, search for the recommendation `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters`.
+1. From the recommendations page, search for the recommendation `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters`, or `Azure policy extension for Kubernetes should be installed and enabled on your clusters`.
:::image type="content" source="./media/defender-for-kubernetes-usage/recommendation-to-install-policy-add-on-for-kubernetes.png" alt-text="Recommendation **Azure Policy add-on for Kubernetes should be installed and enabled on your clusters**.":::
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Title: Archive of what's new in Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud from six months ago and earlier. Previously updated : 02/13/2022 Last updated : 02/17/2022 # Archive for what's new in Defender for Cloud?
When the Azure Policy add-on for Kubernetes is installed on your Azure Kubernete
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#workload-protection-best-practices-using-kubernetes-admission-control).
+Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#environment-hardening).
> [!NOTE] > While the recommendations were in preview, they didn't render an AKS cluster resource unhealthy, and they weren't included in the calculations of your secure score. With this GA announcement, they're included in the score calculation. If you haven't remediated them already, this might result in a slight impact on your secure score. Remediate them wherever possible as described in [Remediate recommendations in Azure Security Center](implement-security-recommendations.md).
When you've installed the Azure Policy add-on for Kubernetes on your AKS cluster
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#workload-protection-best-practices-using-kubernetes-admission-control).
+Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#environment-hardening).
### Vulnerability assessment findings are now available in continuous export
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 01/20/2022 Last updated : 02/17/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## February 2022
+
+Updates in February include:
+
+- [Kubernetes workload protection for Arc enabled K8s clusters](#kubernetes-workload-protection-for-arc-enabled-k8s-clusters)
+
+### Kubernetes workload protection for Arc enabled K8s clusters
+
+Defender for Containers for Kubernetes workloads previously only protected AKS. We have now extended the protective coverage to include Azure Arc enabled Kubernetes clusters.
+
+Learn how to [set up your Kubernetes workload protection](kubernetes-workload-protections.md#set-up-your-workload-protection) for AKS and Azure Arc enabled Kubernetes clusters.
## January 2022
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
In Azure DevTest Labs, you can configure a remote desktop gateway for your lab t
This approach is more secure because the lab user authenticates directly to the gateway machine or can use company credentials on a domain-joined gateway machine to connect to their machines. The lab also supports using token authentication to the gateway machine that allows users to connect to their lab virtual machines without having the RDP port exposed to the internet. This article walks through an example on how to set up a lab that uses token authentication to connect to lab machines.
+If you're looking to connect through Bastion, read [Enable browser connection to DevTest Labs VMs with Azure Bastion](enable-browser-connection-lab-virtual-machines.md).
+ ## Architecture of the solution ![Architecture of the solution](./media/configure-lab-remote-desktop-gateway/architecture.png)
Follow these steps to set up a sample solution for the remote desktop gateway fa
Once both gateway and lab are configured, the connection file created when the lab user clicks on the **Connect** will automatically include information necessary to connect using token authentication. ## Next steps
-See the following article to learn more about Remote Desktop
+See the following article to learn more about Remote Desktop
event-grid Cloudevents Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloudevents-schema.md
Title: Use Azure Event Grid with events in CloudEvents schema
description: Describes how to use the CloudEvents schema for events in Azure Event Grid. The service supports events in the JSON implementation of CloudEvents. Last updated 07/22/2021
+ms.devlang: csharp, javascript
event-grid Resize Images On Storage Blob Upload Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/resize-images-on-storage-blob-upload-event.md
Title: 'Tutorial: Use Azure Event Grid to automate resizing uploaded images'
description: 'Tutorial: Azure Event Grid can trigger on blob uploads in Azure Storage. You can use this to send image files uploaded to Azure Storage to other services, such as Azure Functions, for resizing and other improvements.' Last updated 09/28/2021
+ms.devlang: csharp, javascript
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 01/20/2022 Last updated : 02/17/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall Standard has the following known issues:
|Unable to see Network Rule Name in Azure Firewall Logs|Azure Firewall network rule log data does not show the Rule name for network traffic.|A feature is being investigated to support this.| |XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall. This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.| | Firewall logs (Resource specific tables - Preview) | Resource specific log queries are in preview mode and aren't currently supported. | A fix is being investigated.|
-|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
+|Can't upgrade to Premium with Availability Zones in the Southeast Asia region.|You can't currently upgrade to Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy a new Premium firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
### Azure Firewall Premium
Untrusted customer signed certificates|Customer signed certificates are not trus
|Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|A fix is being investigated.| |TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.| |KeyVault Private Endpoint|KeyVault supports Private Endpoint access to limit its network exposure. Trusted Azure Services can bypass this limitation if an exception is configured as described in the [KeyVault documentation](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). Azure Firewall is not currently listed as a trusted service and can't access the Key Vault.|A fix is being investigated.|
-|IDPS Bypass list|IDPS Bypass list doesn't support IP Groups.|A fix is being investigated.|
+|IDPS Bypass list|If you enable IDPS (either 'Alert' or 'Alert and Deny' mode) and actively delete one or more existing rules in the IDPS Bypass list, you may be subject to packet loss correlated with the deleted rules' source/destination IP addresses. |A fix is being investigated.<br><br>You can respond to this issue by taking one of the following actions:<br><br>- Do a start/stop procedure as explained [here](firewall-faq.yml#how-can-i-stop-and-start-azure-firewall).<br>- Open a support ticket and we will re-image your affected firewall virtual machines.|
+|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
++ ## Next steps
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
related resources to match.
- **Type** (required) - Specifies the type of the related resource to match.
- - If **details.type** is a resource type underneath the **if** condition resource, the policy
+ - If **type** is a resource type underneath the **if** condition resource, the policy
queries for resources of this **type** within the scope of the evaluated resource. Otherwise,
- policy queries within the same resource group as the evaluated resource.
+ policy queries within the same resource group or subscription as the evaluated resource depending on the **existenceScope**.
- **Name** (optional) - Specifies the exact name of the resource to match and causes the policy to fetch one specific resource instead of all resources of the specified type.
related resources to match and the template deployment to execute.
- **Type** (required) - Specifies the type of the related resource to match.
- - Starts by trying to fetch a resource underneath the **if** condition resource, then queries
- within the same resource group as the **if** condition resource.
+ - If **type** is a resource type underneath the **if** condition resource, the policy
+ queries for resources of this **type** within the scope of the evaluated resource. Otherwise,
+ policy queries within the same resource group or subscription as the evaluated resource depending on the **existenceScope**.
- **Name** (optional) - Specifies the exact name of the resource to match and causes the policy to fetch one specific resource instead of all resources of the specified type.
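To make the interaction between **type** and **existenceScope** concrete, here is a minimal sketch of an `auditIfNotExists` policy rule (illustrative names and values, not from the article): because `Microsoft.Network/networkWatchers` is not a child type of the evaluated virtual network, the policy queries for it at the scope given by **existenceScope**, here the subscription.

```json
{
  "if": {
    "field": "type",
    "equals": "Microsoft.Network/virtualNetworks"
  },
  "then": {
    "effect": "auditIfNotExists",
    "details": {
      "type": "Microsoft.Network/networkWatchers",
      "existenceScope": "subscription",
      "existenceCondition": {
        "field": "location",
        "equals": "[field('location')]"
      }
    }
  }
}
```

If **existenceScope** were `resourceGroup` instead, the query would be limited to the evaluated resource's resource group.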
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/how-to/get-resource-changes.md
Title: Get resource changes
-description: Understand how to find when a resource was changed, get a list of the properties that changed, and evaluate the diffs.
Previously updated : 08/17/2021
+description: Understand how to find when a resource was changed and query the list of resource configuration changes at scale
Last updated : 01/27/2022 # Get resource changes Resources get changed through the course of daily use, reconfiguration, and even redeployment. Change can come from an individual or by an automated process. Most change is by design, but
-sometimes it isn't. With the last 14 days of change history, Azure Resource Graph enables you to:
+sometimes it isn't. With the last seven days of changes, the Resource configuration changes feature enables you to:
- Find when changes were detected on an Azure Resource Manager property
- For each resource change, see property change details
-- See a full comparison of the resource before and after the detected change
+- Query changes at scale across your subscriptions, management groups, or tenant
Change detection and details are valuable for the following example scenarios:
- Keeping a Configuration Management Database, known as a CMDB, up-to-date. Instead of refreshing all resources and their full property sets on a scheduled frequency, only get what changed. - Understanding what other properties may have been changed when a resource changed compliance
- state. Evaluation of these additional properties can provide insights into other properties that
+ state. Evaluation of these extra properties can provide insights into other properties that
may need to be managed via an Azure Policy definition.
-This article shows how to gather this information through Resource Graph's SDK. To see this
-information in the Azure portal, see Azure Policy's
-[Change history](../../policy/how-to/determine-non-compliance.md#change-history) or Azure Activity
+This article shows how to query Resource configuration changes through Resource Graph. To see this
+information in the Azure portal, see [Azure Resource Graph Explorer](../first-query-portal.md), Azure Policy's
+[Change history](../../policy/how-to/determine-non-compliance.md#change-history), or Azure Activity
Log [Change history](../../../azure-monitor/essentials/activity-log.md#view-the-activity-log). For details about changes to your applications from the infrastructure layer all the way to application deployment, see
Monitor. > [!NOTE]
-> Change details in Resource Graph are for Resource Manager properties. For tracking changes inside
+> Resource configuration changes is for Azure Resource Manager properties. For tracking changes inside
> a virtual machine, see Azure Automation's > [Change tracking](../../../automation/change-tracking/overview.md) or Azure Policy's > [Guest Configuration for VMs](../../policy/concepts/guest-configuration.md). > [!IMPORTANT]
-> Change history in Azure Resource Graph is in Public Preview.
+> Resource configuration changes is in Public Preview and only supports changes to resource types from the [Resources table](../reference/supported-tables-resources.md#resources) in Resource Graph. This doesn't yet include changes to resource container resources, such as management groups, subscriptions, and resource groups.
## Find detected change events and view change details
-The first step in seeing what changed on a resource is to find the change events related to that
-resource within a window of time. Each change event also includes details about what changed on the
-resource. This step is done through the **resourceChanges** REST endpoint.
+When a resource is created, updated, or deleted, a new change resource (Microsoft.Resources/changes) is created to extend the modified resource and represent the changed properties.
-The **resourceChanges** endpoint accepts the following parameters in the request body:
--- **resourceId** \[required\]: The Azure resource to look for changes on.-- **interval** \[required\]: A property with _start_ and _end_ dates for when to check for a change
- event using the **Zulu Time Zone (Z)**.
-- **fetchPropertyChanges** (optional): A Boolean property that sets if the response object includes
- property changes.
-
-Example request body:
+Example change resource property bag:
```json
{
- "resourceId": "/subscriptions/{subscriptionId}/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount",
- "interval": {
- "start": "2019-09-28T00:00:00.000Z",
- "end": "2019-09-29T00:00:00.000Z"
+ "targetResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/microsoft.compute/virtualmachines/myVM",
+ "targetResourceType": "microsoft.compute/virtualmachines",
+ "changeType": "Update",
+ "changeAttributes": {
+ "changesCount": 2,
+ "correlationId": "88420d5d-8d0e-471f-9115-10d34750c617",
+ "timestamp": "2021-12-07T09:25:41.756Z",
+ "previousResourceSnapshotId": "ed90e35a-1661-42cc-a44c-e27f508005be",
+ "newResourceSnapshotId": "6eac9d0f-63b4-4e7f-97a5-740c73757efb"
+ },
+ "changes": {
+ "properties.provisioningState": {
+ "newValue": "Succeeded",
+ "previousValue": "Updating",
+ "changeCategory": "System",
+ "propertyChangeType": "Update"
},
- "fetchPropertyChanges": true
-}
-```
-
-With the above request body, the REST API URI for **resourceChanges** is:
-
-```http
-POST https://management.azure.com/providers/Microsoft.ResourceGraph/resourceChanges?api-version=2018-09-01-preview
-```
-
-The response looks similar to this example:
-
-```json
-{
- "changes": [
- {
- "changeId": "{\"beforeId\":\"3262e382-9f73-4866-a2e9-9d9dbee6a796\",\"beforeTime\":\"2019-09-28T00:45:35.012Z\",\"afterId\":\"6178968e-981e-4dac-ac37-340ee73eb577\",\"afterTime\":\"2019-09-28T00:52:53.371Z\"}",
- "beforeSnapshot": {
- "snapshotId": "3262e382-9f73-4866-a2e9-9d9dbee6a796",
- "timestamp": "2019-09-28T00:45:35.012Z"
- },
- "afterSnapshot": {
- "snapshotId": "6178968e-981e-4dac-ac37-340ee73eb577",
- "timestamp": "2019-09-28T00:52:53.371Z"
- },
- "changeType": "Create"
- },
- {
- "changeId": "{\"beforeId\":\"a00f5dac-86a1-4d86-a1c5-a9f7c8147b7c\",\"beforeTime\":\"2019-09-28T00:43:38.366Z\",\"afterId\":\"3262e382-9f73-4866-a2e9-9d9dbee6a796\",\"afterTime\":\"2019-09-28T00:45:35.012Z\"}",
- "beforeSnapshot": {
- "snapshotId": "a00f5dac-86a1-4d86-a1c5-a9f7c8147b7c",
- "timestamp": "2019-09-28T00:43:38.366Z"
- },
- "afterSnapshot": {
- "snapshotId": "3262e382-9f73-4866-a2e9-9d9dbee6a796",
- "timestamp": "2019-09-28T00:45:35.012Z"
- },
- "changeType": "Delete"
- },
- {
- "changeId": "{\"beforeId\":\"b37a90d1-7ebf-41cd-8766-eb95e7ee4f1c\",\"beforeTime\":\"2019-09-28T00:43:15.518Z\",\"afterId\":\"a00f5dac-86a1-4d86-a1c5-a9f7c8147b7c\",\"afterTime\":\"2019-09-28T00:43:38.366Z\"}",
- "beforeSnapshot": {
- "snapshotId": "b37a90d1-7ebf-41cd-8766-eb95e7ee4f1c",
- "timestamp": "2019-09-28T00:43:15.518Z"
- },
- "afterSnapshot": {
- "snapshotId": "a00f5dac-86a1-4d86-a1c5-a9f7c8147b7c",
- "timestamp": "2019-09-28T00:43:38.366Z"
- },
- "propertyChanges": [
- {
- "propertyName": "tags.org",
- "afterValue": "compute",
- "changeCategory": "User",
- "changeType": "Insert"
- },
- {
- "propertyName": "tags.team",
- "afterValue": "ARG",
- "changeCategory": "User",
- "changeType": "Insert"
- }
- ],
- "changeType": "Update"
- },
- {
- "changeId": "{\"beforeId\":\"19d12ab1-6ac6-4cd7-a2fe-d453a8e5b268\",\"beforeTime\":\"2019-09-28T00:42:46.839Z\",\"afterId\":\"b37a90d1-7ebf-41cd-8766-eb95e7ee4f1c\",\"afterTime\":\"2019-09-28T00:43:15.518Z\"}",
- "beforeSnapshot": {
- "snapshotId": "19d12ab1-6ac6-4cd7-a2fe-d453a8e5b268",
- "timestamp": "2019-09-28T00:42:46.839Z"
- },
- "afterSnapshot": {
- "snapshotId": "b37a90d1-7ebf-41cd-8766-eb95e7ee4f1c",
- "timestamp": "2019-09-28T00:43:15.518Z"
- },
- "propertyChanges": [{
- "propertyName": "tags.cgtest",
- "afterValue": "hello",
- "changeCategory": "User",
- "changeType": "Insert"
- }],
- "changeType": "Update"
- }
- ]
+ "tags.key1": {
+ "newValue": "NewTagValue",
+ "previousValue": "null",
+ "changeCategory": "User",
+ "propertyChangeType": "Insert"
+ }
+ }
} ```
-Each detected change event for the **resourceId** has the following properties:
+Each change resource has the following properties:
-- **changeId** - This value is unique to that resource. While the **changeId** string may sometimes
- contain other properties, it's only guaranteed to be unique.
-- **beforeSnapshot** - Contains the **snapshotId** and **timestamp** of the resource snapshot that
- was taken before a change was detected.
-- **afterSnapshot** - Contains the **snapshotId** and **timestamp** of the resource snapshot that
- was taken after a change was detected.
-- **changeType** - Describes the type of change detected for the entire change record between the
- **beforeSnapshot** and **afterSnapshot**. Values are: _Create_, _Update_, and _Delete_. The
- **propertyChanges** property array is only included when **changeType** is _Update_.
+- **targetResourceId** - The resourceID of the resource on which the change occurred.
+- **targetResourceType** - The resource type of the resource on which the change occurred.
+- **changeType** - Describes the type of change detected for the entire change record. Values are: _Create_, _Update_, and _Delete_. The
+ **changes** property dictionary is only included when **changeType** is _Update_. For the _Delete_ case, the change resource will still be maintained as an extension of the deleted resource for seven days, even if the entire Resource group has been deleted. The change resource will not block deletions or impact any existing delete behavior.
- > [!IMPORTANT]
- > _Create_ is only available on resources that previously existed and were deleted within the last
- > 14 days.
-- **propertyChanges** - This array of properties details all of the resource properties that were
- updated between the **beforeSnapshot** and the **afterSnapshot**:
- - **propertyName** - The name of the resource property that was altered.
- - **changeCategory** - Describes what made the change. Values are: _System_ and _User_.
- - **changeType** - Describes the type of change detected for the individual resource property.
+- **changes** - Dictionary of the resource properties (with property name as the key) that were updated as part of the change:
+ - **propertyChangeType** - Describes the type of change detected for the individual resource property.
Values are: _Insert_, _Update_, _Remove_.
- - **beforeValue** - The value of the resource property in the **beforeSnapshot**. Isn't displayed
- when **changeType** is _Insert_.
- - **afterValue** - The value of the resource property in the **afterSnapshot**. Isn't displayed
- when **changeType** is _Remove_.
-
-## Compare resource changes
-
-With the **changeId** from the **resourceChanges** endpoint, the **resourceChangeDetails** REST
-endpoint is then used to get the before and after snapshots of the resource that was changed.
-
-The **resourceChangeDetails** endpoint requires two parameters in the request body:
--- **resourceId**: The Azure resource to compare changes on.-- **changeId**: The unique change event for the **resourceId** gathered from **resourceChanges**.-
-Example request body:
-
-```json
-{
- "resourceId": "/subscriptions/{subscriptionId}/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount",
- "changeId": "{\"beforeId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\"beforeTime\":'2019-05-09T00:00:00.000Z\",\"afterId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\"afterTime\":'2019-05-10T00:00:00.000Z\"}"
-}
+ - **previousValue** - The value of the resource property in the previous snapshot. Value is _null_ when **propertyChangeType** is _Insert_.
+ - **newValue** - The value of the resource property in the new snapshot. Value is _null_ when **propertyChangeType** is _Remove_.
+ - **changeCategory** - Describes if the property change was the result of a change in value (_User_) or a difference in referenced API versions (_System_). Values are: _System_ and _User_.
+
+- **changeAttributes** - Array of metadata related to the change:
+ - **changesCount** - The number of properties changed as part of this change record.
+ - **correlationId** - Contains the ID for tracking related events. Each deployment has a correlation ID, and all actions in a single template will share the same correlation ID.
+ - **timestamp** - The datetime of when the change was detected.
+ - **previousResourceSnapshotId** - Contains the ID of the resource snapshot that was used as the previous state of the resource.
+ - **newResourceSnapshotId** - Contains the ID of the resource snapshot that was used as the new state of the resource.
+
+## Resource Graph Query samples
+
+With Resource Graph, you can query the **ResourceChanges** table to filter or sort by any of the change resource properties:
+
+### All changes in the past one day
+```kusto
+ResourceChanges
+| extend changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId),
changeType = tostring(properties.changeType), correlationId = properties.changeAttributes.correlationId,
+changedProperties = properties.changes, changeCount = properties.changeAttributes.changesCount
+| where changeTime > ago(1d)
+| order by changeTime desc
+| project changeTime, targetResourceId, changeType, correlationId, changeCount, changedProperties
```
-With the above request body, the REST API URI for **resourceChangeDetails** is:
-
-```http
-POST https://management.azure.com/providers/Microsoft.ResourceGraph/resourceChangeDetails?api-version=2018-09-01-preview
+### Resources deleted in a specific resource group
+```kusto
+ResourceChanges
+| where resourceGroup == "myResourceGroup"
+| extend changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId),
+changeType = tostring(properties.changeType), correlationId = properties.changeAttributes.correlationId
+| where changeType == "Delete"
+| order by changeTime desc
+| project changeTime, resourceGroup, targetResourceId, changeType, correlationId
```
-The response looks similar to this example:
-
-```json
-{
- "changeId": "{\"beforeId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\"beforeTime\":'2019-05-09T00:00:00.000Z\",\"afterId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\"beforeTime\":'2019-05-10T00:00:00.000Z\"}",
- "beforeSnapshot": {
- "timestamp": "2019-03-29T01:32:05.993Z",
- "content": {
- "sku": {
- "name": "Standard_LRS",
- "tier": "Standard"
- },
- "kind": "Storage",
- "id": "/subscriptions/{subscriptionId}/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount",
- "name": "mystorageaccount",
- "type": "Microsoft.Storage/storageAccounts",
- "location": "westus",
- "tags": {},
- "properties": {
- "networkAcls": {
- "bypass": "AzureServices",
- "virtualNetworkRules": [],
- "ipRules": [],
- "defaultAction": "Allow"
- },
- "supportsHttpsTrafficOnly": false,
- "encryption": {
- "services": {
- "file": {
- "enabled": true,
- "lastEnabledTime": "2018-07-27T18:37:21.8333895Z"
- },
- "blob": {
- "enabled": true,
- "lastEnabledTime": "2018-07-27T18:37:21.8333895Z"
- }
- },
- "keySource": "Microsoft.Storage"
- },
- "provisioningState": "Succeeded",
- "creationTime": "2018-07-27T18:37:21.7708872Z",
- "primaryEndpoints": {
- "blob": "https://mystorageaccount.blob.core.windows.net/",
- "queue": "https://mystorageaccount.queue.core.windows.net/",
- "table": "https://mystorageaccount.table.core.windows.net/",
- "file": "https://mystorageaccount.file.core.windows.net/"
- },
- "primaryLocation": "westus",
- "statusOfPrimary": "available"
- }
- }
- },
- "afterSnapshot": {
- "timestamp": "2019-03-29T01:54:24.42Z",
- "content": {
- "sku": {
- "name": "Standard_LRS",
- "tier": "Standard"
- },
- "kind": "Storage",
- "id": "/subscriptions/{subscriptionId}/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount",
- "name": "mystorageaccount",
- "type": "Microsoft.Storage/storageAccounts",
- "location": "westus",
- "tags": {},
- "properties": {
- "networkAcls": {
- "bypass": "AzureServices",
- "virtualNetworkRules": [],
- "ipRules": [],
- "defaultAction": "Allow"
- },
- "supportsHttpsTrafficOnly": true,
- "encryption": {
- "services": {
- "file": {
- "enabled": true,
- "lastEnabledTime": "2018-07-27T18:37:21.8333895Z"
- },
- "blob": {
- "enabled": true,
- "lastEnabledTime": "2018-07-27T18:37:21.8333895Z"
- }
- },
- "keySource": "Microsoft.Storage"
- },
- "provisioningState": "Succeeded",
- "creationTime": "2018-07-27T18:37:21.7708872Z",
- "primaryEndpoints": {
- "blob": "https://mystorageaccount.blob.core.windows.net/",
- "queue": "https://mystorageaccount.queue.core.windows.net/",
- "table": "https://mystorageaccount.table.core.windows.net/",
- "file": "https://mystorageaccount.file.core.windows.net/"
- },
- "primaryLocation": "westus",
- "statusOfPrimary": "available"
- }
- }
- }
-}
+### Changes to a specific property value
+```kusto
+ResourceChanges
+| extend provisioningStateChange = properties.changes["properties.provisioningState"], changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId), changeType = tostring(properties.changeType)
+| where isnotempty(provisioningStateChange) and provisioningStateChange.newValue == "Succeeded"
+| order by changeTime desc
+| project changeTime, targetResourceId, changeType, provisioningStateChange.previousValue, provisioningStateChange.newValue
```
-**beforeSnapshot** and **afterSnapshot** each give the time the snapshot was taken and the
-properties at that time. The change happened at some point between these snapshots. Looking at the
-previous example, we can see that the property that changed was **supportsHttpsTrafficOnly**.
-
-To compare the results, either use the **changes** property in **resourceChanges** or evaluate the
-**content** portion of each snapshot in **resourceChangeDetails** to determine the difference. If
-you compare the snapshots, the **timestamp** always shows as a difference despite being expected.
+### Query the latest resource configuration for resources created in the last seven days
+```kusto
+ResourceChanges
+| extend targetResourceId = tostring(properties.targetResourceId), changeType = tostring(properties.changeType), changeTime = todatetime(properties.changeAttributes.timestamp)
+| where changeTime > ago(7d) and changeType == "Create"
+| project targetResourceId, changeType, changeTime
+| join ( Resources | extend targetResourceId=id) on targetResourceId
+| order by changeTime desc
+| project changeTime, changeType, id, resourceGroup, type, properties
+```
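+### Changes grouped by deployment correlation ID
+
+As a further sketch along the same lines (not one of the article's original samples): because every action in a single template deployment shares a **correlationId**, you can group changes by it to see everything a deployment touched.
+
+```kusto
+ResourceChanges
+| extend changeTime = todatetime(properties.changeAttributes.timestamp), correlationId = tostring(properties.changeAttributes.correlationId), targetResourceId = tostring(properties.targetResourceId)
+| where changeTime > ago(1d)
+| summarize changeCount = count(), resources = make_set(targetResourceId) by correlationId
+| order by changeCount desc
+```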
## Next steps
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md
The OSS component versions associated with HDInsight 4.0 are listed in the follo
| Apache Zeppelin | 0.8.0 |
-This table lists certain HDInsight 4.0 cluster types that have retired.
+This table lists certain HDInsight 4.0 cluster types that have retired or will be retired soon.
| Cluster Type | Framework version | Support expiration date | Retirement date | ||-||--| | HDInsight 4.0 Spark | 2.3 | June 30, 2020 | June 30, 2020 | | HDInsight 4.0 Kafka | 1.1 | Dec 31, 2020 | Dec 31, 2020 |
+| HDInsight 4.0 Kafka | 2.1.0 * | Sep 30, 2022 | Oct 1, 2022 |
+
+* Customers can't create new Kafka 2.1.0 clusters, but existing 2.1.0 clusters won't be impacted and will get basic support until September 30, 2022.
## Next steps
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
After you've found the record you want to restore, use the `PUT` operation to re
## Patch and Conditional Patch
-Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three types of ways to Patch resources in FHIR: JSON Patch, XML Patch, and FHIR Path Patch. Azure API for FHIR supports JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on a search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatchRequests.http).
+Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three ways to Patch resources: JSON Patch, XML Patch, and FHIR Path Patch. Azure API for FHIR supports JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http).
> [!NOTE]
> When using `PATCH` against STU3, and if you are requesting a History bundle, the patched resource's `Bundle.entry.request.method` is mapped to `PUT`. This is because STU3 doesn't contain a definition for the `PATCH` verb in the [HTTPVerb value set](http://hl7.org/fhir/STU3/valueset-http-verb.html).
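The patch document itself is a JSON Patch array as defined in RFC 6902, sent as the body of an HTTP `PATCH` request (typically with a `Content-Type` of `application/json-patch+json`). As an illustrative sketch only — not the FHIR service implementation — the following Python applies simplified JSON Patch semantics to a resource represented as a dictionary:

```python
import copy

# Simplified JSON Patch (RFC 6902) semantics for object members only;
# a complete implementation also handles array insertion and the
# "test", "move", and "copy" operations.
def apply_patch(resource, patch):
    result = copy.deepcopy(resource)
    for op in patch:
        parts = [p for p in op["path"].split("/") if p]
        target = result
        for part in parts[:-1]:
            target = target[part]
        if op["op"] in ("replace", "add"):
            target[parts[-1]] = op["value"]
        elif op["op"] == "remove":
            del target[parts[-1]]
    return result

# A minimal FHIR Patient sketch and a patch that flips its active flag.
patient = {"resourceType": "Patient", "id": "example", "active": False}
patch = [{"op": "replace", "path": "/active", "value": True}]
print(apply_patch(patient, patch)["active"])  # True
```

Note how only the patched element changes; the rest of the resource is left untouched, which is the point of using Patch instead of a full `PUT`.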
In this article, you learned about some of the REST capabilities of Azure API fo
>[!div class="nextstepaction"]
>[Overview of search in Azure API for FHIR](overview-of-search.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-rest-api-capabilities.md
After you've found the record you want to restore, use the `PUT` operation to re
## Patch and Conditional Patch
-Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three types of ways to Patch resources in FHIR: JSON Patch, XML Patch, and FHIR Path Patch. The FHIR service support JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on a search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatchRequests.http).
+Patch is a valuable RESTful operation when you need to update only a portion of a FHIR resource. Using Patch, you can specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three ways to Patch resources: JSON Patch, XML Patch, and FHIR Path Patch. The FHIR service supports JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http).
> [!NOTE]
> When using `PATCH` against STU3, and if you are requesting a History bundle, the patched resource's `Bundle.entry.request.method` is mapped to `PUT`. This is because STU3 doesn't contain a definition for the `PATCH` verb in the [HTTPVerb value set](http://hl7.org/fhir/STU3/valueset-http-verb.html).
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
# Azure IoT Central architecture
-This article provides an overview of the key elements in an IoT Central solution architecture.
--
-An IoT Central application:
--- Lets you manage the IoT devices in your solution.-- Lets you view and analyze the data from your devices.-- Can export to and integrate with other services that are part of the solution.
+IoT Central is a ready-made environment for IoT solution development. It's an application platform as a service (aPaaS) IoT solution and its primary interface is a web UI. There's also a [REST API](#extend-with-rest-api) that lets you interact with your application programmatically.
-## IoT Central
+This article provides an overview of the key elements in an IoT Central solution architecture.
-IoT Central is a ready-made environment for IoT solution development. It's a platform as a service (PaaS) IoT solution and its primary interface is a web UI. There's also a [REST API](#rest-api) that lets you interact with your application programmatically.
-This section describes the key capabilities of an IoT Central application.
+Key capabilities in an IoT Central application include:
### Manage devices
IoT Central lets you manage the fleet of [IoT devices](#devices) that are sendin
In an IoT Central application, you can view and analyze data for individual devices or for aggregated data from multiple devices:
+- Use [mapping](howto-map-data.md) to transform complex device telemetry into structured data inside IoT Central.
- Use device templates to define [custom views](howto-set-up-template.md#views) for individual devices of specific types. For example, you can plot temperature over time for an individual thermostat or show the live location of a delivery truck.
- Use the built-in [analytics](tutorial-use-device-groups.md) to view aggregate data for multiple devices. For example, you can see the total occupancy across multiple retail stores or identify the stores with the highest or lowest occupancy rates.
- Create custom [dashboards](howto-manage-dashboards.md) to help you manage your devices. For example, you can add maps, tiles, and charts to show device telemetry.
In an IoT Central application you can manage the following security aspects of y
- [User management](howto-manage-users-roles.md): Manage the users that can sign in to the application and the roles that determine what permissions those users have.
- [Organizations](howto-create-organizations.md): Define a hierarchy to manage which users can see which devices in your IoT Central application.
-### REST API
-
-Build integrations that let other applications and services manage your application. For example, programmatically [manage the devices](howto-control-devices-with-rest-api.md) in your application or synchronize [user information](howto-manage-users-roles-with-rest-api.md) with an external system.
-
## Devices

Devices collect data from sensors to send as a stream of telemetry to an IoT Central application. For example, a refrigeration unit sends a stream of temperature values or a delivery truck streams its location.
A device can use properties to report its state, such as whether a valve is open
IoT Central can also control devices by calling commands on the device. For example, instructing a device to download and install a firmware update.
-The [telemetry, properties, and commands](concepts-telemetry-properties-commands.md) that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device.
+The [telemetry, properties, and commands](concepts-telemetry-properties-commands.md) that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Associate a device with a device template](concepts-get-connected.md#associate-a-device-with-a-device-template).
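As a hedged illustration of what such a shared model can look like, here's a minimal IoT Plug and Play interface in DTDL v2; the `@id`, display name, and capability names are hypothetical, so refer to the DTDL specification for the authoritative schema:

```json
{
  "@context": "dtmi:dtdl:context;2",
  "@id": "dtmi:com:example:Thermostat;1",
  "@type": "Interface",
  "displayName": "Thermostat",
  "contents": [
    { "@type": "Telemetry", "name": "temperature", "schema": "double" },
    { "@type": "Property", "name": "targetTemperature", "schema": "double", "writable": true },
    { "@type": "Command", "name": "reboot" }
  ]
}
```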
The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/libraries-sdks.md).
Local gateway devices are useful in several scenarios, such as:
Gateway devices typically require more processing power than a standalone device. One option to implement a gateway device is to use [Azure IoT Edge and apply one of the standard IoT Edge gateway patterns](concepts-iot-edge.md). You can also run your own custom gateway code on a suitable device.
-## Data export
+## Export data
+
+Although IoT Central has built-in analytics features, you can export data to other services and applications.
+
+[Transformations](howto-transform-data-internally.md) in an IoT Central data export definition let you manipulate the format and structure of the device data before it's exported to a destination.
-Although IoT Central has built-in analytics features, you can export data to other services and applications. Reasons to export data include:
+Reasons to export data include:
### Storage and analysis
For long-term storage and control over archiving and retention policies, you can
You may need to [transform or do computations](howto-transform-data.md) on your data before it can be used either in IoT Central or another service. For example, you could add local weather information to the location data reported by a delivery truck.
+## Extend with REST API
+
+Build integrations that let other applications and services manage your application. For example, programmatically [manage the devices](howto-control-devices-with-rest-api.md) in your application or synchronize [user information](howto-manage-users-roles-with-rest-api.md) with an external system.
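As a minimal sketch of calling the REST API — the application subdomain, token value, and `api-version` shown here are placeholders/assumptions, so check the IoT Central REST API reference for current values — listing devices might look like this in Python:

```python
import json
import urllib.request

# Hypothetical values: replace with your application's subdomain and an
# API token generated in your IoT Central application.
APP_SUBDOMAIN = "my-iot-app"
API_TOKEN = "SharedAccessSignature sr=..."

def list_devices_request():
    # Build a GET request for the devices collection; the api-version
    # value is an assumption, not a guaranteed current version.
    url = (f"https://{APP_SUBDOMAIN}.azureiotcentral.com"
           "/api/devices?api-version=2022-07-31")
    return urllib.request.Request(url, headers={"Authorization": API_TOKEN})

req = list_devices_request()
print(req.full_url)

# Sending the request requires a valid token:
# with urllib.request.urlopen(req) as resp:
#     devices = json.loads(resp.read())["value"]
```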
+
## Next steps

Now that you've learned about the architecture of Azure IoT Central, the suggested next step is to learn about [scalability and high availability](concepts-scalability-availability.md) in Azure IoT Central.
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
To create a custom theme:
:::image type="content" source="media/tutorial-in-store-analytics-create-app/dashboard-expand.png" alt-text="Azure IoT Central left pane.":::
-1. Select **Administration > Customize your application**.
+1. Select **Customization > App appearance**.
1. Use the **Change** button to choose an image to upload as the **Application logo**. Optionally, specify a value for **Logo alt text**.
To create a custom theme:
To update the application image:
-1. Select **Administration > Your Application**.
+1. Select **Customization > App appearance.**
1. Use the **Select image** button to choose an image to upload as the application image. This image appears on the application tile in the **My Apps** page of the IoT Central application manager.
iot-central Tutorial In Store Analytics Customize Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md
To customize the image tile that displays a brand image on the dashboard:
1. Select **Edit** on the dashboard toolbar.
-1. Select **Configure** on the image tile that displays the Northwind brand image.
+1. Select **Edit** on the image tile that displays the Northwind brand image.
:::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/brand-image-edit.png" alt-text="Azure IoT Central edit brand image.":::
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
If you're not going to continue to use this application, delete the application
Learn more about:
> [!div class="nextstepaction"]
-> [Connected logistics concepts](./architecture-connected-logistics.md)
+> [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
If you're not going to continue to use this application, delete the application
## Next steps
-Learn more about digital distribution center solution architecture:
+Learn more about:
> [!div class="nextstepaction"]
-> [digital distribution center concept](./architecture-digital-distribution-center.md)
+> [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
If you're not going to continue to use this application, delete the application
## Next steps
-Learn more about smart inventory management:
+Learn more about:
> [!div class="nextstepaction"]
-> [Smart inventory management concept](./architecture-smart-inventory-management.md)
+> [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
+
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
If you're not going to continue to use this application, delete the application
## Next steps
-Learn more about:
+Learn more about:
> [!div class="nextstepaction"]
-> [micro-fulfillment center solution architecture](./architecture-micro-fulfillment-center.md)
+> [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
+
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
description: Use Azure RTOS embedded software to connect an MXCHIP AZ3166 device
+ms.devlang: c
Last updated 06/09/2021
iot-hub Iot Hub Java Java Device Management Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-device-management-getstarted.md
+ms.devlang: java
Last updated 08/20/2019
iot-hub Iot Hub Node Node Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-schedule-jobs.md
+ms.devlang: javascript
Last updated 08/16/2019
iot-hub Iot Hub Node Node Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-twin-getstarted.md
description: How to use Azure IoT Hub device twins to add tags and then use an I
+ms.devlang: javascript
Last updated 08/26/2019
iot-hub Iot Hub Python Python Device Management Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-device-management-get-started.md
description: How to use IoT Hub device management to initiate a remote device re
+ms.devlang: python
Last updated 01/17/2020
iot-hub Quickstart Control Device Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-control-device-android.md
description: In this quickstart, you run two sample Java applications. One appli
+ms.devlang: java
Last updated 06/21/2019
iot-hub Tutorial Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-device-twins.md
+ms.devlang: javascript
Last updated 10/13/2021
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-java.md
Last updated 12/18/2020
+ms.devlang: java
# Quickstart: Azure Key Vault Certificate client library for Java (Certificates)
key-vault Tutorial Javascript Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-javascript-virtual-machine.md
Last updated 12/10/2021
+ms.devlang: javascript
# Customer intent: As a developer I want to use Azure Key vault to store secrets for my app, so that they are kept secure.
key-vault Tutorial Net Create Vault Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-create-vault-azure-web-app.md
Last updated 05/06/2020
+ms.devlang: csharp
#Customer intent: As a developer, I want to use Azure Key Vault to store secrets for my app to help keep them secure.
key-vault Tutorial Net Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-virtual-machine.md
Last updated 03/17/2021
+ms.devlang: csharp
#Customer intent: As a developer I want to use Azure Key Vault to store secrets for my app, so that they are kept secure.
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
Last updated 07/20/2020
+ms.devlang: python
# Customer intent: As a developer I want to use Azure Key vault to store secrets for my app, so that they are kept secure.
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-java.md
Last updated 01/05/2021
+ms.devlang: java
# Quickstart: Azure Key Vault Key client library for Java
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-java.md
Last updated 10/20/2019
+ms.devlang: java
# Quickstart: Azure Key Vault Secret client library for Java
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
Title: Compare load testing runs to find regressions
+ Title: Compare load test runs to find regressions
-description: 'Learn how you can visually compare multiple test runs with Azure Load Testing to better understand performance regressions.'
+description: 'Learn how you can visually compare multiple test runs with Azure Load Testing to identify and analyze performance regressions.'
Previously updated : 11/30/2021 Last updated : 02/16/2022
-# Identify performance regressions by comparing load test runs
+# Identify performance regressions by comparing test runs in Azure Load Testing Preview
-In this article, you'll learn how to identify performance regressions by visually comparing multiple load test runs in the Azure Load Testing Preview dashboard.
+In this article, you'll learn how you can identify performance regressions by comparing test runs in the Azure Load Testing Preview dashboard. The dashboard overlays the client-side and server-side metric graphs for each run, which allows you to quickly analyze performance issues.
-A test run contains client-side and server-side metrics. The test engine reports client-side metrics, such as the number of virtual users. The server-side metrics provide application- and resource-specific information.
+You can compare load test runs for the following scenarios:
-By overlaying multiple metrics charts, you can more easily pinpoint performance changes and identify which application component is causing problems.
-
-There are two entry points for comparing load test runs in the Azure portal:
--- Starting from the test runs page, select multiple results to compare.-- Starting from a specific test run, select other results to compare the runs with.
+- [Identify performance regressions](#identify-performance-regressions) between application builds or configurations. You could run a load test at each development sprint to ensure that the previous sprint didn't introduce performance issues.
+- [Identify which application component is responsible](#identify-the-root-cause) for a performance problem (root cause analysis). For example, an application redesign might result in slower application response times. Comparing load test runs might reveal that the root cause was a lack of database resources.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
There are two entry points for comparing load test runs in the Azure portal:
- An Azure Load Testing resource with a test plan that has multiple test runs. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
- > [!NOTE]
- > If you want to compare a test run, it needs to be in a *Done*, *Stopped*, or *Failed* state.
-
-## Compare test runs from the test runs page
+## Select test runs
+
+To compare test runs in Azure Load Testing, you'll first have to select up to five runs within a load test. You can only compare runs that belong to the same load test.
-In this section, you'll compare multiple results by selecting runs from the test runs page.
+A test run must be in the *Done*, *Stopped*, or *Failed* state before you can compare it.
+
+Use the following steps to select the test runs:
1. Sign in to the [Azure portal](https://portal.azure.com) by using the credentials for your Azure subscription.
In this section, you'll compare multiple results by selecting runs from the test
You can also use the filters to find your load test.
-1. In the list of tests, select the test whose runs you want to compare.
+1. Select the test whose runs you want to compare by selecting its name.
-1. Select two or more test runs, and then select **Compare**.
+1. Select two or more test runs by selecting the corresponding checkboxes in the list.
:::image type="content" source="media/how-to-compare-multiple-test-runs/compare-test-results-from-list.png" alt-text="Screenshot that shows a list of test runs and the 'Compare' button.":::
- > [!NOTE]
- > You can choose a maximum of five test runs to compare.
+ You can choose a maximum of five test runs to compare.
+
+## Compare multiple test runs
- The selected test runs are presented in the dashboard. Each run is shown as an overlay in the different charts.
+After you've selected the test runs you want to compare, you can visually compare the client-side and server-side metrics for each test run in the load test dashboard.
+
+1. Select the **Compare** button to open the load test dashboard.
+
+ Each test run is shown as an overlay in the different graphs.
:::image type="content" source="media/how-to-compare-multiple-test-runs/compare-screen.png" alt-text="Screenshot of the 'Compare' page, displaying a comparison of two test runs.":::
- You can use filters to customize the graphs. There are separate filters for the client and server metrics.
+1. Optionally, use the filters to customize the graphs.
+
+ :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-client-side-filters.png" alt-text="Screenshot of the client-side filter controls on the load test dashboard.":::
> [!TIP]
- > The time filter is based on the relative duration of the tests. A value of zero indicates the beginning of the test, and the maximum value marks the duration of the longest test run. For client-side metrics, test runs show only data for the duration of the test.
+ > The time filter is based on the duration of the tests. A value of zero indicates the start of the test, and the maximum value marks the duration of the longest test run.
-## Compare test runs from the run details page
+## Identify performance regressions
-In this section, you'll use the test run details page and add other test runs to compare.
+You can compare multiple test runs to identify performance regressions. For example, before deploying a new application version in production, you can verify that the performance hasn't degraded.
-1. Go to the test run details page, and then select **Compare**.
+Use the client-side metrics, such as requests per second or response time, on the load test dashboard to quickly spot performance changes between different load test runs:
- :::image type="content" source="media/how-to-compare-multiple-test-runs/test-run-details.png" alt-text="Screenshot of the 'Test run details' page, displaying the 'Compare' button.":::
+1. Hover over the client-side metrics graphs to compare the values across the different test runs.
-1. On the **Compare** page, select two or more test runs that you want to compare.
+    In the following screenshot, notice that the metric values for **Requests per second** and **Response Time** differ significantly, which indicates that the application's performance dropped between the two test runs.
- :::image type="content" source="media/how-to-compare-multiple-test-runs/choose-runs-to-compare.png" alt-text="Screenshot of the 'Compare' page, displaying test runs to be compared.":::
+ :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-client-side-metrics.png" alt-text="Screenshot of the client-side metrics, highlighting the difference in requests per second and response time.":::
- > [!NOTE]
- > You can choose a maximum of five test runs to compare.
+1. Optionally, use the **Requests** filter to compare a specific application request in the JMeter script.
-1. Select **Compare**.
+ :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-client-side-requests-filter.png" alt-text="Screenshot of the client-side 'requests' filter, which allows you to filter specific application requests.":::
- The selected test runs are presented in the dashboard. Each run is shown as an overlay in the different charts.
+## Identify the root cause
- :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-screen.png" alt-text="Screenshot of the 'Compare' page, displaying a comparison of two test runs.":::
+When there's a performance issue, you can use the server-side metrics to analyze what the root cause of the problem is. Azure Load Testing can [capture server-side resource metrics](./how-to-update-rerun-test.md) for Azure-hosted applications.
- You can use filters to customize the graphs. There are separate filters for the client and server metrics.
+1. Hover over the server-side metrics graphs to compare the values across the different test runs.
-## Next steps
+    In the following screenshot, you can see from **Response time** and **Requests** that the application's performance has degraded. You can also see that for one test run, the database **RU Consumption** peaks at 100%, which indicates that the root cause is likely insufficient database **Provisioned Throughput**.
+
+ :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-server-side-metrics.png" alt-text="Screenshot of the server-side metrics, highlighting the difference in database resource consumption and provisioning throughput.":::
-- For information about high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md).
+1. Optionally, select **Configure metrics** to add or remove server-side metrics.
+
+ You can add more server-side metrics for the selected Azure app components to further investigate performance problems. The dashboard immediately shows the additional metrics data, and you don't have to rerun the load test.
+
+1. Optionally, use the **Resource** filter to hide or show all metric graphs for an Azure component.
+
+## Next steps
-- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- Learn more about [exporting the load test results for reporting](./how-to-export-test-results.md).
+- Learn more about [troubleshooting load test execution errors](./how-to-find-download-logs.md).
+- Learn more about [configuring automated performance testing with Azure Pipelines](./tutorial-cicd-azure-pipelines.md).
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
Title: Set up DevOps for single-tenant Azure Logic Apps
-description: How to set up DevOps deployment for workflows in single-tenant Azure Logic Apps.
+ Title: Set up DevOps for Standard logic apps
+description: How to set up DevOps deployment for Standard logic app workflows in single-tenant Azure Logic Apps.
ms.suite: integration Previously updated : 11/02/2021 Last updated : 02/14/2022 # As a developer, I want to automate deployment for workflows hosted in single-tenant Azure Logic Apps by using DevOps tools and processes.
-# Set up DevOps deployment for single-tenant Azure Logic Apps
+# Set up DevOps deployment for Standard logic app workflows in single-tenant Azure Logic Apps
-This article shows how to deploy a single-tenant based logic app project from Visual Studio Code to your infrastructure by using DevOps tools and processes. Based on whether you prefer GitHub or Azure DevOps for deployment, choose the path and tools that work best for your scenario. You can use the included samples that contain example logic app projects plus examples for Azure deployment using either GitHub or Azure DevOps. For more information about DevOps for single-tenant, review [DevOps deployment overview for single-tenant Azure Logic Apps](devops-deployment-single-tenant-azure-logic-apps.md).
+This article shows how to deploy a Standard logic app project to single-tenant Azure Logic Apps from Visual Studio Code to your infrastructure by using DevOps tools and processes. Based on whether you prefer GitHub or Azure DevOps for deployment, choose the path and tools that work best for your scenario. You can use the included samples that contain example logic app projects plus examples for Azure deployment using either GitHub or Azure DevOps. For more information about DevOps deployment for single-tenant Azure Logic Apps, review [DevOps deployment overview for single-tenant Azure Logic Apps](devops-deployment-single-tenant-azure-logic-apps.md).
## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- A single-tenant based logic app project created with [Visual Studio Code and the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
+- A Standard logic app project created with [Visual Studio Code and the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
If you haven't already set up your logic app project or infrastructure, you can use the included sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use. For more information about these sample projects and the resources included to run the example logic app, review [Deploy your infrastructure](#deploy-infrastructure).
Both samples include the following resources that a logic app uses to run.
| Azure storage account | Yes, for both stateful and stateless workflows | This Azure resource stores the metadata, keys for access control, state, inputs, outputs, run history, and other information about your workflows. | | Application Insights | Optional | This Azure resource provides monitoring capabilities for your workflows. | | API connections | Optional, if none exist | These Azure resources define any managed API connections that your workflows use to run managed connector operations, such as Office 365, SharePoint, and so on. <p><p>**Important**: In your logic app project, the **connections.json** file contains metadata, endpoints, and keys for any managed API connections and Azure functions that your workflows use. To use different connections and functions in each environment, make sure that you parameterize the **connections.json** file and update the endpoints. <p><p>For more information, review [API connection resources and access policies](#api-connection-resources). |
-| Azure Resource Manager (ARM) template | Optional | This Azure resource defines a baseline infrastructure deployment that you can reuse or [export](../azure-resource-manager/templates/template-tutorial-export-template.md). The template also includes the required access policies, for example, to use managed API connections. <p><p>**Important**: Exporting the ARM template won't include all the related parameters for any API connection resources that your workflows use. For more information, review [Find API connection parameters](#find-api-connection-parameters). |
+| Azure Resource Manager (ARM) template | Optional | This Azure resource defines a baseline infrastructure deployment that you can reuse or [export](../azure-resource-manager/templates/template-tutorial-export-template.md). |
|||| <a name="api-connection-resources"></a>
The following diagram shows the dependencies between your logic app project and
![Conceptual diagram showing infrastructure dependencies for a logic app project in the single-tenant Azure Logic Apps model.](./media/set-up-devops-deployment-single-tenant-azure-logic-apps/infrastructure-dependencies.png)
-<a name="find-api-connection-parameters"></a>
+<a name="deploy-logic-app-resources"></a>
-### Find API connection parameters
+## Deploy logic app resources (zip deploy)
+
+After you push your logic app project to your source repository, you can set up build and release pipelines that deploy logic apps to infrastructure either inside or outside Azure.
+
+### Build your project
-If your workflows use managed API connections, using the export template capability won't include all related parameters. In an ARM template, every [API connection resource definition](logic-apps-azure-resource-manager-templates-overview.md#connection-resource-definitions) has the following general format:
+To set up a build pipeline based on your logic app project type, complete the corresponding actions in the following table:
+
+| Project type | Description and steps |
+|--|--|
+| NuGet-based | The NuGet-based project structure is based on the .NET Framework. To build these projects, make sure to follow the build steps for .NET Standard. For more information, review the documentation for [Create a NuGet package using MSBuild](/nuget/create-packages/creating-a-package-msbuild). |
+| Bundle-based | The extension bundle-based project isn't language-specific and doesn't require any language-specific build steps. You can use any method to zip your project files. <br><br>**Important**: Make sure that your .zip file contains the actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files. |
+|||
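
For the bundle-based case, any zip method works as long as the build artifacts end up at the archive root. As an illustration only (the folder and workflow names below are placeholders, not part of the official tooling), a minimal Python sketch that zips a project folder:

```python
import zipfile
from pathlib import Path

def zip_logic_app(project_dir: str, zip_path: str) -> None:
    """Zip the contents of project_dir so host.json sits at the archive root."""
    root = Path(project_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for file in sorted(root.rglob("*")):
            if file.is_file():
                # Store paths relative to the project root, not its parent,
                # so host.json and connections.json land at the archive root.
                zf.write(file, file.relative_to(root))
```

The same result is achievable with any zip tool; the detail that matters is the relative path handling, so the workflow folders and configuration files are at the top level of the .zip file.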
+
+### Before release to Azure
+
+The managed API connections inside your logic app project's **connections.json** file are created specifically for local use in Visual Studio Code. Before you can release your project artifacts from Visual Studio Code to Azure, you have to update them. To use the managed API connections in Azure, you have to update their authentication methods so that they're in the correct format for Azure.
+
+#### Update authentication type
+
+For each managed API connection that uses authentication, you have to update the **authentication** object from the local format in Visual Studio Code to the Azure portal format, as shown by the first and second code examples, respectively:
+
+**Visual Studio Code format**
```json {
- "type": "Microsoft.Web/connections",
- "apiVersion": "2016-06-01",
- "location": "[parameters('location')]",
- "name": "[parameters('connectionName')]",
- "properties": {}
+ "managedApiConnections": {
+ "sql": {
+ "api": {
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/providers/Microsoft.Web/locations/westus/managedApis/sql"
+ },
+ "connection": {
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/ase/providers/Microsoft.Web/connections/sql-8"
+ },
+ "connectionRuntimeUrl": "https://xxxxxxxxxxxxxx.01.common.logic-westus.azure-apihub.net/apim/sql/xxxxxxxxxxxxxxxxxxxxxxxxx/",
+ "authentication": {
+ "type": "Raw",
+ "scheme": "Key",
+ "parameter": "@appsetting('sql-connectionKey')"
+ }
+ }
+ }
} ```
-To find the values that you need to use in the `properties` object for completing the connection resource definition, you can use the following API for a specific connector:
+**Azure portal format**
-`GET https://management.azure.com/subscriptions/{subscription-ID}/providers/Microsoft.Web/locations/{location}/managedApis/{connector-name}?api-version=2016-06-01`
+```json
+{
+ "managedApiConnections": {
+ "sql": {
+ "api": {
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/providers/Microsoft.Web/locations/westus/managedApis/sql"
+ },
+ "connection": {
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/ase/providers/Microsoft.Web/connections/sql-8"
+ },
+ "connectionRuntimeUrl": "https://xxxxxxxxxxxxxx.01.common.logic-westus.azure-apihub.net/apim/sql/xxxxxxxxxxxxxxxxxxxxxxxxx/",
+ "authentication": {
+ "type": "ManagedServiceIdentity"
+ }
+ }
+ }
+}
+```
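
The rewrite above is mechanical: replace each connection's **authentication** object and leave everything else alone. As a rough sketch (not an official tool; the connection shape is abridged to just the fields that change):

```python
import copy

def to_portal_auth(connections: dict) -> dict:
    """Return a copy of a connections.json dict with every managed API
    connection switched to ManagedServiceIdentity authentication.
    All other connection properties are left untouched."""
    result = copy.deepcopy(connections)
    for conn in result.get("managedApiConnections", {}).values():
        conn["authentication"] = {"type": "ManagedServiceIdentity"}
    return result

# Abridged local-format input (only the authentication object is shown).
local_format = {
    "managedApiConnections": {
        "sql": {
            "authentication": {
                "type": "Raw",
                "scheme": "Key",
                "parameter": "@appsetting('sql-connectionKey')",
            }
        }
    }
}
print(to_portal_auth(local_format)["managedApiConnections"]["sql"]["authentication"])
# {'type': 'ManagedServiceIdentity'}
```

A build pipeline step could apply the same transformation to the real **connections.json** before release, so the local file keeps working in Visual Studio Code.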
-In the response, find the `connectionParameters` object, which contains all the information necessary for you to complete resource definition for that specific connector. The following example shows an example resource definition for a SQL managed connection:
+#### Create API connections as needed
+
+If you're deploying your logic app workflow to an Azure region or subscription different from your local development environment, you must also make sure to create these managed API connections before deployment. Azure Resource Manager template (ARM template) deployment is the easiest way to create managed API connections.
+
+The following example shows a SQL managed API connection resource definition in an ARM template:
```json {
In the response, find the `connectionParameters` object, which contains all the
"properties": { "displayName": "sqltestconnector", "api": {
- "id": "/subscriptions/{subscription-ID}/providers/Microsoft.Web/locations/{location}/managedApis/sql"
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/providers/Microsoft.Web/locations/{Azure-region-location}/managedApis/sql"
}, "parameterValues": { "authType": "windows",
In the response, find the `connectionParameters` object, which contains all the
} ```
-As an alternative, you can review the network trace for when you create a connection in the Logic Apps designer. Find the `PUT` call to the managed API for the connector as previously described, and review the request body for all the information you need.
+To find the values that you need to use in the **properties** object for completing the connection resource definition, you can use the following API for a specific connector:
-## Deploy logic app resources (zip deploy)
+`GET https://management.azure.com/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region-location}/managedApis/{connector-name}?api-version=2016-06-01`
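
For reference, the URL can be assembled like this (a sketch; the subscription ID and region values are placeholders):

```python
def managed_api_url(subscription_id: str, location: str, connector: str) -> str:
    """Build the Azure Resource Manager URL for a managed connector's metadata."""
    return (
        f"https://management.azure.com/subscriptions/{subscription_id}"
        f"/providers/Microsoft.Web/locations/{location}"
        f"/managedApis/{connector}?api-version=2016-06-01"
    )

print(managed_api_url("00000000-0000-0000-0000-000000000000", "westus", "sql"))
```

The call itself requires an Azure Resource Manager bearer token; a signed-in Azure CLI session can issue the same GET for ad-hoc exploration.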
-After you push your logic app project to your source repository, you can set up build and release pipelines that deploy logic apps to infrastructure inside or outside Azure.
+In the response, find the **connectionParameters** object, which contains all the information necessary for you to complete the resource definition for that specific connector. The following example shows a resource definition for a SQL managed connection:
-### Build your project
-
-To set up a build pipeline based on your logic app project type, complete the corresponding actions listed in the following table:
+```json
+{
+ "type": "Microsoft.Web/connections",
+  "apiVersion": "2016-06-01",
+ "location": "[parameters('location')]",
+ "name": "[parameters('connectionName')]",
+ "properties": {
+ "displayName": "sqltestconnector",
+ "api": {
+ "id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region-location}/managedApis/sql"
+ },
+ "parameterValues": {
+ "authType": "windows",
+ "database": "TestDB",
+ "password": "TestPassword",
+ "server": "TestServer",
+ "username": "TestUserName"
+ }
+ }
+}
+```
-| Project type | Description and steps |
-|--|--|
-| Nuget-based | The NuGet-based project structure is based on the .NET Framework. To build these projects, make sure to follow the build steps for .NET Standard. For more information, review the [Create a NuGet package using MSBuild](/nuget/create-packages/creating-a-package-msbuild) documentation. |
-| Bundle-based | The extension bundle-based project isn't language-specific and doesn't require any language-specific build steps. You can use any method to zip your project files. <p><p>**Important**: Make sure that your .zip file contains the actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files. |
-|||
+As an alternative, you can capture and review the network trace for when you create a connection using the workflow designer in Azure Logic Apps. Find the `PUT` call that's sent to the connector's managed API as previously described, and review the request body for all the necessary information.
### Release to Azure
-To set up a release pipeline that deploys to Azure, choose the associated option for GitHub, Azure DevOps, or Azure CLI.
+To set up a release pipeline that deploys to Azure, follow the associated steps for GitHub, Azure DevOps, or Azure CLI.
> [!NOTE] > Azure Logic Apps currently doesn't support Azure deployment slots.
az logicapp deployment source config-zip --name MyLogicAppName
-### Release to containers
-
-If you containerize your logic app, deployment works mostly the same as any other container you deploy and manage.
+### After release to Azure
-For examples that show how to implement an end-to-end container build and deployment pipeline, review [CI/CD for Containers](https://azure.microsoft.com/solutions/architecture/cicd-for-containers/).
+Each API connection has access policies. After the zip deployment completes, you must open your logic app resource in the Azure portal, and create access policies for each API connection to set up permissions for the deployed logic app. Zip deployment also doesn't create app settings for you, so after deployment, you must create these app settings based on the **local.settings.json** file in your local Visual Studio Code project.
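
App settings live in the **Values** object of **local.settings.json**. A minimal sketch (the setting names shown are made-up examples) that flattens that object into `NAME=VALUE` pairs you can feed to your deployment tooling:

```python
def app_settings_pairs(local_settings: dict) -> list:
    """Flatten the "Values" object of local.settings.json into NAME=VALUE pairs."""
    return [f"{name}={value}"
            for name, value in local_settings.get("Values", {}).items()]

# Hypothetical local.settings.json content, already parsed from JSON.
example = {
    "IsEncrypted": False,
    "Values": {
        "APP_KIND": "workflowApp",
        "sql-connectionKey": "<connection-key>",
    },
}
for pair in app_settings_pairs(example):
    print(pair)
```

Each pair can then be applied to the deployed app through your release pipeline or the CLI, keeping secrets out of source control.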
## Next steps
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Each of the tasks (and some models) have a set of parameters in the `model_setti
| Task | Parameter name | Default | | |- | | |Image classification (multi-class and multi-label) | `valid_resize_size`<br>`valid_crop_size` | 256<br>224 |
-|Object detection, instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`box_nms_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 |
-|Object detection using `yolov5`| `img_size`<br>`model_size`<br>`box_score_thresh`<br>`box_iou_thresh` | 640<br>medium<br>0.1<br>0.5 |
+|Object detection, instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 |
+|Object detection using `yolov5`| `img_size`<br>`model_size`<br>`box_score_thresh`<br>`nms_iou_thresh` | 640<br>medium<br>0.1<br>0.5 |
For a detailed description on task specific hyperparameters, please refer to [Hyperparameters for computer vision tasks in automated machine learning](reference-automl-images-hyperparameters.md).
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
You define input data directories for your pipeline in the pipeline YAML file us
:::image type="content" source="media/how-to-create-component-pipelines-cli/inputs-and-outputs.png" alt-text="Image showing how the inputs and outputs paths map to the jobs inputs and outputs paths" lightbox="media/how-to-create-component-pipelines-cli/inputs-and-outputs.png":::
-1. The `inputs.pipeline_sample_input_data` path creates a key identifier and uploads the input data from the `local_path` directory. This identifier`${{inputs.pipeline_sample_input_data}}` is then used as the value of the `jobs.componentA_job.inputs.componentA_input` key.
-1. The `jobs.componentA_job.outputs.componentA_output` path as an identifier (`${{jobs.componentA_job.outputs.componentA_output`}}) that's used as the value for the next step's `jobs.componentB_job.inputs.componentB_input` key.
-1. As with Component A, the output of Component B is used as the input to Component C.
-1. The pipeline's `outputs.final_pipeline_output` key is the source of the identifier used as the value for the `jobs.componentC_job.outputs.componentC_output` key. In other words, Component C's output is the pipeline's final output.
+1. The `inputs.pipeline_sample_input_data` path (line 6) creates a key identifier and uploads the input data from the `local_path` directory (line 8). This identifier `${{inputs.pipeline_sample_input_data}}` is then used as the value of the `jobs.componentA_job.inputs.componentA_input` key (line 19). In other words, the pipeline's `pipeline_sample_input_data` input is passed to the `componentA_input` input of Component A.
+1. The `jobs.componentA_job.outputs.componentA_output` path (line 21) is used with the identifier `${{jobs.componentA_job.outputs.componentA_output}}` as the value for the next step's `jobs.componentB_job.inputs.componentB_input` key (line 27).
+1. As with Component A, the output of Component B (line 29) is used as the input to Component C (line 35).
+1. The pipeline's `outputs.final_pipeline_output` key (line 11) is the source of the identifier used as the value for the `jobs.componentC_job.outputs.componentC_output` key (line 37). In other words, Component C's output is the pipeline's final output.
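
Abridged for illustration, the mapping in the list above corresponds to a pipeline YAML shaped roughly like this (component and key names are taken from the list; the line numbers cited above refer to the full file, so they won't match this sketch):

```yaml
inputs:
  pipeline_sample_input_data:
    local_path: ./data            # assumption: a local input data directory
outputs:
  final_pipeline_output:
jobs:
  componentA_job:
    inputs:
      componentA_input: ${{inputs.pipeline_sample_input_data}}
    outputs:
      componentA_output:
  componentB_job:
    inputs:
      componentB_input: ${{jobs.componentA_job.outputs.componentA_output}}
    outputs:
      componentB_output:
  componentC_job:
    inputs:
      componentC_input: ${{jobs.componentB_job.outputs.componentB_output}}
    outputs:
      componentC_output: ${{outputs.final_pipeline_output}}
```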
Studio's visualization of this pipeline looks like this:
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md
You can explore the REST API using the general pattern of:
| subscriptions/YOUR-SUBSCRIPTION-ID/ | subscriptions/abcde123-abab-abab-1234-0123456789abc/ | | resourceGroups/YOUR-RESOURCE-GROUP/ | resourceGroups/MyResourceGroup/ | | providers/operation-provider/ | providers/Microsoft.MachineLearningServices/ |
-| provider-resource-path/ | workspaces/MLWorkspace/MyWorkspace/FirstExperiment/runs/1/ |
+| provider-resource-path/ | workspaces/MyWorkspace/experiments/FirstExperiment/runs/1/ |
| operations-endpoint/ | artifacts/metadata/ |
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-hyperparameters.md
This table summarizes hyperparameters specific to the `yolov5` algorithm.
| `model_size` | Model size. <br> Must be `small`, `medium`, `large`, or `xlarge`. <br><br> *Note: training run may get into CUDA OOM if the model size is too big*. | `medium` | | `multi_scale` | Enable multi-scale image by varying image size by +/- 50% <br> Must be 0 or 1. <br> <br> *Note: training run may get into CUDA OOM if no sufficient GPU memory*. | 0 | | `box_score_thresh` | During inference, only return proposals with a score greater than `box_score_thresh`. The score is the multiplication of the objectness score and classification probability. <br> Must be a float in the range [0, 1]. | 0.1 |
-| `nms_iou_thresh` | IoU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
-
+| `nms_iou_thresh` | IOU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
## Model agnostic hyperparameters
The following hyperparameters are for object detection and instance segmentation
| `min_size` | Minimum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*.| 600 | | `max_size` | Maximum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer.<br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 1333 | | `box_score_thresh` | During inference, only return proposals with a classification score greater than `box_score_thresh`. <br> Must be a float in the range [0, 1].| 0.3 |
-| `box_nms_thresh` | Non-maximum suppression (NMS) threshold for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 |
+| `nms_iou_thresh` | IOU (intersection over union) threshold used in non-maximum suppression (NMS) for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 |
| `box_detections_per_img` | Maximum number of detections per image, for all classes. <br> Must be a positive integer.| 100 | | `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](how-to-use-automl-small-object-detect.md) logic*<br> A tuple of two integers passed as a string. Example: --tile_grid_size "(3, 2)" | No Default | | `tile_overlap_ratio` | Overlap ratio between adjacent tiles in each dimension. <br> Must be float in the range of [0, 1) | 0.25 |
managed-instance-apache-cassandra Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/materialized-views.md
az managed-cassandra datacenter update \
--resource-group $resourceGroupName \ --cluster-name $clusterName \ --data-center-name $dataCenterName \
- --base64-encoded-cassandra-yaml-fragment "$ENCODED_FRAGMENT"
+ --base64-encoded-cassandra-yaml-fragment $ENCODED_FRAGMENT
``` ## Next steps
media-services Asset Create Asset Upload Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-create-asset-upload-portal-quickstart.md
- Title: Use portal to upload, encode, and stream content
-description: This quickstart shows you how to use portal to upload, encode, and stream content with Azure Media Services.
- Previously updated : 01/14/2022-----
-# Quickstart: Upload, encode, and stream content with portal
--
-This quickstart shows you how to use the Azure portal to upload, encode, and stream content with Azure Media Services.
-
-## Overview
-
-* To start managing, encrypting, encoding, analyzing, and streaming media content in Azure, you need to [create a Media Services account](account-create-how-to.md).
-
- > [!NOTE]
- > If you previously uploaded a video into the Media Services account using Media Services v3 API or the content was generated based on a live output, you will not see the **Encode**, **Analyze**, or **Encrypt** buttons in the Azure portal. Use the Media Services v3 APIs to perform these tasks.
-
- Review the following:
- * [Assets concept](assets-concept.md)
- * [Cloud upload and storage](storage-account-concept.md)
- * [Naming conventions for resource names](media-services-apis-overview.md#naming-conventions)
-
-* Once you upload your high-quality digital media file into an asset (an input asset), you can process it (encode or analyze). The processed content goes into another asset (output asset).
- * [Encode](encode-concept.md) your uploaded file into formats that can be played on a wide variety of browsers and devices.
- * [Analyze](analyze-video-audio-files-concept.md) your uploaded file.
-
- Presently, when using the Azure portal, you can perform the operations such as generating TTML and WebVTT closed caption files. Files in these formats can be used to make the audio and video files accessible to people with hearing or visual disability. You can also extract keywords from your content.
-
- For a rich experience that enables you to extract insights from your audio and video files, use Media Services v3 presets. For more information, see [Tutorial: Analyze videos with Media Services v3](analyze-videos-tutorial.md). If you require detailed insights, use [Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/index.yml) directly.
-
-* After the content gets processed, you can deliver media content to the client players. To make the videos in the output asset available to the clients for playback, you have to create a [streaming locator](stream-streaming-locators-concept.md). When creating a streaming locator, you need to specify a [streaming policy](stream-streaming-policy-concept.md). Streaming policies enable you to define streaming protocols and encryption options (if any) for your streaming locators. For information on packaging and filtering content, see [Packaging and delivery](encode-dynamic-packaging-concept.md) and [Filters](filters-concept.md).
-
-* You can protect your content by encrypting it with Advanced Encryption Standard (AES-128) or/and any of the three major DRM systems like Microsoft PlayReady, Google Widevine, and Apple FairPlay. For information on how to configure the content protection, see [Quickstart: Use portal to encrypt content](drm-encrypt-content-how-to.md).
-
-## Prerequisites
--
-Follow the steps to [Create a Media Services account](account-create-how-to.md).
-
-## Upload a new video
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Locate and select your Media Services account.
-1. In the left navigation pane, select **Assets** under **Media Services**.
-1. Select **Upload** at the top of the window.
-1. Choose a **Storage account** from the pull-down menu.
-1. Browse the file that you want to upload. An **Asset name** gets created for your media. If necessary, you can edit this **Asset name**.
-
- > [!TIP]
- > If you want to choose multiple files, add them to one folder in Windows File Explorer. When browsing to **Upload files**, select all the files. This creates multiple assets.
-
-1. Select the desired option at the bottom of the **Upload new assets** window.
-1. Navigate to your **Assets** resource window. After a successful upload, a new asset gets added to the list.
-
-## Add transform
-
-1. Under the **Media Services** services, select **Transforms + jobs**.
-1. Select **Add transform**.
-1. In the **Add a transform** window, enter the details.
-1. If your media is a video, select **Encoding** as your **Transform type**. Select a **Built-in preset name** from the pull-down menu. For more information, see [EncoderNamedPreset](/rest/api/media/transforms/create-or-update#encodernamedpreset).
-1. Select **Add**.
-
-## Encode (Add job)
-
-1. Select either **Assets** or **Transforms + jobs**.
-1. Select **Add job** at the top of the resource window.
-1. In **Create a job** window, enter the details. Select **Create**.
-1. Navigate to **Transforms + jobs**. Select the **Transform name** to check the job status. A job goes through multiple states like **Scheduled** , **Queued**, **Processing**, and **Final**. If the job encounters an error, you get the **Error** state.
-1. Navigate to your **Assets** resource window. After the job gets created successfully, it generates an output asset that contains the encoded content.
-
-## Publish and stream
-
-To publish an asset, you need to add a streaming locator to your asset and run the streaming endpoint.
-
-### Add streaming locator
-
-1. Under Media Services, select **Assets**.
-1. Select the output asset.
-1. Select **New streaming locator**.
-1. In **Add streaming locator** window, enter the details. Select a predefined **Streaming policy**. For more information, see [streaming policies](stream-streaming-policy-concept.md).
-1. If you want your stream to be encrypted, [Create a content key policy](drm-encrypt-content-how-to.md#create-a-content-key-policy) and select it in the **Add streaming locator** window.
-1. Select **Add**. This action publishes the asset and generates the streaming URLs.
-
-### Start streaming endpoint
-1. Once the asset gets published, you can stream it right in the portal. You can also copy the streaming URL and use it in your client player. Make sure the [streaming endpoint](stream-streaming-endpoint-concept.md) is running. When you first create a Media Services account, a default streaming endpoint gets created and remains in a stopped state. **Start** the streaming endpoint to stream your content. You're only billed when your streaming endpoint is in the running state.
-1. Select the output asset.
-1. Select **Start streaming endpoint?**. Select **Start** to run the streaming endpoint. The status of **default** streaming endpoint changes from **Stopped** to **Running**. Your billing will start now. You can now use the streaming URLs to deliver content.
-1. Select **Reload player**.
-
-### Stop streaming endpoint
-
-1. Navigate to **Media Services** and select **Streaming endpoints**.
-1. Select your streaming endpoint **Name**. In this quickstart, we are using the **default** streaming endpoint. The current state is **Running**.
-1. Select **Stop**. A **Stop streaming endpoint?** window gets opened. Select **Yes**. Now, the **default** streaming endpoint is in a **Stopped** state. You cannot use the streaming URLs to deliver the content.
-
-## Cleanup resources
-
-If you intend to try the other quickstarts, you should hold on to the resources created for this quickstart. Otherwise, sign in to the Azure portal, browse to your resource group, select the resource group under which you followed this quickstart, and delete all the resources.
media-services Concept Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/concept-managed-identities.md
Previously updated : 05/17/2021 Last updated : 02/17/2022
A common challenge for developers is the management of secrets and credentials to secure communication between different services. On Azure, managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens. + ## Media Services Managed Identity scenarios There are three scenarios where Managed Identities can be used with Media
media-services Encode Recommended On Premises Live Encoders https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-recommended-on-premises-live-encoders.md
keywords: encoding;encoders;media
Previously updated : 11/10/2020 Last updated : 02/17/2022
To play back content, both an audio and video stream must be present. Playback o
- Whenever possible, use a hardwired internet connection. - When you're determining bandwidth requirements, double the streaming bitrates. Although not mandatory, this simple rule helps to mitigate the impact of network congestion. - When using software-based encoders, close out any unnecessary programs.-- Changing your encoder configuration after it has started pushing has negative effects on the event. Configuration changes can cause the event to become unstable.
+- Changing your encoder configuration after it has started pushing has negative effects on the event. Configuration changes can cause the event to become unstable. If you change your encoder configuration, you need to [reset](https://docs.microsoft.com/rest/api/media/live-events/reset) and restart the live event for the change to take effect. If you stop and start the live event without resetting it, the live event preserves the previous configuration.
- Always test and validate newer versions of encoder software for continued compatibility with Azure Media Services. Microsoft does not re-validate encoders on this list, and most validations are done by the software vendors directly as a "self-certification." - Ensure that you give yourself ample time to set up your event. For high-scale events, we recommend starting the setup an hour before your event. - Use the H.264 video and AAC-LC audio codec output.
To play back content, both an audio and video stream must be present. Playback o
- Use strict CBR encoding recommended for optimum adaptive bitrate performance. > [!IMPORTANT]
-> Watch the physical condition of the machine (CPU / Memory / etc) as uploading fragments to cloud involves CPU and IO operations. If you change any settings in the encoder, be certain reset the channels / live event for the change to take effect.
+> Watch the physical condition of the machine (CPU / Memory / etc) as uploading fragments to cloud involves CPU and IO operations.
+> If you change any encoder configurations, [reset](https://docs.microsoft.com/rest/api/media/live-events/reset) the channels and the live event for the change to take effect. If you stop and start the live event without resetting it, the live event preserves the previous configuration.
## See also
media-services Live Event Streaming Best Practices Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/live-event-streaming-best-practices-guide.md
+
+ Title: Media Services live streaming best practices guide
+description: This article describes best practices for achieving low-latency live streams with Azure Media Services.
+++++++ Last updated : 02/14/2022+++
+# Media Services live streaming best practices guide
+
+Customers often ask how they can reduce the latency of their live stream. There are many factors that
+determine the end-to-end latency of a stream. Here are some that you should consider:
+
+1. Delays on the contribution encoder side, for example, when customers use
+    encoding software such as OBS Studio or Wirecast to send an
+    RTMP live stream to Media Services. The settings in this software are critical to the end-to-end latency of a live
+    stream.
+
+2. Delays in the live streaming pipeline within Azure Media Services.
+
+3. CDN performance
+
+4. Buffering algorithms of the video player and network conditions on
+ the client side
+
+5. Timing of provisioning
+
+## Contribution encoder
+
+As a customer, you control the source encoder
+settings before the RTMP stream reaches Media Services. Here are some
+recommendations for the settings that give you the lowest possible
+latency:
+
+1. **Pick the region physically closest to your contribution
+    encoder for your Media Services account.** This ensures
+    that you have a great network connection to the Media Services
+    account.
+
+2. **Use a consistent fragment size.** We recommend a GOP size of 2
+ seconds. The default on some encoders, such as OBS, is 8 seconds.
+ Make sure that you change this setting.
+
+3. **Use the GPU encoder if your encoding software allows you to do
+    that.** This offloads encoding work from the CPU to the GPU.
+
+4. **Use an encoding profile that is optimized for low-latency.** For
+ example, with OBS Studio, if you use the Nvidia H.264 encoder, you
+    may see the "zero latency" preset.
+
+5. **Send content that is no higher in resolution than what you plan to
+    stream.** For example, if you're using 720p standard encoding live
+    events, send content that is already at 720p.
+
+6. **Keep your framerate at 30fps or lower unless using pass-through
+ live events.** While we support 60 fps input for live events, our
+ encoding live event output is still not above 30 fps.
+
+## Configuration of the Azure Media Services live event
+
+Here are some configurations that will help you reduce the latency in
+our pipeline:
+
+1. **Use the 'LowLatency' StreamOption on the live event.**
+
+2. **We recommend that you choose CMAF output for both HLS and DASH
+ playback.** This allows you to share the same fragments for both
+ formats. It increases your cache hit ratio when a CDN is used. For example:
+
+
+| Type | Format | URL example |
+||||
+|HLS CMAF (recommended) | format=m3u8-cmaf | `https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=m3u8-cmaf)` |
+| MPEG-DASH CMAF (recommended) | format=mpd-time-cmaf | `https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=mpd-time-cmaf)` |
+
+3. **If you must choose TS output, use an HLS packing ratio of 1.** This
+allows us to pack only one fragment into one HLS segment. However, you won't
+get the full benefits of LL-HLS in native Apple players.
+
+## Player optimizations
+
+**When choosing and configuring a video player, make sure you use settings that are optimized for lower latency.**
+
+Media Services supports different streaming protocol outputs: DASH,
+HLS with TS output, and HLS with CMAF fragments. Depending on the
+player's implementation, buffering decisions impact the latency a
+viewer observes. Poor network conditions or default algorithms that
+favor quality and stability of playback could cause players to decide to
+buffer more content upfront to prevent interruptions during playback.
+These buffers before and during the playback sessions would add to the
+end-to-end latency.
+
+When Azure Media Player is used, the *Low Latency Heuristics* profile
+optimizes the player to have the lowest possible latency on the player
+side.
+
+## CDN choice
+
+Streaming endpoints are the origin servers that deliver the live and VOD
+streaming content to the CDN or to the customer directly. If a live
+event expects a large audience, or the audience is geographically
+located far away from the streaming endpoint (origin) serving the
+content, it's *important* for the customer to shield the origin using a
+Content Delivery Network (CDN).
+
+We recommend using Azure CDN from Verizon (Standard or
+Premium). We've optimized the integration experience so that a
+customer can configure this CDN with a single selection in the Azure portal. Be sure to turn on Origin Shield and Streaming Optimizations for
+your CDN endpoint whenever you start your streaming endpoint.
+
+Our customers also have good experiences bringing their own CDN. Ensure that measures are taken on the CDN to shield the origin from
+excessive traffic.
+
+## Streaming endpoint scaling
+
+> [!NOTE]
+> A **standard streaming endpoint/origin** is a *shared* resource
+> that allows customers with low traffic volumes to stream content at
+> a lower cost. You would **not** use a standard streaming endpoint to
+> scale streaming units if you expect large traffic volumes or you plan to
+> use a CDN.
+
+A **premium streaming endpoint/origin** offers more flexibility and
+isolation for customers to scale by adding or removing *dedicated*
+streaming units. A *streaming unit* is a compute resource allocated to a
+streaming endpoint. Each streaming unit can stream approximately 200
+Mbps of traffic.
+
+While you can stream many live events at once using
+the same streaming endpoint, the default maximum number of streaming
+units for one streaming endpoint is 10. You can open a support ticket to
+request more than the default 10.
+
+## Determine the premium streaming units needed
+
+There are two steps to determine the number of premium streaming units
+needed:
+
+1. Determine the total egress needed.
+
+2. Divide the total egress by 200, which is the maximum Mbps each streaming unit can stream.
+
+### Determine the total egress needed
+
+Determine the total egress needed by using the following formula.
+
+*Total egress needed = average bandwidth x number of concurrent viewers
+x percent* *handled by the streaming endpoint.*
+
+Let's take a look at each of the multipliers in turn.
+
+**Average bandwidth.** What is the *average* bitrate you plan to stream?
+In other words, if you're going to have multiple bitrates available,
+what bitrate is the average of all the bitrates you're planning for?
+You can estimate this using one of the following methods:
+
+For a live event that *includes encoding*:
+
+  - If you don't know what your *average* bandwidth is going to be, you
+    could use our top bitrates as an estimate. Our *top* bitrate is:
+
+    - 5.5 Mbps for the 1080p encoded live events; therefore, your
+      average bitrate is going to be somewhere around 3.5 Mbps.
+
+ - Look at the encoding preset used for encoding the live event, for
+ example, the AdaptiveStreaming(H.264) preset. See this [output
+ example](encode-autogen-bitrate-ladder.md#output).
+
+For a live event that is simply using pass-through and not encoding:
+
+ - Check the encoding bitrate ladder used by your local encoder.
+
+**Number of concurrent viewers.** How many concurrent viewers are
+expected? This could be hard to estimate, but do your best based on your
+customer data. Are you streaming a conference to a global audience? Are
+you planning to live stream to sell a set of products to your customers?
+
+**Percent of traffic handled by the streaming endpoint.** This
+can also be expressed as "the percent of traffic NOT handled by the CDN"
+since that is the number that actually goes into the formula. So, with
+that in mind, what CDN offload do you expect? If the CDN is expected
+to handle 90% of the live traffic, then only 10% of the traffic would be
+expected on the streaming endpoint. The number used in the formula is
+0.10, which is the percentage of traffic expected on the streaming
+endpoint.
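The egress formula above can be sketched in a few lines. This is a minimal illustration using the multipliers just described; the function and parameter names are my own:

```python
def total_egress_mbps(avg_bandwidth_mbps: float,
                      concurrent_viewers: int,
                      pct_on_streaming_endpoint: float) -> float:
    """Total egress = average bandwidth x concurrent viewers x
    fraction of traffic NOT offloaded to the CDN."""
    return avg_bandwidth_mbps * concurrent_viewers * pct_on_streaming_endpoint

# 3.5 Mbps average bitrate, 1,000 viewers, CDN offloads 90%
# (the streaming endpoint serves the remaining 10%)
egress = total_egress_mbps(3.5, 1000, 0.1)  # about 350 Mbps
```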
+
+### Determine the number of premium streaming units needed
+
+Premium streaming units needed = (Average bandwidth x \# of viewers x
+Percentage of traffic not handled by the CDN) / 200 Mbps
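Since streaming units are provisioned whole, round the result up. A short sketch of the formula (function name and defaults are my own; 200 Mbps per unit comes from the section above):

```python
import math

def premium_streaming_units(avg_bandwidth_mbps: float,
                            concurrent_viewers: int,
                            pct_not_on_cdn: float,
                            unit_capacity_mbps: float = 200) -> int:
    """Each premium streaming unit serves roughly 200 Mbps of egress;
    round up because units can only be provisioned whole."""
    egress = avg_bandwidth_mbps * concurrent_viewers * pct_not_on_cdn
    return math.ceil(egress / unit_capacity_mbps)

units = premium_streaming_units(3.5, 1000, 0.1)  # 350 / 200 = 1.75 -> 2
```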
+
+### Example
+
+You've recently released a new product and want to present it to your
+established customers. You want low latency because you don't want to
+frustrate your already busy audience, so you'll use premium streaming
+endpoints and a CDN.
+
+You have approximately 100,000 customers, but they probably aren't all
+going to watch your live event. You guess that in the best case, only 1%
+of them will attend, which brings your expected concurrent viewers to
+1,000.
+
+*Number of concurrent viewers =* *1,000*
+
+You've decided that you're going to use Media Services to encode your
+live stream and won't be using pass-through. You don't know what the
+average bandwidth is going to be, but you do know that you'll deliver
+in 1080p (*top* bitrate of 5.5 Mbps), so your *average* bandwidth is
+estimated to be 3.5 Mbps for your calculations.
+
+*Average bandwidth =* *3.5*
+
+Since your audience is dispersed worldwide, you expect that the CDN will
+handle most (90%) of the live traffic. Therefore, the premium streaming
+endpoints will only handle 10% of the traffic.
+
+*Percent handled by the streaming endpoint =* *10% = 0.1*
+
+Using the formula provided above:
+
+*Total egress needed = average bandwidth x number of concurrent viewers
+x percent handled by the streaming endpoint.*
+
+*total egress needed* = 3.5 x 1,000 x 0.1
+
+*total egress needed* = 350 Mbps
+
+Dividing the total egress by 200, you determine that you need 1.75
+premium streaming units.
+
+*premium streaming units needed* = *total egress needed*/200 Mbps
+
+*premium streaming units needed* = 1.75
+
+We'll round this number up to 2, so 2 premium streaming units are needed.
+
+### Use the portal to estimate your needs
+
+The Azure portal can help you simplify the calculations. On the
+streaming page, you can use the calculator provided to see the estimated
+audience reach when you change the average bandwidth, CDN hit ratio and
+number of streaming units.
+
+1. From the Media Services account page, select **Streaming endpoints** from
+   the menu.
+
+2. Add a new streaming endpoint by selecting **Add streaming endpoint**.
+
+3. Give the streaming endpoint a name.
+
+4. Select **Premium streaming endpoint** for the streaming endpoint type.
+
+5. Since you're just getting an estimate at this point, don't start
+   the streaming endpoint after creation. Select **No**.
+
+6. Select *Standard Verizon* or *Premium Verizon* for your CDN pricing
+ tier. The profile name will change accordingly. Leave the name as it
+ is for this exercise.
+
+7. For the CDN profile, select **Create New**.
+
+8. Select **Create**. Once the endpoint has been deployed, the streaming
+ endpoints screen will appear.
+
+9. Select the streaming endpoint you just created. The streaming
+ endpoint screen will appear with audience reach estimates.
+
+10. The default setting for the streaming endpoint with 1 streaming unit
+ shows that it's estimated to stream to 571 concurrent viewers at
+ 3.5 Mbps using 90% of the CDN and 10% of the streaming endpoint.
+
+11. Change the percentage of the **Egress source** from 90% from CDN cache
+ to 0%. The calculator will estimate that you'll be able to stream
+ to 57 concurrent viewers at 3.5 Mbps at 200 Mbps **without** a CDN.
+
+12. Now change the **Egress source** back to 90%.
+
+13. Then, change the **streaming units** to 2. The calculator will estimate
+    that you'll be able to stream to 1143 concurrent viewers at
+    3.5 Mbps with 400 Mbps of egress, with the CDN handling 90% of the traffic.
+
+14. Select **Save**.
+
+15. You can start the streaming endpoint and try sending traffic to it.
+ The metrics at the bottom of the screen will track actual traffic.
+
+## Timing
+
+You may want to provision streaming units 1 hour ahead of the expected
+peak usage to ensure streaming units are ready.
media-services Live Event Types Comparison Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/live-event-types-comparison-reference.md
Previously updated : 08/31/2020 Last updated : 02/17/2022
If the source frame rate on input is >30 fps, the frame rate will be reduced to
For both *Default720p* and *Default1080p* presets, audio is encoded to stereo AAC-LC at 128 kbps. The sampling rate follows that of the audio track in the contribution feed.
+> [!NOTE]
+> If the sampling rate is low, such as 8 kHz, the encoded output will be lower than 128 kbps.
+ ## Implicit properties of the live encoder The previous section describes the properties of the live encoder that can be controlled explicitly, via the preset - such as the number of layers, resolutions, and bitrates. This section clarifies the implicit properties.
media-services Security Access Storage Managed Identity Cli Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/security-access-storage-managed-identity-cli-tutorial.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)] + If you would like to access a storage account when the storage account is configured to block requests from unknown IP addresses, the Media Services account must be granted access to the Storage account. Follow the steps below to create a Managed Identity for the Media Services account and grant this identity access to storage using the Media Services CLI. :::image type="content" source="media/diagrams/managed-identities-scenario-storage-permissions-media-services-account.svg" alt-text="Media Services account uses a Managed Identity to access storage":::
media-services Security Encrypt Data Managed Identity Cli Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/security-encrypt-data-managed-identity-cli-tutorial.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)] + If you'd like Media Services to encrypt data using a key from your Key Vault, the Media Services account must be granted *access* to the Key Vault. Follow the steps below to create a Managed Identity for the Media Services account and grant this identity access to your Key Vault using the Media Services CLI. :::image type="content" source="media/diagrams/managed-identities-scenario-keyvault-media-services-account.svg" alt-text="Media Services account uses Key Vault with a Managed Identity":::
media-services Transform Create Copy Video Audio How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-copy-video-audio-how-to.md
This article shows how to create a `CopyVideo/CopyAudio` transform.
+This transform allows you to have the input video and audio streams copied from the input asset to the output asset without any changes. This can be valuable with multi-bitrate encoding output where the input video and/or audio would be part of the output. It simply writes the manifest and other files needed to stream the content.
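As an illustration, here's a sketch of what such a transform preset could look like as a v3 `StandardEncoderPreset`. This is an assumption based on the preset schema, not a verbatim sample from this article; check the `@odata.type` values and the filename pattern against your API version:

```json
{
  "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
  "codecs": [
    { "@odata.type": "#Microsoft.Media.CopyVideo" },
    { "@odata.type": "#Microsoft.Media.CopyAudio" }
  ],
  "formats": [
    {
      "@odata.type": "#Microsoft.Media.Mp4Format",
      "filenamePattern": "{Basename}{Extension}"
    }
  ]
}
```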
+ ## Prerequisites Follow the steps in [Create a Media Services account](./account-create-how-to.md) to create the needed Media Services account and resource group to create an asset.
media-services Transform Create Thumbnail Sprites How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-thumbnail-sprites-how-to.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-How do I create thumbnail sprites? You can create a transform for a job that will generate thumbnail sprites for your videos. This article shows you how with the Media Services 2020-05-01 v3 API.
+This article shows you how to create thumbnail sprites with the Media Services 2020-05-01 v3 API.
+
+You can use Media Encoder Standard to generate a thumbnail sprite: a JPEG file that contains multiple small-resolution thumbnails stitched together into a single (large) image, along with a VTT file. The VTT file specifies the time range in the input video that each thumbnail represents, together with the size and coordinates of that thumbnail within the large JPEG file. Video players use the VTT file and the sprite image to show a 'visual' seekbar, providing the viewer with visual feedback when scrubbing back and forward along the video timeline.
Add the code snippets for your preferred development language.
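For orientation, here is a hedged sketch of a `StandardEncoderPreset` that could generate a thumbnail sprite with the 2020-05-01 API. The start/step/range timings, the `spriteColumn` count, and the layer dimensions are illustrative assumptions, not required values; verify the schema against your API version:

```json
{
  "@odata.type": "#Microsoft.Media.StandardEncoderPreset",
  "codecs": [
    {
      "@odata.type": "#Microsoft.Media.JpgImage",
      "start": "0%",
      "step": "5%",
      "range": "100%",
      "spriteColumn": 10,
      "layers": [
        {
          "@odata.type": "#Microsoft.Media.JpgLayer",
          "width": "20%",
          "height": "20%",
          "quality": 85
        }
      ]
    }
  ],
  "formats": [
    {
      "@odata.type": "#Microsoft.Media.JpgFormat",
      "filenamePattern": "sprite-{Basename}-{Index}{Extension}"
    }
  ]
}
```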
media-services Video On Demand Simple Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/video-on-demand-simple-portal-quickstart.md
+
+ Title: Quickstart Video on Demand with Media Services
+description: This article shows you how to do the basic steps for delivering video on demand (VOD) with Azure Media Services.
+Last updated : 02/16/2022
+# Quickstart Basic Video On Demand (VOD) with Media Services
+
+This article shows you how to do the basic steps for delivering a basic video on demand (VOD) application with Azure Media Services and a GitHub repository. All the steps happen with your web browser from our documentation, the Azure portal, and GitHub.
+
+## Prerequisites
+
+- [Create a Media Services account](account-create-how-to.md). When you set up the Media Services account, a storage account, a user managed identity, and a default streaming endpoint will also be created.
+- One MP4 video to use for this exercise.
+- Create a GitHub account if you don't have one already, and stay logged in.
+- Create an Azure [Static Web App](/azure/static-web-apps/get-started-portal?tabs=vanilla-javascript).
+
+> [!NOTE]
+> You will be switching between several browser tabs or windows during this process. The steps below assume that you have your browser set to open tabs. Keep them all open.
+
+## Upload videos
+
+You should have a media services account, a storage account, and a default streaming endpoint.
+
+1. In the portal, navigate to the Media Services account that you just created.
+1. Select **Assets**. Assets are the containers that are used to house your media content.
+1. Select **Upload**. The Upload new assets screen will appear.
+1. Select the storage account you created for the