Updates from: 02/18/2022 02:07:05
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
Previously updated : 01/14/2022 Last updated : 02/17/2022
You can also call a REST API technical profile with your business logic, overwri
| Attribute | Required | Description |
| --------- | -------- | ----------- |
-| setting.operatingMode <sup>1</sup>| No | For a sign-in page, this property controls the behavior of the username field, such as input validation and error messages. Expected values: `Username` or `Email`. |
+| setting.operatingMode <sup>1</sup>| No | For a sign-in page, this property controls the behavior of the username field, such as input validation and error messages. Expected values: `Username` or `Email`. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#operating-mode) of this metadata. |
| AllowGenerationOfClaimsWithNullValues| No| Allows generating a claim with a null value. For example, when a user doesn't select a checkbox.|
| ContentDefinitionReferenceId | Yes | The identifier of the [content definition](contentdefinitions.md) associated with this technical profile. |
| EnforceEmailVerification | No | For sign-up or profile edit, enforces email verification. Possible values: `true` (default), or `false`. |
-| setting.retryLimit | No | Controls the number of times a user can try to provide the data that is checked against a validation technical profile. For example, a user tries to sign-up with an account that already exists and keeps trying until the limit reached.
+| setting.retryLimit | No | Controls the number of times a user can try to provide the data that is checked against a validation technical profile. For example, a user tries to sign up with an account that already exists and keeps trying until the limit is reached. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#retry-limit) of this metadata.|
| SignUpTarget <sup>1</sup>| No | The sign-up target exchange identifier. When the user clicks the sign-up button, Azure AD B2C executes the specified exchange identifier. |
-| setting.showCancelButton | No | Displays the cancel button. Possible values: `true` (default), or `false` |
-| setting.showContinueButton | No | Displays the continue button. Possible values: `true` (default), or `false` |
-| setting.showSignupLink <sup>2</sup>| No | Displays the sign-up button. Possible values: `true` (default), or `false` |
-| setting.forgotPasswordLinkLocation <sup>2</sup>| No| Displays the forgot password link. Possible values: `AfterLabel` (default) displays the link directly after the label or after the password input field when there is no label, `AfterInput` displays the link after the password input field, `AfterButtons` displays the link on the bottom of the form after the buttons, or `None` removes the forgot password link.|
-| setting.enableRememberMe <sup>2</sup>| No| Displays the [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) checkbox. Possible values: `true` , or `false` (default). |
-| setting.inputVerificationDelayTimeInMilliseconds <sup>3</sup>| No| Improves user experience, by waiting for the user to stop typing, and then validate the value. Default value 2000 milliseconds. |
+| setting.showCancelButton | No | Displays the cancel button. Possible values: `true` (default), or `false`. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#show-the-cancel-button) of this metadata.|
+| setting.showContinueButton | No | Displays the continue button. Possible values: `true` (default), or `false`. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#show-the-continue-button) of this metadata. |
+| setting.showSignupLink <sup>2</sup>| No | Displays the sign-up button. Possible values: `true` (default), or `false`. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#show-sign-up-link) of this metadata. |
+| setting.forgotPasswordLinkLocation <sup>2</sup>| No| Displays the forgot password link. Possible values: `AfterLabel` (default) displays the link directly after the label or after the password input field when there is no label, `AfterInput` displays the link after the password input field, `AfterButtons` displays the link on the bottom of the form after the buttons, or `None` removes the forgot password link. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#forgot-password-link-location) of this metadata.|
+| setting.enableRememberMe <sup>2</sup>| No| Displays the [Keep me signed in](session-behavior.md?pivots=b2c-custom-policy#enable-keep-me-signed-in-kmsi) checkbox. Possible values: `true`, or `false` (default). Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#enable-remember-me-kmsi) of this metadata. |
+| setting.inputVerificationDelayTimeInMilliseconds <sup>3</sup>| No| Improves the user experience by waiting for the user to stop typing and then validating the value. Default value: 2000 milliseconds. Check out the [Live demo](https://github.com/azure-ad-b2c/unit-tests/tree/main/technical-profiles/self-asserted#input-verification-delay-time-in-milliseconds) of this metadata. |
| IncludeClaimResolvingInClaimsHandling | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. |
| setting.forgotPasswordLinkOverride <sup>4</sup>| No | A password reset claims exchange to be executed. For more information, see [Self-service password reset](add-password-reset-policy.md). |

Notes:

1. Available for content definition [DataUri](contentdefinitions.md#datauri) type of `unifiedssp`, or `unifiedssd`.
1. Available for content definition [DataUri](contentdefinitions.md#datauri) type of `unifiedssp`, or `unifiedssd`. [Page layout version](page-layout.md) 1.1.0 and above.
1. Available for [page layout version](page-layout.md) 1.2.0 and above.
active-directory Mobile App Quickstart Portal Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-android.md
+
+ Title: "Quickstart: Add sign in with Microsoft to an Android app | Azure"
+
+description: In this quickstart, learn how Android applications can call an API that requires access tokens issued by the Microsoft identity platform.
+ Last updated : 02/15/2022
+#Customer intent: As an application developer, I want to learn how Android native apps can call protected APIs that require login and access tokens using the Microsoft identity platform.
++
+# Quickstart: Sign in users and call the Microsoft Graph API from an Android app
++
+In this quickstart, you download and run a code sample that demonstrates how an Android application can sign in users and get an access token to call the Microsoft Graph API.
+
+See [How the sample works](#how-the-sample-works) for an illustration.
+
+Applications must be represented by an app object in Azure Active Directory so that the Microsoft identity platform can provide tokens to your application.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Android Studio
+* Android 16+
+
+### Step 1: Configure your application in the Azure portal
+For the code sample in this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+> [!div id="makechanges" class="nextstepaction" class="configure-app-button"]
+> [Make these changes for me]()
+
+> [!div id="appconfigured" class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-android/green-check.png) Your application is configured with these attributes
+
+### Step 2: Download the project
+
+Run the project using Android Studio.
+> [!div class="nextstepaction"]
+> [Download the code sample](https://github.com/Azure-Samples/ms-identity-android-java/archive/master.zip)
++
+### Step 3: Your app is configured and ready to run
+
+We've configured your project with the values of your app's properties, and it's ready to run.
+The sample app starts on the **Single Account Mode** screen. The default scope, **user.read**, is used when reading your own profile data during the Microsoft Graph API call. The URL for the Microsoft Graph API call is also provided by default. You can change both of these if you wish.
+
+![MSAL sample app showing single and multiple account usage](./media/quickstart-v2-android/quickstart-sample-app.png)
+
+Use the app menu to change between single and multiple account modes.
+
+In single account mode, sign in using a work or home account:
+
+1. Select **Get graph data interactively** to prompt the user for their credentials. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
+2. Once signed in, select **Get graph data silently** to make a call to the Microsoft Graph API without prompting the user for credentials again. You'll see the output from the call to the Microsoft Graph API in the bottom of the screen.
+
+In multiple account mode, you can repeat the same steps. Additionally, you can remove the signed-in account, which also removes the cached tokens for that account.
+
+> [!div class="sxs-lookup"]
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+
+## How the sample works
+![Screenshot of the sample app](media/quickstart-v2-android/android-intro.svg)
++
+The code is organized into fragments that show how to write single-account and multiple-account MSAL apps. The code files are organized as follows:
+
+| File | Demonstrates |
+|||
+| MainActivity | Manages the UI |
+| MSGraphRequestWrapper | Calls the Microsoft Graph API using the token provided by MSAL |
+| MultipleAccountModeFragment | Initializes a multi-account application, loads a user account, and gets a token to call the Microsoft Graph API |
+| SingleAccountModeFragment | Initializes a single-account application, loads a user account, and gets a token to call the Microsoft Graph API |
+| res/auth_config_multiple_account.json | The multiple account configuration file |
+| res/auth_config_single_account.json | The single account configuration file |
+| Gradle Scripts/build.gradle (Module:app) | The MSAL library dependencies are added here |
+
+We'll now look at these files in more detail and call out the MSAL-specific code in each.
+
+### Adding MSAL to the app
+
+MSAL ([com.microsoft.identity.client](https://javadoc.io/doc/com.microsoft.identity.client/msal)) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. Gradle 3.0+ installs the library when you add the following to **Gradle Scripts** > **build.gradle (Module: app)** under **Dependencies**:
+
+```java
+dependencies {
+ ...
+ implementation 'com.microsoft.identity.client:msal:2.+'
+ ...
+}
+```
+
+This instructs Gradle to download and build MSAL from Maven Central.
+
+You must also add references to Maven to the **allprojects** > **repositories** portion of the **build.gradle (Module: app)** like so:
+
+```java
+allprojects {
+ repositories {
+ mavenCentral()
+ google()
+ mavenLocal()
+ maven {
+ url 'https://pkgs.dev.azure.com/MicrosoftDeviceSDK/DuoSDK-Public/_packaging/Duo-SDK-Feed/maven/v1'
+ }
+ maven {
+ name "vsts-maven-adal-android"
+ url "https://identitydivision.pkgs.visualstudio.com/_packaging/AndroidADAL/maven/v1"
+ credentials {
+ username System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_USERNAME") : project.findProperty("vstsUsername")
+ password System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") != null ? System.getenv("ENV_VSTS_MVN_ANDROIDADAL_ACCESSTOKEN") : project.findProperty("vstsMavenAccessToken")
+ }
+ }
+ jcenter()
+ }
+}
+```
+
+### MSAL imports
+
+The imports that are relevant to the MSAL library are `com.microsoft.identity.client.*`. For example, you'll see `import com.microsoft.identity.client.PublicClientApplication;`, which imports the `PublicClientApplication` class that represents your public client application.
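+
+As an illustration, a typical set of MSAL imports in the sample's fragments might look like the following sketch (the exact set varies by file):
+
+```java
+// Illustrative MSAL imports; the exact set varies by file in the sample.
+import com.microsoft.identity.client.AuthenticationCallback;
+import com.microsoft.identity.client.IAccount;
+import com.microsoft.identity.client.IAuthenticationResult;
+import com.microsoft.identity.client.IPublicClientApplication;
+import com.microsoft.identity.client.ISingleAccountPublicClientApplication;
+import com.microsoft.identity.client.PublicClientApplication;
+import com.microsoft.identity.client.exception.MsalException;
+```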
+
+### SingleAccountModeFragment.java
+
+This file demonstrates how to create a single account MSAL app and call a Microsoft Graph API.
+
+Single account apps are only used by a single user. For example, you might just have one account that you sign into your mapping app with.
+
+#### Single account MSAL initialization
+
+In the `SingleAccountModeFragment.java` file, in `onCreateView()`, a single account `PublicClientApplication` is created using the config information stored in the `auth_config_single_account.json` file. This is how you initialize the MSAL library for use in a single-account MSAL app:
+
+```java
+...
+// Creates a PublicClientApplication object with res/raw/auth_config_single_account.json
+PublicClientApplication.createSingleAccountPublicClientApplication(getContext(),
+ R.raw.auth_config_single_account,
+ new IPublicClientApplication.ISingleAccountApplicationCreatedListener() {
+ @Override
+ public void onCreated(ISingleAccountPublicClientApplication application) {
+ /**
+ * This test app assumes that the app is only going to support one account.
+ * This requires "account_mode" : "SINGLE" in the config json file.
+ **/
+ mSingleAccountApp = application;
+ loadAccount();
+ }
+
+ @Override
+ public void onError(MsalException exception) {
+ displayError(exception);
+ }
+ });
+```
+
+#### Sign in a user
+
+In `SingleAccountModeFragment.java`, the code to sign in a user is in `initializeUI()`, in the `signInButton` click handler.
+
+Call `signIn()` before trying to acquire tokens. `signIn()` behaves as though `acquireToken()` is called, resulting in an interactive prompt for the user to sign in.
+
+Signing in a user is an asynchronous operation. A callback is passed that calls the Microsoft Graph API and updates the UI once the user signs in:
+
+```java
+mSingleAccountApp.signIn(getActivity(), null, getScopes(), getAuthInteractiveCallback());
+```
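+
+The quickstart doesn't show `getAuthInteractiveCallback()` itself; as a rough sketch, and assuming the sample's `callGraphAPI()`, `updateUI()`, and `displayError()` helpers, such a callback might look like this:
+
+```java
+// Sketch only; the actual implementation lives in SingleAccountModeFragment.java.
+private AuthenticationCallback getAuthInteractiveCallback() {
+    return new AuthenticationCallback() {
+        @Override
+        public void onSuccess(IAuthenticationResult authenticationResult) {
+            // The user signed in: call Microsoft Graph and refresh the UI.
+            callGraphAPI(authenticationResult);
+            updateUI(authenticationResult.getAccount());
+        }
+
+        @Override
+        public void onError(MsalException exception) {
+            displayError(exception);
+        }
+
+        @Override
+        public void onCancel() {
+            // The user canceled the sign-in prompt; nothing to clean up here.
+        }
+    };
+}
+```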
+
+#### Sign out a user
+
+In `SingleAccountModeFragment.java`, the code to sign out a user is in `initializeUI()`, in the `signOutButton` click handler. Signing a user out is an asynchronous operation. Signing the user out also clears the token cache for that account. A callback is created to update the UI once the user account is signed out:
+
+```java
+mSingleAccountApp.signOut(new ISingleAccountPublicClientApplication.SignOutCallback() {
+ @Override
+ public void onSignOut() {
+ updateUI(null);
+ performOperationOnSignOut();
+ }
+
+ @Override
+ public void onError(@NonNull MsalException exception) {
+ displayError(exception);
+ }
+});
+```
+
+#### Get a token interactively or silently
+
+To present the fewest number of prompts to the user, you'll typically get a token silently. Then, if there's an error, attempt to get the token interactively. The first time the app calls `signIn()`, it effectively acts as a call to `acquireToken()`, which will prompt the user for credentials.
+
+Some situations when the user may be prompted to select their account, enter their credentials, or consent to the permissions your app has requested are:
+
+* The first time the user signs in to the application
+* If a user resets their password, they'll need to enter their credentials
+* If consent is revoked
+* If your app explicitly requires consent
+* When your application is requesting access to a resource for the first time
+* When MFA or other Conditional Access policies are required
+
+The code to get a token interactively, that is, with UI that involves the user, is in `SingleAccountModeFragment.java`, in `initializeUI()`, in the `callGraphApiInteractiveButton` click handler:
+
+```java
+/**
+ * If acquireTokenSilent() returns an error that requires an interaction (MsalUiRequiredException),
+ * invoke acquireToken() to have the user resolve the interrupt interactively.
+ *
+ * Some example scenarios are
+ * - password change
+ * - the resource you're acquiring a token for has a stricter set of requirements than your Single Sign-On refresh token.
+ * - you're introducing a new scope which the user has never consented for.
+ **/
+mSingleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+```
+
+If the user has already signed in, `acquireTokenSilentAsync()` allows apps to request tokens silently as shown in `initializeUI()`, in the `callGraphApiSilentButton` click handler:
+
+```java
+/**
+ * Once you've signed the user in,
+ * you can perform acquireTokenSilent to obtain resources without interrupting the user.
+ **/
+ mSingleAccountApp.acquireTokenSilentAsync(getScopes(), AUTHORITY, getAuthSilentCallback());
+```
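+
+Putting the two calls together, a silent callback that falls back to interactive acquisition when MSAL reports that user interaction is required might look like the following sketch (assuming the sample's helpers; `MsalUiRequiredException` comes from `com.microsoft.identity.client.exception`):
+
+```java
+// Sketch only: try silently first, then resolve interactively when required.
+private SilentAuthenticationCallback getAuthSilentCallback() {
+    return new SilentAuthenticationCallback() {
+        @Override
+        public void onSuccess(IAuthenticationResult authenticationResult) {
+            callGraphAPI(authenticationResult);
+        }
+
+        @Override
+        public void onError(MsalException exception) {
+            if (exception instanceof MsalUiRequiredException) {
+                // Silent acquisition failed because interaction is required;
+                // fall back to the interactive flow.
+                mSingleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+            } else {
+                displayError(exception);
+            }
+        }
+    };
+}
+```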
+
+#### Load an account
+
+The code to load an account is in `SingleAccountModeFragment.java` in `loadAccount()`. Loading the user's account is an asynchronous operation, so callbacks to handle when the account loads, changes, or an error occurs are passed to MSAL. The following code also handles `onAccountChanged()`, which occurs when an account is removed, the user changes to another account, and so on.
+
+```java
+private void loadAccount() {
+ ...
+
+ mSingleAccountApp.getCurrentAccountAsync(new ISingleAccountPublicClientApplication.CurrentAccountCallback() {
+ @Override
+ public void onAccountLoaded(@Nullable IAccount activeAccount) {
+ // You can use the account data to update your UI or your app database.
+ updateUI(activeAccount);
+ }
+
+ @Override
+ public void onAccountChanged(@Nullable IAccount priorAccount, @Nullable IAccount currentAccount) {
+ if (currentAccount == null) {
+ // Perform a cleanup task as the signed-in account changed.
+ performOperationOnSignOut();
+ }
+ }
+
+ @Override
+ public void onError(@NonNull MsalException exception) {
+ displayError(exception);
+ }
+ });
+}
+```
+
+#### Call Microsoft Graph
+
+When a user is signed in, the call to Microsoft Graph is made via an HTTP request by `callGraphAPI()`, which is defined in `SingleAccountModeFragment.java`. This function is a wrapper that simplifies the sample by handling tasks such as getting the access token from the `authenticationResult`, packaging the call to the `MSGraphRequestWrapper`, and displaying the results of the call.
+
+```java
+private void callGraphAPI(final IAuthenticationResult authenticationResult) {
+ MSGraphRequestWrapper.callGraphAPIUsingVolley(
+ getContext(),
+ graphResourceTextView.getText().toString(),
+ authenticationResult.getAccessToken(),
+ new Response.Listener<JSONObject>() {
+ @Override
+ public void onResponse(JSONObject response) {
+ /* Successfully called graph, process data and send to UI */
+ ...
+ }
+ },
+ new Response.ErrorListener() {
+ @Override
+ public void onErrorResponse(VolleyError error) {
+ ...
+ }
+ });
+}
+```
+
+### auth_config_single_account.json
+
+This is the configuration file for an MSAL app that uses a single account.
+
+See [Understand the Android MSAL configuration file](msal-configuration.md) for an explanation of these fields.
+
+Note the presence of `"account_mode" : "SINGLE"`, which configures this app to use a single account.
+
+`"client_id"` is preconfigured to use an app object registration that Microsoft maintains.
+`"redirect_uri"` is preconfigured to use the signing key provided with the code sample.
+
+```json
+{
+ "client_id" : "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
+ "authorization_user_agent" : "DEFAULT",
+ "redirect_uri" : "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
+ "account_mode" : "SINGLE",
+ "broker_redirect_uri_registered": true,
+ "authorities" : [
+ {
+ "type": "AAD",
+ "audience": {
+ "type": "AzureADandPersonalMicrosoftAccount",
+ "tenant_id": "common"
+ }
+ }
+ ]
+}
+```
+
+### MultipleAccountModeFragment.java
+
+This file demonstrates how to create a multiple account MSAL app and call a Microsoft Graph API.
+
+An example of a multiple account app is a mail app that allows you to work with multiple user accounts such as a work account and a personal account.
+
+#### Multiple account MSAL initialization
+
+In the `MultipleAccountModeFragment.java` file, in `onCreateView()`, a multiple account app object (`IMultipleAccountPublicClientApplication`) is created using the config information stored in the `auth_config_multiple_account.json` file:
+
+```java
+// Creates a PublicClientApplication object with res/raw/auth_config_multiple_account.json
+PublicClientApplication.createMultipleAccountPublicClientApplication(getContext(),
+ R.raw.auth_config_multiple_account,
+ new IPublicClientApplication.IMultipleAccountApplicationCreatedListener() {
+ @Override
+ public void onCreated(IMultipleAccountPublicClientApplication application) {
+ mMultipleAccountApp = application;
+ loadAccounts();
+ }
+
+ @Override
+ public void onError(MsalException exception) {
+ ...
+ }
+ });
+```
+
+The created `MultipleAccountPublicClientApplication` object is stored in a class member variable so that it can be used to interact with the MSAL library to acquire tokens and load and remove the user account.
+
+#### Load an account
+
+Multiple account apps usually call `getAccounts()` to select the account to use for MSAL operations. The code to load an account is in the `MultipleAccountModeFragment.java` file, in `loadAccounts()`. Loading the user's account is an asynchronous operation, so a callback handles the situations when the account is loaded, changes, or an error occurs.
+
+```java
+/**
+ * Load currently signed-in accounts, if there's any.
+ **/
+private void loadAccounts() {
+ if (mMultipleAccountApp == null) {
+ return;
+ }
+
+ mMultipleAccountApp.getAccounts(new IPublicClientApplication.LoadAccountsCallback() {
+ @Override
+ public void onTaskCompleted(final List<IAccount> result) {
+ // You can use the account data to update your UI or your app database.
+ accountList = result;
+ updateUI(accountList);
+ }
+
+ @Override
+ public void onError(MsalException exception) {
+ displayError(exception);
+ }
+ });
+}
+```
+
+#### Get a token interactively or silently
+
+Some situations when the user may be prompted to select their account, enter their credentials, or consent to the permissions your app has requested are:
+
+* The first time users sign in to the application
+* If a user resets their password, they'll need to enter their credentials
+* If consent is revoked
+* If your app explicitly requires consent
+* When your application is requesting access to a resource for the first time
+* When MFA or other Conditional Access policies are required
+
+Multiple account apps should typically acquire tokens interactively, that is, with UI that involves the user, with a call to `acquireToken()`. The code to get a token interactively is in the `MultipleAccountModeFragment.java` file in `initializeUI()`, in the `callGraphApiInteractiveButton` click handler:
+
+```java
+/**
+ * Acquire token interactively. It will also create an account object for the silent call as a result (to be obtained by getAccount()).
+ *
+ * If acquireTokenSilent() returns an error that requires an interaction,
+ * invoke acquireToken() to have the user resolve the interrupt interactively.
+ *
+ * Some example scenarios are
+ * - password change
+ * - the resource you're acquiring a token for has a stricter set of requirements than your SSO refresh token.
+ * - you're introducing a new scope which the user has never consented for.
+ **/
+mMultipleAccountApp.acquireToken(getActivity(), getScopes(), getAuthInteractiveCallback());
+```
+
+Apps shouldn't require the user to sign in every time they request a token. If the user has already signed in, `acquireTokenSilentAsync()` allows apps to request tokens without prompting the user, as shown in the `MultipleAccountModeFragment.java` file, in `initializeUI()`, in the `callGraphApiSilentButton` click handler:
+
+```java
+/**
+ * Performs acquireToken without interrupting the user.
+ *
+ * This requires an account object of the account you're obtaining a token for.
+ * (can be obtained via getAccount()).
+ */
+mMultipleAccountApp.acquireTokenSilentAsync(getScopes(),
+ accountList.get(accountListSpinner.getSelectedItemPosition()),
+ AUTHORITY,
+ getAuthSilentCallback());
+```
+
+#### Remove an account
+
+The code to remove an account, and any cached tokens for the account, is in the `MultipleAccountModeFragment.java` file in `initializeUI()` in the handler for the remove account button. Before you can remove an account, you need an account object, which you obtain from MSAL methods like `getAccounts()` and `acquireToken()`. Because removing an account is an asynchronous operation, the `onRemoved` callback is supplied to update the UI.
+
+```java
+/**
+ * Removes the selected account and cached tokens from this app (or device, if the device is in shared mode).
+ **/
+mMultipleAccountApp.removeAccount(accountList.get(accountListSpinner.getSelectedItemPosition()),
+ new IMultipleAccountPublicClientApplication.RemoveAccountCallback() {
+ @Override
+ public void onRemoved() {
+ ...
+ /* Reload account asynchronously to get the up-to-date list. */
+ loadAccounts();
+ }
+
+ @Override
+ public void onError(@NonNull MsalException exception) {
+ displayError(exception);
+ }
+ });
+```
+
+### auth_config_multiple_account.json
+
+This is the configuration file for an MSAL app that uses multiple accounts.
+
+See [Understand the Android MSAL configuration file](msal-configuration.md) for an explanation of the various fields.
+
+Unlike the [auth_config_single_account.json](#auth_config_single_accountjson) configuration file, this config file has `"account_mode" : "MULTIPLE"` instead of `"account_mode" : "SINGLE"` because this is a multiple account app.
+
+`"client_id"` is preconfigured to use an app object registration that Microsoft maintains.
+`"redirect_uri"` is preconfigured to use the signing key provided with the code sample.
+
+```json
+{
+ "client_id" : "0984a7b6-bc13-4141-8b0d-8f767e136bb7",
+ "authorization_user_agent" : "DEFAULT",
+ "redirect_uri" : "msauth://com.azuresamples.msalandroidapp/1wIqXSqBj7w%2Bh11ZifsnqwgyKrY%3D",
+ "account_mode" : "MULTIPLE",
+ "broker_redirect_uri_registered": true,
+ "authorities" : [
+ {
+ "type": "AAD",
+ "audience": {
+ "type": "AzureADandPersonalMicrosoftAccount",
+ "tenant_id": "common"
+ }
+ }
+ ]
+}
+```
++
+## Next steps
+
+Move on to the Android tutorial in which you build an Android app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Sign in users and call the Microsoft Graph from an Android application](tutorial-v2-android.md)
active-directory Mobile App Quickstart Portal Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mobile-app-quickstart-portal-ios.md
+
+ Title: "Quickstart: Add sign in with Microsoft to an iOS or macOS app | Azure"
+
+description: In this quickstart, learn how an iOS or macOS app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
+ Last updated : 02/15/2022
+#Customer intent: As an application developer, I want to learn how to sign in users and call Microsoft Graph from my iOS or macOS application.
++
+# Quickstart: Sign in users and call the Microsoft Graph API from an iOS or macOS app
+
+In this quickstart, you download and run a code sample that demonstrates how a native iOS or macOS application can sign in users and get an access token to call the Microsoft Graph API.
+
+The quickstart applies to both iOS and macOS apps. Some steps are needed only for iOS apps and will be indicated as such.
+
+## Prerequisites
+
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* Xcode 10+
+* iOS 10+
+* macOS 10.12+
+
+## How the sample works
+
+![Shows how the sample app generated by this quickstart works](media/quickstart-v2-ios/ios-intro.svg)
+
+#### Step 1: Configure your application
+For the code sample for this quickstart to work, add a **Redirect URI** compatible with the Auth broker.
+> [!div id="makechanges" class="nextstepaction" class="configure-app-button"]
+> [Make this change for me]()
+
+> [!div id="appconfigured" class="alert alert-info"]
+> ![Already configured](media/quickstart-v2-ios/green-check.png) Your application is configured with these attributes
+
+#### Step 2: Download the sample project
+> [!div class="nextstepaction"]
+> [Download the code sample for iOS]()
+
+> [!div class="nextstepaction"]
+> [Download the code sample for macOS]()
+
+#### Step 3: Install dependencies
+
+1. Extract the zip file.
+2. In a terminal window, navigate to the folder with the downloaded code sample and run `pod install` to install the latest MSAL library.
+
+#### Step 4: Your app is configured and ready to run
+We've configured your project with the values of your app's properties, and it's ready to run.
+> [!NOTE]
+> `Enter_the_Supported_Account_Info_Here`
+
+1. If you're building an app for [Azure AD national clouds](/graph/deployments#app-registration-and-token-service-root-endpoints), replace the lines starting with `let kGraphEndpoint` and `let kAuthority` with the correct endpoints. For global access, use the default values:
+
+ ```swift
+ let kGraphEndpoint = "https://graph.microsoft.com/"
+ let kAuthority = "https://login.microsoftonline.com/common"
+ ```
+
+1. Other endpoints are documented [here](/graph/deployments#app-registration-and-token-service-root-endpoints). For example, to run the quickstart with Azure AD Germany, use the following:
+
+ ```swift
+ let kGraphEndpoint = "https://graph.microsoft.de/"
+ let kAuthority = "https://login.microsoftonline.de/common"
+ ```
+
+3. Open the project settings. In the **Identity** section, enter the **Bundle Identifier** that you entered into the portal.
+4. Right-click **Info.plist** and select **Open As** > **Source Code**.
+5. Under the dict root node, replace `Enter_the_Bundle_Id_Here` with the ***Bundle Id*** that you used in the portal. Notice the `msauth.` prefix in the string.
+
+ ```xml
+ <key>CFBundleURLTypes</key>
+ <array>
+ <dict>
+ <key>CFBundleURLSchemes</key>
+ <array>
+ <string>msauth.Enter_the_Bundle_Id_Here</string>
+ </array>
+ </dict>
+ </array>
+ ```
+
+6. Build and run the app!
+
+## More Information
+
+Read these sections to learn more about this quickstart.
+
+### Get MSAL
+
+MSAL ([MSAL.framework](https://github.com/AzureAD/microsoft-authentication-library-for-objc)) is the library used to sign in users and request tokens used to access an API protected by Microsoft identity platform. You can add MSAL to your application using the following process:
+
+```
+$ vi Podfile
+```
+
+Add the following to this podfile (with your project's target):
+
+```
+use_frameworks!
+
+target 'MSALiOS' do
+ pod 'MSAL'
+end
+```
+
+Run the CocoaPods installation command:
+
+`pod install`
+
+### Initialize MSAL
+
+You can add the reference for MSAL by adding the following code:
+
+```swift
+import MSAL
+```
+
+Then, initialize MSAL using the following code:
+
+```swift
+let authority = try MSALAADAuthority(url: URL(string: kAuthority)!)
+
+let msalConfiguration = MSALPublicClientApplicationConfig(clientId: kClientID, redirectUri: nil, authority: authority)
+self.applicationContext = try MSALPublicClientApplication(configuration: msalConfiguration)
+```
+
+> |Where: | Description |
+> |||
+> | `clientId` | The Application ID from the application registered in *portal.azure.com* |
+> | `authority` | The Microsoft identity platform. In most cases, this will be `https://login.microsoftonline.com/common` |
+> | `redirectUri` | The redirect URI of the application. You can pass `nil` to use the default value, or your custom redirect URI. |
+
+### For iOS only, additional app requirements
+
+Your app must also have the following in your `AppDelegate`. This lets the MSAL SDK handle the token response from the Auth broker app during authentication.
+
+```swift
+func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey : Any] = [:]) -> Bool {
+
+ return MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: options[UIApplication.OpenURLOptionsKey.sourceApplication] as? String)
+}
+```
+
+> [!NOTE]
+> On iOS 13+, if you adopt `UISceneDelegate` instead of `UIApplicationDelegate`, place this code into the `scene:openURLContexts:` callback instead (See [Apple's documentation](https://developer.apple.com/documentation/uikit/uiscenedelegate/3238059-scene?language=objc)).
+> If you support both UISceneDelegate and UIApplicationDelegate for compatibility with older iOS versions, the MSAL callback needs to be placed in both places.
+
+```swift
+func scene(_ scene: UIScene, openURLContexts URLContexts: Set<UIOpenURLContext>) {
+
+ guard let urlContext = URLContexts.first else {
+ return
+ }
+
+ let url = urlContext.url
+ let sourceApp = urlContext.options.sourceApplication
+
+ MSALPublicClientApplication.handleMSALResponse(url, sourceApplication: sourceApp)
+}
+```
+
+Finally, your app must have an `LSApplicationQueriesSchemes` entry in your ***Info.plist*** alongside the `CFBundleURLTypes`. The sample comes with this included.
+
+ ```xml
+ <key>LSApplicationQueriesSchemes</key>
+ <array>
+ <string>msauthv2</string>
+ <string>msauthv3</string>
+ </array>
+ ```
+
+### Sign in users & request tokens
+
+MSAL has two methods used to acquire tokens: `acquireToken` and `acquireTokenSilent`.
+
+#### acquireToken: Get a token interactively
+
+Some situations require users to interact with the Microsoft identity platform. In these cases, the end user may be required to select their account, enter their credentials, or consent to your app's permissions. For example,
+
+* The first time users sign in to the application
+* If a user resets their password, they'll need to enter their credentials
+* When your application is requesting access to a resource for the first time
+* When MFA or other Conditional Access policies are required
+
+```swift
+let parameters = MSALInteractiveTokenParameters(scopes: kScopes, webviewParameters: self.webViewParamaters!)
+self.applicationContext!.acquireToken(with: parameters) { (result, error) in /* Add your handling logic */}
+```
+
+> |Where:| Description |
+> |||
+> | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs, such as `api://<Application ID>/access_as_user`) |
+
+#### acquireTokenSilent: Get an access token silently
+
+Apps shouldn't require their users to sign in every time they request a token. If the user has already signed in, this method allows apps to request tokens silently.
+
+```swift
+self.applicationContext!.getCurrentAccount(with: nil) { (currentAccount, previousAccount, error) in
+
+ guard let account = currentAccount else {
+ return
+ }
+
+ let silentParams = MSALSilentTokenParameters(scopes: self.kScopes, account: account)
+ self.applicationContext!.acquireTokenSilent(with: silentParams) { (result, error) in /* Add your handling logic */}
+}
+```
+
+> |Where: | Description |
+> |||
+> | `scopes` | Contains the scopes being requested (that is, `[ "user.read" ]` for Microsoft Graph or `[ "<Application ID URL>/scope" ]` for custom web APIs, such as `api://<Application ID>/access_as_user`) |
+> | `account` | The account a token is being requested for. This quickstart is about a single account application. If you want to build a multi-account app, you'll need to define logic to identify which account to use for token requests using `accountsFromDeviceForParameters:completionBlock:` and passing the correct `accountIdentifier`. |
++
+## Next steps
+
+Move on to the step-by-step tutorial in which you build an iOS or macOS app that gets an access token from the Microsoft identity platform and uses it to call the Microsoft Graph API.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md)
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 01/31/2022 Last updated : 02/16/2022
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID

>[!NOTE]
->This information last updated on January 31st, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
+>This information last updated on February 16th, 2022.<br/>You can also download a CSV version of this table [here](https://download.microsoft.com/download/e/3/e/e3e9faf2-f28b-490a-9ada-c6089a1fc5b0/Product%20names%20and%20service%20plan%20identifiers%20for%20licensing.csv).
><br/>

| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
| --- | --- | --- | --- | --- |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| COMMON AREA PHONE | MCOCAP | 295a8eb0-f78d-45c7-8b5b-1eed5ed02dff | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
| Common Area Phone for GCC | MCOCAP_GOV | b1511558-69bd-4e1b-8270-59ca96dba0f3 | MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | Microsoft 365 Phone System for Government (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Skype for Business Online (Plan 2) for Government (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
| Common Data Service Database Capacity | CDS_DB_CAPACITY | e612d426-6bc3-4181-9658-91aa906b0ac0 | CDS_DB_CAPACITY (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Database Capacity (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Common Data Service Database Capacity for Government | CDS_DB_CAPACITY_GOV | eddf428b-da0e-4115-accf-b29eb0b83965 | CDS_DB_CAPACITY_GOV (1ddffef6-4f69-455e-89c7-d5d72105f915)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8) | Common Data Service for Apps Database Capacity for Government (1ddffef6-4f69-455e-89c7-d5d72105f915)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)|
| Common Data Service Log Capacity | CDS_LOG_CAPACITY | 448b063f-9cc6-42fc-a0e6-40e08724a395 | CDS_LOG_CAPACITY (dc48f5c5-e87d-43d6-b884-7ac4a59e7ee9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Log Capacity (dc48f5c5-e87d-43d6-b884-7ac4a59e7ee9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| COMMUNICATIONS CREDITS | MCOPSTNC | 47794cd0-f0e5-45c5-9033-2eb6b5fc84e0 | MCOPSTNC (505e180f-f7e0-4b65-91d4-00d670bbd18c) | COMMUNICATIONS CREDITS (505e180f-f7e0-4b65-91d4-00d670bbd18c) |
| Dynamics 365 - Additional Database Storage (Qualified Offer) | CRMSTORAGE | 328dc228-00bc-48c6-8b09-1fbc8bc3435d | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRMSTORAGE (77866113-0f3e-4e6e-9666-b1e25c6f99b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Storage Add-On (77866113-0f3e-4e6e-9666-b1e25c6f99b0) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| EXCHANGE ONLINE ESSENTIALS (ExO P1 BASED) | EXCHANGEESSENTIALS | 7fc0182e-d107-4556-8329-7caaa511197b | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c) | EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)|
| EXCHANGE ONLINE ESSENTIALS | EXCHANGE_S_ESSENTIALS | e8f81a67-bd96-4074-b108-cf193eb9433b | EXCHANGE_S_ESSENTIALS (1126bef5-da20-4f07-b45e-ad25d2581aa8)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c) | EXCHANGE ESSENTIALS (1126bef5-da20-4f07-b45e-ad25d2581aa8)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c) |
| EXCHANGE ONLINE KIOSK | EXCHANGEDESKLESS | 80b2d799-d2ba-4d2a-8842-fb0d0f3a4b82 | EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113) | EXCHANGE ONLINE KIOSK (4a82b400-a79f-41a4-b4e2-e94f5787b113) |
+| Exchange Online (Plan 1) for GCC | EXCHANGESTANDARD_GOV | f37d5ebf-4bf1-4aa2-8fa3-50c51059e983 | EXCHANGE_S_STANDARD_GOV (e9b4930a-925f-45e2-ac2a-3f7788ca6fdd)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117) | Exchange Online (Plan 1) for Government (e9b4930a-925f-45e2-ac2a-3f7788ca6fdd)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117) |
| EXCHANGE ONLINE POP | EXCHANGETELCO | cb0a98a8-11bc-494c-83d9-c1b1ac65327e | EXCHANGE_B_STANDARD (90927877-dcff-4af6-b346-2332c0b15bb7) | EXCHANGE ONLINE POP (90927877-dcff-4af6-b346-2332c0b15bb7) |
+| Exchange Online Protection | EOP_ENTERPRISE | 45a2423b-e884-448d-a831-d9e139c52d2f | EOP_ENTERPRISE (326e2b78-9d27-42c9-8509-46c827743a17) | Exchange Online Protection (326e2b78-9d27-42c9-8509-46c827743a17) |
| INTUNE | INTUNE_A | 061f9ace-7d42-4136-88ac-31dc755f143f | INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| Microsoft Dynamics AX7 User Trial | AX7_USER_TRIAL | fcecd1f9-a91e-488d-a918-a96cdb6ce2b0 | ERP_TRIAL_INSTANCE (e2f705fd-2468-4090-8c58-fad6e6b1e724)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Operations Trial Environment (e2f705fd-2468-4090-8c58-fad6e6b1e724)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| Microsoft Azure Multi-Factor Authentication | MFA_STANDALONE | cb2020b1-d8f6-41c0-9acd-8ff3d6d7831b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| MICROSOFT 365 APPS FOR BUSINESS | O365_BUSINESS | cdd28e44-67e3-425e-be4c-737fab2899d3 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
| MICROSOFT 365 APPS FOR BUSINESS | SMB_BUSINESS | b214fe43-f5a3-4703-beeb-fa97188220fc | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
| MICROSOFT 365 APPS FOR ENTERPRISE | OFFICESUBSCRIPTION | c2273bd0-dff7-4215-9ef5-2c7bcfb06425 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
+| Microsoft 365 Apps for Faculty | OFFICESUBSCRIPTION_FACULTY | 12b8c807-2e20-48fc-b453-542b6ee9d171 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>ONEDRIVESTANDARD (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OneDrive for Business (Plan 1) (13696edf-5a08-49f6-8134-03083ed8ba30)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91) |
| MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOC | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) |
| MICROSOFT 365 BUSINESS BASIC | O365_BUSINESS_ESSENTIALS | 3b555118-da6a-4418-894f-7df1e2096870 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| MICROSOFT 365 BUSINESS BASIC | SMB_BUSINESS_ESSENTIALS | dab7782a-93b1-4074-8bb1-0e61318bea0b | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) |
| MICROSOFT 365 BUSINESS STANDARD | O365_BUSINESS_PREMIUM | f245ecc8-75af-4f8e-b61f-27d8114de5f3 | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)| To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| MICROSOFT 365 BUSINESS STANDARD - PREPAID LEGACY | SMB_BUSINESS_PREMIUM | ac5cef5d-921b-4f97-9ef3-c99076e5470f | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) |
| MICROSOFT 365 BUSINESS PREMIUM | SPB | cbdc14ab-d96c-4c30-b9f4-6ada7cdc1d46 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>O365_SB_Relationship_Management (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE_BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINBIZ (8e229017-d77b-43d5-9305-903395523b99)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE (176a09a6-7ec5-4039-ac02-b2791c6ba793)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OUTLOOK CUSTOMER MANAGER (5bfe124c-bbdc-4494-8835-f1297d457d79)<br/>OFFICE 365 BUSINESS (094e7854-93fc-4d55-b2c0-3ab5369ebdc1)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINDOWS 10 BUSINESS (8e229017-d77b-43d5-9305-903395523b99)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Microsoft 365 Business Voice | BUSINESS_VOICE_MED2 | a6051f20-9cbc-47d2-930d-419183bf6cf1 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft 365 Business Voice (US) | BUSINESS_VOICE_MED2_TELCO | 08d7bce8-6e16-490e-89db-1d508e5e9609 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Domestic Calling Plan (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft 365 Business Voice (without calling plan) | BUSINESS_VOICE_DIRECTROUTING | d52db95a-5ecb-46b6-beb0-190ab5cda4a8 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft 365 Business Voice (without Calling Plan) for US | BUSINESS_VOICE_DIRECTROUTING_MED | 8330dae3-d349-44f7-9cad-1b23c64baabe | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| MICROSOFT 365 AUDIO CONFERENCING FOR GCC | MCOMEETADV_GOV | 2d3091c7-0712-488b-b3d8-6b97bde6a1f5 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>MICROSOFT 365 AUDIO CONFERENCING FOR GOVERNMENT (f544b08d-1645-4287-82de-8d91f37c02a1) |
| Microsoft 365 E5 Suite features | M365_E5_SUITE_COMPONENTS | 99cc8282-2f74-4954-83b7-c6a9a1999067 | Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>MICROSOFTENDPOINTDLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>INSIDER_RISK (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>ML_CLASSIFICATION (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) | Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Microsoft Endpoint DLP (64bfac92-2b17-4482-b5e5-a0304429de3e)<br/>Microsoft Insider Risk Management (d587c7a3-bda9-4f99-8776-9bcf59c84f75)<br/>Microsoft ML-based classification (d2d51368-76c9-4317-ada2-a12c004c432f)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) |
| Microsoft 365 F1 | M365_F1_COMM | 50f60901-3181-4b75-8a2c-4c8e4c1d5a72 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Microsoft 365 F3 GCC | M365_F1_GOV | 2a914830-d700-444a-b73c-e3f31980d833 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_F1_GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>CDS_O365_F1_GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>EXCHANGE_S_DESKLESS_GOV (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>FORMS_GOV_F1 (bfd4133a-bbf3-4212-972b-60412137c428)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_K_GOV (d65648f1-9504-46e4-8611-2658763f28b8)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_S1_GOV (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>FLOW_O365_S1_GOV (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SHAREPOINTDESKLESS_GOV (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>MCOIMP_GOV (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 for GCC (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>Azure Rights Management (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>Common Data Service - O365 F1 GCC (29007dd3-36c0-4cc2-935d-f5bca2c2c473)<br/>Common Data Service for Teams_F1 GCC (5e05331a-0aec-437e-87db-9ef5934b5771)<br/>Exchange Online (Kiosk) for Government (88f4d7ef-a73b-4246-8047-516022144c9f)<br/>Forms for Government (Plan F1) (bfd4133a-bbf3-4212-972b-60412137c428)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (F1) (d65648f1-9504-46e4-8611-2658763f28b8)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 F3 for Government (49f06c3d-da7d-4fa0-bcce-1458fdd18a59)<br/>Power Automate for Office 365 F3 for Government (5d32692e-5b24-4a59-a77e-b2a8650e25c1)<br/>SharePoint KioskG (b1aeb897-3a19-46e2-8c27-a609413cf193)<br/>Skype for Business Online (Plan 1) for Government (8a9f17f1-5872-44e8-9b11-3caade9dc90f)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3) |
| MICROSOFT 365 G3 GCC | M365_G3_GOV | e823ca47-49c4-46b3-b38d-ca11d5abe3d2 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>RMS_S_PREMIUM_GOV (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>CONTENT_EXPLORER (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>CONTENTEXPLORER_STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>AZURE RIGHTS MANAGEMENT PREMIUM FOR GOVERNMENT (1b66aedf-8ca1-4f73-af76-ec76c6180f98)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
| MICROSOFT 365 PHONE SYSTEM | MCOEV | e43b5b99-8dfb-405f-9987-dc307f34bcbd | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| MICROSOFT 365 PHONE SYSTEM FOR DOD | MCOEV_DOD | d01d9287-694b-44f3-bcc5-ada78c8d953e | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft Defender for Endpoint Server | MDATP_Server | 509e8ab6-0274-4cda-bcbd-bd164fd562c4 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) |
| MICROSOFT DYNAMICS CRM ONLINE BASIC | CRMPLAN2 | 906af65a-2970-46d5-9b58-4e9aa50f0657 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>CRMPLAN2 (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS CRM ONLINE BASIC (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| Microsoft Defender for Identity | ATA | 98defdf7-f6c1-44f5-a1f6-943b6764e7a5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_FOR_AATP (61d18b02-6889-479f-8f36-56e6e0fe5792) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>SecOps Investigation for MDI (61d18b02-6889-479f-8f36-56e6e0fe5792) |
+| Microsoft Defender for Office 365 (Plan 1) GCC | ATP_ENTERPRISE_GOV | d0d1ca43-b81a-4f51-81e5-a5b1ad7bb005 | ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516) | Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516) |
| Microsoft Defender for Office 365 (Plan 2) GCC | THREAT_INTELLIGENCE_GOV | 56a59ffb-9df1-421b-9e61-8b568583474d | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>Microsoft Defender for Office 365 (Plan 2) for Government (900018f1-0cdb-4ecb-94d4-90281760fdc6) |
| MICROSOFT DYNAMICS CRM ONLINE | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| MS IMAGINE ACADEMY | IT_ACADEMY_AD | ba9a34de-4489-469d-879c-0f0f145321cd | IT_ACADEMY_AD (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | MS IMAGINE ACADEMY (d736def0-1fde-43f0-a5be-e3f8b2de6e41) |
| Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| Microsoft Teams Trial | MS_TEAMS_IW | 74fbf1bb-47c6-4796-9623-77dc7371723b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Teams (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) |
+| Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) |
| Multi-Geo Capabilities in Office 365 | OFFICE365_MULTIGEO | 84951599-62b7-46f3-9c9d-30551b2ad607 | EXCHANGEONLINE_MULTIGEO (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SHAREPOINTONLINE_MULTIGEO (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>TEAMSMULTIGEO (41eda15d-6b52-453b-906f-bc4a5b25a26b) | Exchange Online Multi-Geo (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SharePoint Multi-Geo (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>Teams Multi-Geo (41eda15d-6b52-453b-906f-bc4a5b25a26b) |
-| Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
+| Nonprofit Portal | NONPROFIT_PORTAL | aa2695c9-8d59-4800-9dc8-12e01f1735af | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>NONPROFIT_PORTAL (7dbc2d88-20e2-4eb6-b065-4510b38d6eb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Nonprofit Portal (7dbc2d88-20e2-4eb6-b065-4510b38d6eb2)|
+| Office 365 A1 for faculty | STANDARDWOFFPACK_FACULTY | 94763226-9b3c-4e75-a931-5c89701abe66 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Nucleus (db4d623d-b514-490b-b7ef-8885eee514de)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Office 365 A1 Plus for faculty | STANDARDWOFFPACK_IW_FACULTY | 78e66a63-337a-4a9a-8959-41c6654dfb56 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>VIVA_LEARNING_SEEDED (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Viva Learning Seeded (b76fb638-6ba6-402a-b9f9-83d28acb3d86)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Office 365 A1 for students | STANDARDWOFFPACK_STUDENT | 314c4481-f395-4525-be8b-2ec4bb1e9d91 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/> Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
+| Office 365 A1 Plus for students | STANDARDWOFFPACK_IW_STUDENT | e82ae690-a2d5-4d76-8d30-7c6e01e6022e | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>PROJECT_O365_P1 (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>SCHOOL_DATA_SYNC_P1 (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SHAREPOINTSTANDARD_EDU (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P1 (40b010bb-0b69-4654-ac5e-ba161433f4b4)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 2 (54fc630f-5a40-48ee-8965-af0503c1386e)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Project for Office (Plan E1) (a55dfd10-0864-46d9-a3cd-da5991a3e0e2)<br/>School Data Sync (Plan 1) (c33802dd-1b50-4b9a-8bb9-f13d2cdeadac)<br/>SharePoint (Plan 1) for Education (0a4983bb-d3e5-4a09-95d8-b2d0127b3df5)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| Office 365 A3 for faculty | ENTERPRISEPACKPLUS_FACULTY | e578b273-6db4-4691-bba0-8d691f4da603 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| Office 365 A3 for students | ENTERPRISEPACKPLUS_STUDENT | 98b6e773-24d4-4c0d-a968-6e787a1f8204 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for Education (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the Web for Education (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint (Plan 2) for Education (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| Office 365 A5 for faculty | ENTERPRISEPREMIUM_FACULTY | a4585165-0533-458a-97e3-c400570268c4 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
| Office 365 E5 | ENTERPRISEPREMIUM | c7df2760-2c81-4ef7-b578-5b5392b571df | DYN365_CDS_O365_P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>CDS_O365_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MIP_S_Exchange (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>GRAPH_CONNECTORS_SEARCH_INDEX (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>EXCEL_PREMIUM (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWER_VIRTUAL_AGENTS_O365_P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>PROJECT_O365_P3 (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service - O365 P3 (28b0fa46-c39a-4188-89e2-58e979a6b014)<br/>Common Data Service for Teams_P3 (afa73018-811e-46e9-988f-f75d2b1b8430)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Data Classification in Microsoft 365 (cd31b152-6326-4d1b-ae1b-997b625182e6)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Graph Connectors Search with Index (a6520331-d7d4-4276-95f5-15c0933bc757)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>M365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Microsoft Excel Advanced Analytics (531ee2f8-b1cb-453b-9c21-d2180d014ca5)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Automate for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>Power Virtual Agents for Office 365 P3 (ded3d325-1bdc-453e-8432-5bac26d7a014)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>Project for Office (Plan E5) (b21a6b06-1988-436e-a07b-51ec6d9f52ad)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| OFFICE 365 E5 WITHOUT AUDIO CONFERENCING | ENTERPRISEPREMIUM_NOPSTNCONF | 26d45bd9-adf1-46cd-a9e1-51e9a5524128 | ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | OFFICE 365 CLOUD APP SECURITY (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>POWER BI PRO (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>OFFICE 365 ADVANCED EDISCOVERY (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>MICROSOFT FORMS (PLAN E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>OFFICE 365 ADVANCED THREAT PROTECTION (PLAN 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| OFFICE 365 F3 | DESKLESSPACK | 4b585984-651b-448a-9e53-3b10f069cf7f | DYN365_CDS_O365_F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>CDS_O365_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>EXCHANGE_S_DESKLESS (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>RMS_S_BASIC (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_K (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>KAIZALA_O365_P1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_K (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_S1 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>FLOW_O365_S1 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>POWER_VIRTUAL_AGENTS_O365_F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>PROJECT_O365_F3 (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>MCOIMP (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_FIRSTLINE (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>WHITEBOARD_FIRSTLINE1 (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service - O365 F1 (ca6e61ec-d4f4-41eb-8b88-d96e0e14323f)<br/>Common Data Service for Teams_F1 (90db65a7-bf11-4904-a79f-ef657605145b)<br/>Exchange Online Kiosk (4a82b400-a79f-41a4-b4e2-e94f5787b113)<br/>Microsoft Azure Rights Management Service (31cf2cfc-6b0d-4adc-a336-88b724ed8122)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan F1) (f07046bd-2a3c-4b96-b0be-dea79d7cbfb8)<br/>Microsoft Kaizala Pro Plan 1 (73b2a583-6a59-42e3-8e83-54db46bc3278)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for Office 365 F3 (3ffba0d2-38e5-4d5e-8ec0-98f2b05c09d9)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Office Mobile Apps for Office 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>Power Apps for Office 365 F3 (e0287f9f-e222-4f98-9a83-f379e249159a)<br/>Power Automate for Office 365 F3 (bd91b1a4-9f94-4ecf-b45b-3a65e5c8128a)<br/>Power Virtual Agents for Office 365 F1 (ba2fdb48-290b-4632-b46a-e4ecc58ac11a)<br/>Project for Office (Plan F) (7f6f28c2-34bb-4d4b-be36-48ca2e77e1ec)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Skype for Business Online (Plan 1) (afc06cb0-b4f4-4473-8286-d644f70d8faf)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Firstline) (80873e7a-cd2a-4e67-b061-1b5381a676a5)<br/>Whiteboard (Firstline) (36b29273-c6d0-477a-aca6-6fbe24f538e3)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Office 365 G1 GCC | STANDARDPACK_GOV | 3f4babde-90ec-47c6-995d-d223749065d1 | DYN365_CDS_O365_P1_GCC (8eb5e9bc-783f-4425-921a-c65f45dd72c6)<br/>CDS_O365_P1_GCC (959e5dec-6522-4d44-8349-132c27c3795a)<br/>EXCHANGE_S_STANDARD_GOV (e9b4930a-925f-45e2-ac2a-3f7788ca6fdd)<br/>FORMS_GOV_E1 (f4cba850-4f34-4fd2-a341-0fddfdce1e8f)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>STREAM_O365_E1_GOV (15267263-5986-449d-ac5c-124f3b49b2d6)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>OFFICEMOBILE_SUBSCRIPTION_GOV (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>POWERAPPS_O365_P1_GOV (c42aa49a-f357-45d5-9972-bc29df885fee)<br/>FLOW_O365_P1_GOV (ad6c8870-6356-474c-901c-64d7da8cea48)<br/>SharePoint Plan 1G (f9c43823-deb4-46a8-aa65-8b551f0c4f8a)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d) | Common Data Service - O365 P1 GCC (8eb5e9bc-783f-4425-921a-c65f45dd72c6)<br/>Common Data Service for Teams_P1 GCC (959e5dec-6522-4d44-8349-132c27c3795a)<br/>Exchange Online (Plan 1) for Government (e9b4930a-925f-45e2-ac2a-3f7788ca6fdd)<br/>Forms for Government (Plan E1) (f4cba850-4f34-4fd2-a341-0fddfdce1e8f)<br/>Insights by MyAnalytics for Government (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft Stream for O365 for Government (E1) (15267263-5986-449d-ac5c-124f3b49b2d6)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Planner for Government (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Office Mobile Apps for Office 365 for GCC (4ccb60ee-9523-48fd-8f63-4b090f1ad77a)<br/>Power Apps for Office 365 for Government (c42aa49a-f357-45d5-9972-bc29df885fee)<br/>Power Automate for Office 365 for Government (ad6c8870-6356-474c-901c-64d7da8cea48)<br/>SharePoint Plan 1G (f9c43823-deb4-46a8-aa65-8b551f0c4f8a)<br/>Skype for Business Online (Plan 2) for Government (a31ef4a2-f787-435e-8335-e47eb0cafc94)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d) |
| OFFICE 365 G3 GCC | ENTERPRISEPACK_GOV | 535a3a29-c5f0-42fe-8215-d3b9e1f38c4a | RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P2_GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>CDS_O365_P2_GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E3 (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2_GOV (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>STREAM_O365_E3_GOV (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P2_GOV (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>FLOW_O365_P2_GOV (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | AZURE RIGHTS MANAGEMENT (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>COMMON DATA SERVICE - O365 P2 GCC (06162da2-ebf9-4954-99a0-00fee96f95cc)<br/>COMMON DATA SERVICE FOR TEAMS_P2 GCC (a70bbf38-cdda-470d-adb8-5804b8770f41)<br/>EXCHANGE PLAN 2G (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS FOR GOVERNMENT (PLAN E3) (24af5f65-d0f3-467b-9f78-ea798c4aeffc)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – PREMIUM (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>INFORMATION PROTECTION AND GOVERNANCE ANALYTICS – STANDARD (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>INFORMATION PROTECTION FOR OFFICE 365 – STANDARD (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>INSIGHTS BY MYANALYTICS FOR GOVERNMENT (6e5b7995-bd4f-4cbd-9d19-0e32010c72f0)<br/>MICROSOFT 365 APPS FOR ENTERPRISE G (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MICROSOFT BOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>MICROSOFT STREAM FOR O365 FOR GOVERNMENT (E3) (2c1ada27-dbaa-46f9-bda6-ecb94445f758)<br/>MICROSOFT TEAMS FOR GOVERNMENT (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE 365 PLANNER FOR GOVERNMENT (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>OFFICE FOR THE WEB (GOVERNMENT) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWER APPS FOR OFFICE 365 FOR GOVERNMENT (0a20c815-5e81-4727-9bdc-2b5a117850c3)<br/>POWER AUTOMATE FOR OFFICE 365 FOR GOVERNMENT (c537f360-6a00-4ace-a7f5-9128d0ac1e4b)<br/>SHAREPOINT PLAN 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR GOVERNMENT (a31ef4a2-f787-435e-8335-e47eb0cafc94) | | Office 365 G5 GCC | ENTERPRISEPREMIUM_GOV | 8900a2c0-edba-4079-bdf3-b276e293b6a8 | RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P3_GCC (a7d3fb37-b6df-4085-b509-50810d991a39)<br/>CDS_O365_P3_GCC (bce5e5ca-c2fd-4d53-8ee2-58dfffed4c10)<br/>LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E5 (843da3a8-d2cc-4e7a-9e90-dc46019f964c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard
(2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>EXCHANGE_ANALYTICS_GOV (208120d1-9adb-4daf-8c22-816bd5d237e7)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>STREAM_O365_E5_GOV (92c2089d-9a53-49fe-b1a6-9e6bdf959547)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS_GOV (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P3_GOV (0eacfc38-458a-40d3-9eab-9671258f1a3e)<br/>FLOW_O365_P3_GOV (8055d84a-c172-42eb-b997-6c2ae4628246)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | RMS_S_ENTERPRISE_GOV (6a76346d-5d6e-4051-9fe3-ed3f312b5597)<br/>DYN365_CDS_O365_P3_GCC (a7d3fb37-b6df-4085-b509-50810d991a39)<br/>CDS_O365_P3_GCC (bce5e5ca-c2fd-4d53-8ee2-58dfffed4c10)<br/>LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>EXCHANGE_S_ENTERPRISE_GOV (8c3069c0-ccdb-44be-ab77-986203a67df2)<br/>FORMS_GOV_E5 (843da3a8-d2cc-4e7a-9e90-dc46019f964c)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>OFFICESUBSCRIPTION_GOV (de9234ff-6483-44d9-b15e-dca72fdd27af)<br/>MCOMEETADV_GOV (f544b08d-1645-4287-82de-8d91f37c02a1)<br/>MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>EXCHANGE_ANALYTICS_GOV (208120d1-9adb-4daf-8c22-816bd5d237e7)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>STREAM_O365_E5_GOV (92c2089d-9a53-49fe-b1a6-9e6bdf959547)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS_GOV 
(d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>PROJECTWORKMANAGEMENT_GOV (5b4ef465-7ea1-459a-9f91-033317755a51)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>POWERAPPS_O365_P3_GOV (0eacfc38-458a-40d3-9eab-9671258f1a3e)<br/>FLOW_O365_P3_GOV (8055d84a-c172-42eb-b997-6c2ae4628246)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
+| Office 365 Advanced Compliance for GCC | EQUIVIO_ANALYTICS_GOV | 1a585bba-1ce3-416e-b1d6-9c482b52fcf6 | LOCKBOX_ENTERPRISE_GOV (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Content_Explorer (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>ContentExplorer_Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MICROSOFT_COMMUNICATION_COMPLIANCE (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>EQUIVIO_ANALYTICS_GOV (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Customer Lockbox for Government (89b5d3b1-3855-49fe-b46c-87c66dbc1526)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection and Governance Analytics – Premium (d9fa6af4-e046-4c89-9226-729a0786685d)<br/>Information Protection and Governance Analytics – Standard (2b815d45-56e4-4e3a-b65c-66cb9175b560)<br/>Information Protection for Office 365 – Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 – Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Communication Compliance (a413a9ff-720c-4822-98ef-2f37c2a21f4c)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Office 365 Advanced eDiscovery for Government (d1cbfb67-18a8-4792-b643-630b7f19aad1)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f) |
| OFFICE 365 MIDSIZE BUSINESS | MIDSIZEPACK | 04a7fb0d-32e0-4241-b4f5-3f7618cd1162 | EXCHANGE_S_STANDARD_MIDMARKET (fc52cc4b-ed7d-472d-bbe7-b081c23ecc56)<br/>MCOSTANDARD_MIDMARKET (b2669e95-76ef-4e7e-a367-002f60a39f3e)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTENTERPRISE_MIDMARKET (6b5b6a67-fc72-4a1f-a2b5-beecf05de761)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | EXCHANGE ONLINE PLAN 1 (fc52cc4b-ed7d-472d-bbe7-b081c23ecc56)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) FOR MIDSIZE (b2669e95-76ef-4e7e-a367-002f60a39f3e)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINT PLAN 1 (6b5b6a67-fc72-4a1f-a2b5-beecf05de761)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>YAMMER_MIDSIZE (41bf139a-4e60-409f-9346-a1361efc6dfb) | | OFFICE 365 SMALL BUSINESS | LITEPACK | bd09678e-b83c-4d3f-aaba-3dad4abd128b | EXCHANGE_L_STANDARD (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>MCOLITE (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | EXCHANGE ONLINE (P1) (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN P1) (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | OFFICE 365 SMALL BUSINESS PREMIUM | LITEPACK_P2 | fc14ec4a-4169-49a4-a51e-2c852931814b | EXCHANGE_L_STANDARD (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>MCOLITE (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>OFFICE_PRO_PLUS_SUBSCRIPTION_SMBIZ (8ca59559-e2ca-470b-b7dd-afd8c0dee963)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | EXCHANGE ONLINE (P1) (d42bdbd6-c335-4231-ab3d-c8f348d5aff5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN P1) (70710b6b-3ab4-4a38-9f6d-9f169461650a)<br/>OFFICE 365 SMALL BUSINESS SUBSCRIPTION (8ca59559-e2ca-470b-b7dd-afd8c0dee963)<br/>SHAREPOINTLITE (a1f3d0a8-84c0-4ae0-bae4-685917b8ab48)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| PowerApps per app baseline access | POWERAPPS_PER_APP_IW | bf666882-9c9b-4b2e-aa2f-4789b0a52ba2 | CDS_PER_APP_IWTRIAL (94a669d1-84d5-4e54-8462-53b0ae2c8be5)<br/>Flow_Per_APP_IWTRIAL (dd14867e-8d31-4779-a595-304405f5ad39)<br/>POWERAPPS_PER_APP_IWTRIAL (35122886-cef5-44a3-ab36-97134eabd9ba) | CDS Per app baseline access (94a669d1-84d5-4e54-8462-53b0ae2c8be5)<br/>Flow per app baseline access (dd14867e-8d31-4779-a595-304405f5ad39)<br/>PowerApps per app baseline access (35122886-cef5-44a3-ab36-97134eabd9ba) | | Power Apps per app plan | POWERAPPS_PER_APP | a8ad7d2b-b8cf-49d6-b25a-69094a0be206 | CDS_PER_APP (9f2f00ad-21ae-4ceb-994b-d8bc7be90999)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_PER_APP (b4f657ff-d83e-4053-909d-baa2b595ec97)<br/>Flow_Per_APP (c539fa36-a64e-479a-82e1-e40ff2aa83ee) | CDS PowerApps per app plan (9f2f00ad-21ae-4ceb-994b-d8bc7be90999)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps per App Plan (b4f657ff-d83e-4053-909d-baa2b595ec97)<br/>Power Automate for Power Apps per App Plan (c539fa36-a64e-479a-82e1-e40ff2aa83ee) | | Power Apps per user plan | POWERAPPS_PER_USER | b30411f5-fea1-4a59-9ad9-3db7c7ead579 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_PER_USER (ea2cf03b-ac60-46ae-9c1d-eeaeb63cec86)<br/>Flow_PowerApps_PerUser (dc789ed8-0170-4b65-a415-eb77d5bb350a) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps per User Plan (ea2cf03b-ac60-46ae-9c1d-eeaeb63cec86)<br/>Power Automate for Power Apps per User Plan (dc789ed8-0170-4b65-a415-eb77d5bb350a) |
+| Power Apps per user plan for Government | POWERAPPS_PER_USER_GCC | 8e4c6baa-f2ff-4884-9c38-93785d0d7ba1 | CDSAICAPACITY_PERUSER (91f50f7b-2204-4803-acac-5cf5668b8b39)<br/>CDSAICAPACITY_PERUSER_NEW (74d93933-6f22-436e-9441-66d205435abb)<br/>DYN365_CDS_P2_GOV (37396c73-2203-48e6-8be1-d882dae53275)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PER_USER_GCC (8f55b472-f8bf-40a9-be30-e29919d4ddfe)<br/>Flow_PowerApps_PerUser_GCC (8e3eb3bd-bc99-4221-81b8-8b8bc882e128) | AI Builder capacity Per User add-on (91f50f7b-2204-4803-acac-5cf5668b8b39)<br/>AI Builder capacity Per User add-on (74d93933-6f22-436e-9441-66d205435abb)<br/>Common Data Service for Government (37396c73-2203-48e6-8be1-d882dae53275)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps per User Plan for Government (8f55b472-f8bf-40a9-be30-e29919d4ddfe)<br/>Power Automate for Power Apps per User Plan for GCC (8e3eb3bd-bc99-4221-81b8-8b8bc882e128) |
+| Power Apps Plan 1 for Government | POWERAPPS_P1_GOV | eca22b68-b31f-4e9c-a20c-4d40287bc5dd | DYN365_CDS_P1_GOV (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>FLOW_P1_GOV (774da41c-a8b3-47c1-8322-b9c1ab68be9f)<br/>POWERAPPS_P1_GOV (5ce719f1-169f-4021-8a64-7d24dcaec15f) | Common Data Service for Government (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Automate (Plan 1) for Government (774da41c-a8b3-47c1-8322-b9c1ab68be9f)<br/>PowerApps Plan 1 for Government (5ce719f1-169f-4021-8a64-7d24dcaec15f) |
+| Power Apps Portals login capacity add-on Tier 2 (10 unit min) for Government | POWERAPPS_PORTALS_LOGIN_T2_GCC | 26c903d5-d385-4cb1-b650-8d81a643b3c4 | CDS_POWERAPPS_PORTALS_LOGIN_GCC (0f7b9a29-7990-44ff-9d05-a76be778f410)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PORTALS_LOGIN_GCC (bea6aef1-f52d-4cce-ae09-bed96c4b1811) | Common Data Service Power Apps Portals Login Capacity for GCC (0f7b9a29-7990-44ff-9d05-a76be778f410)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps Portals Login Capacity Add-On for Government (bea6aef1-f52d-4cce-ae09-bed96c4b1811) |
+| Power Apps Portals page view capacity add-on for Government | POWERAPPS_PORTALS_PAGEVIEW_GCC | 15a64d3e-5b99-4c4b-ae8f-aa6da264bfe7 | CDS_POWERAPPS_PORTALS_PAGEVIEW_GCC (352257a9-db78-4217-a29d-8b8d4705b014)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>POWERAPPS_PORTALS_PAGEVIEW_GCC (483d5646-7724-46ac-ad71-c78b7f099d8d) | CDS PowerApps Portals page view capacity add-on for GCC (352257a9-db78-4217-a29d-8b8d4705b014)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Apps Portals Page View Capacity Add-On for Government (483d5646-7724-46ac-ad71-c78b7f099d8d) |
| Power Automate per flow plan | FLOW_BUSINESS_PROCESS | b3a42176-0a8c-4c3f-ba4e-f2b37fe5be6b | CDS_Flow_Business_Process (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_BUSINESS_PROCESS (7e017b61-a6e0-4bdc-861a-932846591f6e) | Common data service for Flow per business process plan (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per business process plan (7e017b61-a6e0-4bdc-861a-932846591f6e) | | Power Automate per user plan | FLOW_PER_USER | 4a51bf65-409c-4a91-b845-1121b571cc9d | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_PER_USER (c5002c70-f725-4367-b409-f0eff4fee6c0) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per user plan (c5002c70-f725-4367-b409-f0eff4fee6c0) | | Power Automate per user plan dept | FLOW_PER_USER_DEPT | d80a4c5d-8f05-4b64-9926-6574b9e6aee4 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> FLOW_PER_USER (c5002c70-f725-4367-b409-f0eff4fee6c0) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per user plan (c5002c70-f725-4367-b409-f0eff4fee6c0) |
+| Power Automate per user plan for Government | FLOW_PER_USER_GCC | c8803586-c136-479a-8ff3-f5f32d23a68e | DYN365_CDS_P2_GOV (37396c73-2203-48e6-8be1-d882dae53275)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>FLOW_PER_USER_GCC (769b8bee-2779-4c5a-9456-6f4f8629fd41) | Common Data Service for Government (37396c73-2203-48e6-8be1-d882dae53275)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Automate per User Plan for Government (769b8bee-2779-4c5a-9456-6f4f8629fd41) |
| Power Automate per user with attended RPA plan | POWERAUTOMATE_ATTENDED_RPA | eda1941c-3c4f-4995-b5eb-e85a42175ab9 | CDS_ATTENDED_RPA (3da2fd4c-1bee-4b61-a17f-94c31e5cab93)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_AUTOMATE_ATTENDED_RPA (375cd0ad-c407-49fd-866a-0bff4f8a9a4d) | Common Data Service Attended RPA (3da2fd4c-1bee-4b61-a17f-94c31e5cab93)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate RPA Attended (375cd0ad-c407-49fd-866a-0bff4f8a9a4d) |
+| Power Automate Plan 1 for Government (Qualified Offer) | FLOW_P1_GOV | 2b3b0c87-36af-4d15-8124-04a691cc2546 | DYN365_CDS_P1_GOV (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>FLOW_P1_GOV (774da41c-a8b3-47c1-8322-b9c1ab68be9f) | Common Data Service for Government (ce361df2-f2a5-4713-953f-4050ba09aad8)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Power Automate (Plan 1) for Government (774da41c-a8b3-47c1-8322-b9c1ab68be9f) |
| Power Automate unattended RPA add-on | POWERAUTOMATE_UNATTENDED_RPA | 3539d28c-6e35-4a30-b3a9-cd43d5d3e0e2 |CDS_UNATTENDED_RPA (b475952f-128a-4a44-b82a-0b98a45ca7fb)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER_AUTOMATE_UNATTENDED_RPA (0d373a98-a27a-426f-8993-f9a425ae99c5) | Common Data Service Unattended RPA (b475952f-128a-4a44-b82a-0b98a45ca7fb)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate Unattended RPA add-on (0d373a98-a27a-426f-8993-f9a425ae99c5) | | Power BI | POWER_BI_INDIVIDUAL_USER | e2767865-c3c9-4f09-9f99-6eee6eef861a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SQL_IS_SSIM (fc0a60aa-feee-4746-a0e3-aecfe81a38dd)<br/>BI_AZURE_P1 (2125cfd7-2110-4567-83c4-c1cd5275163d) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Power BI Information Services Plan 1 (fc0a60aa-feee-4746-a0e3-aecfe81a38dd)<br/>Microsoft Power BI Reporting and Analytics Plan 1 (2125cfd7-2110-4567-83c4-c1cd5275163d) | | Power BI (free) | POWER_BI_STANDARD | a403ebcc-fae0-4ca2-8c8c-7a907fd6c235 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P0 (2049e525-b859-401b-b2a0-e0a31c4b1fe4) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI (free) (2049e525-b859-401b-b2a0-e0a31c4b1fe4) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Power BI Pro | POWER_BI_PRO | f8a1db68-be16-40ed-86d5-cb42ce701560 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro CE | POWER_BI_PRO_CE | 420af87e-8177-4146-a780-3786adaffbca | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) | | Power BI Pro Dept | POWER_BI_PRO_DEPT | 3a6a908c-09c5-406a-8170-8ebb63c42882 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba) |
+| Power BI Pro for GCC | POWERBI_PRO_GOV | f0612879-44ea-47fb-baf0-3d76d9235576 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>BI_AZURE_P_2_GOV (944e9726-f011-4353-b654-5f7d2663db76) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)</br>Power BI Pro for Government (944e9726-f011-4353-b654-5f7d2663db76) |
| Power Virtual Agent | VIRTUAL_AGENT_BASE | e4e55366-9635-46f4-a907-fc8c3b5ec81f | CDS_VIRTUAL_AGENT_BASE (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>FLOW_VIRTUAL_AGENT_BASE (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>VIRTUAL_AGENT_BASE (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | Common Data Service for Virtual Agent Base (0a0a23fa-fea1-4195-bb89-b4789cb12f7f)<br/>Power Automate for Virtual Agent (4b81a949-69a1-4409-ad34-9791a6ec88aa)<br/>Virtual Agent Base (f6934f16-83d3-4f3b-ad27-c6e9c187b260) | | Power Virtual Agents Viral Trial | CCIBOTS_PRIVPREV_VIRAL | 606b54a9-78d8-4298-ad8b-df6ef4481c80 | DYN365_CDS_CCI_BOTS (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>CCIBOTS_PRIVPREV_VIRAL (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>FLOW_CCI_BOTS (5d798708-6473-48ad-9776-3acc301c40af) | Common Data Service for CCI Bots (cf7034ed-348f-42eb-8bbd-dddeea43ee81)<br/>Dynamics 365 AI for Customer Service Virtual Agents Viral (ce312d15-8fdf-44c0-9974-a25a177125ee)<br/>Flow for CCI Bots (5d798708-6473-48ad-9776-3acc301c40af) | | PROJECT FOR OFFICE 365 | PROJECTCLIENT | a10d5e58-74da-4312-95c8-76be4e5b75a0 | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3) | | Project Online Essentials | PROJECTESSENTIALS | 776df282-9fc0-4862-99e2-70e561b9909e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
+| Project Online Essentials for GCC | PROJECTESSENTIALS_GOV | ca1a159a-f09e-42b8-bb82-cb6420f54c8e | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>PROJECT_ESSENTIALS_GOV (fdcb7064-f45c-46fa-b056-7e0e9fdf4bf3)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Office for the Web for Government (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Project Online Essentials for Government (fdcb7064-f45c-46fa-b056-7e0e9fdf4bf3)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692) |
| PROJECT ONLINE PREMIUM | PROJECTPREMIUM | 09015f9f-377f-4538-bbb5-f75ceb09358a | PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | PROJECT ONLINE PREMIUM WITHOUT PROJECT CLIENT | PROJECTONLINE_PLAN_1 | 2db84718-652c-47a7-860c-f10d8abbdae3 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | PROJECT ONLINE WITH PROJECT FOR OFFICE 365 | PROJECTONLINE_PLAN_2 | f82a60b8-1ee3-4cfb-a4fe-1c6a53c2656c | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| SKYPE FOR BUSINESS PSTN DOMESTIC AND INTERNATIONAL CALLING | MCOPSTN2 | d3b4fe1f-9992-4930-8acb-ca6ec609365e | MCOPSTN2 (5a10155d-f5c1-411a-a8ec-e99aae125390) | DOMESTIC AND INTERNATIONAL CALLING PLAN (5a10155d-f5c1-411a-a8ec-e99aae125390) | | SKYPE FOR BUSINESS PSTN DOMESTIC CALLING | MCOPSTN1 | 0dab259f-bf13-4952-b7f8-7db8f131b28d | MCOPSTN1 (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8) | DOMESTIC CALLING PLAN (4ed3ff63-69d7-4fb7-b984-5aec7f605ca8) | | SKYPE FOR BUSINESS PSTN DOMESTIC CALLING (120 Minutes)| MCOPSTN5 | 54a152dc-90de-4996-93d2-bc47e670fc06 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | DOMESTIC CALLING PLAN (54a152dc-90de-4996-93d2-bc47e670fc06) |
+| Skype for Business PSTN Usage Calling Plan | MCOPSTNPP | 06b48c5f-01d9-4b18-9015-03b52040f51a | MCOPSTN3 (6b340437-d6f9-4dc5-8cc2-99163f7f83d6) | MCOPSTN3 (6b340437-d6f9-4dc5-8cc2-99163f7f83d6) |
+| Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| TELSTRA CALLING FOR O365 | MCOPSTNEAU2 | de3312e1-c7b0-46e6-a7c3-a515ff90bc86 | MCOPSTNEAU (7861360b-dc3b-4eba-a3fc-0d323a035746) | AUSTRALIA CALLING PLAN (7861360b-dc3b-4eba-a3fc-0d323a035746) | | Universal Print | UNIVERSAL_PRINT | 9f3d9c1d-25a5-4aaa-8e59-23a1e6450a67 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9) | | Visio Plan 1 | VISIO_PLAN1_DEPT | ca7f3140-d88c-455b-9a1c-7f0679e31a76 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OneDrive for business Basic (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>Visio web app (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| VISIO ONLINE PLAN 2 | VISIOCLIENT | c5928f49-12ba-48f7-ada3-0d743a3601d5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE FOR BUSINESS BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO DESKTOP APP (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIO WEB APP (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | VISIO PLAN 2 FOR GCC | VISIOCLIENT_GOV | 4ae99959-6b0f-43b0-b1ce-68146001bdba | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE_BASIC_GOV (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO_CLIENT_SUBSCRIPTION_GOV (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIOONLINE_GOV (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE FOR BUSINESS BASIC FOR GOVERNMENT (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO DESKTOP APP FOR Government (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIO WEB APP FOR GOVERNMENT (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | |Viva Topics | TOPIC_EXPERIENCES | 4016f256-b063-4864-816e-d818aad600c9 | GRAPH_CONNECTORS_SEARCH_INDEX_TOPICEXP (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>CORTEX (c815c93d-0759-4bb8-b857-bc921a71be83) | Graph Connectors Search with Index (Viva Topics) (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>Viva Topics (c815c93d-0759-4bb8-b857-bc921a71be83) |
+| Windows 10/11 Enterprise E5 (Original) | WIN_ENT_E5 | 1e7e1070-8ccb-4aca-b470-d7cb538cb07e | DATAVERSE_FOR_POWERAUTOMATE_DESKTOP (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>POWERAUTOMATE_DESKTOP_FOR_WIN (2d589a15-b171-4e61-9b5f-31d15eeb2872)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Dataverse for PAD (59231cdf-b40d-4534-a93e-14d0cd31d27e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>PAD for Windows (2d589a15-b171-4e61-9b5f-31d15eeb2872)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10/11 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/> Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) |
| Windows 10 Enterprise A3 for faculty | WIN10_ENT_A3_FAC | 8efbe2f6-106e-442f-97d4-a59aa6037e06 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | | Windows 10 Enterprise A3 for students | WIN10_ENT_A3_STU | d4ef921e-840b-4b48-9a90-ab6698bc7b31 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) | | WINDOWS 10 ENTERPRISE E3 | WIN10_PRO_ENT_SUB | cb10e6cd-9da4-4992-867b-67546b1db821 | WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111) | WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (Preview) | CPC_LVL_2 | 461cb62c-6db7-41aa-bf3c-ce78236cdb9e | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_2 (3efff3fe-528a-4fc5-b1ba-845802cc764f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 2 vCPU, 8 GB, 128 GB (3efff3fe-528a-4fc5-b1ba-845802cc764f) | | Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (Preview) | CPC_LVL_3 | bbb4bf6e-3e12-4343-84a1-54d160c00f40 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CPC_E_4C_16GB_256GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Windows 365 Enterprise 4 vCPU, 16 GB, 256 GB (9ecf691d-8b82-46cb-b254-cd061b2c02fb) | | WINDOWS STORE FOR BUSINESS | WINDOWS_STORE | 6470687e-a428-4b7a-bef2-8a291ad947c9 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS_STORE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDOWS STORE SERVICE (a420f25f-a7b3-4ff5-a9d0-5d58f73b537d) |
-| Microsoft Workplace Analytics | WORKPLACE_ANALYTICS | 3d957427-ecdc-4df2-aacd-01cc9d519da8 | WORKPLACE_ANALYTICS (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>WORKPLACE_ANALYTICS_INSIGHTS_BACKEND (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>WORKPLACE_ANALYTICS_INSIGHTS_USER (b622badb-1b45-48d5-920f-4b27a2c0996c) | Microsoft Workplace Analytics (f477b0f0-3bb1-4890-940c-40fcee6ce05f)<br/>Microsoft Workplace Analytics Insights Backend (ff7b261f-d98b-415b-827c-42a3fdf015af)<br/>Microsoft Workplace Analytics Insights User (b622badb-1b45-48d5-920f-4b27a2c0996c) |
+| Windows Store for Business EDU Faculty | WSFB_EDU_FACULTY | c7e9d9e6-1981-4bf3-bb50-a5bdfaa06fb2 | Windows Store for Business EDU Store_faculty (aaa2cd24-5519-450f-a1a0-160750710ca1) | Windows Store for Business EDU Store_faculty (aaa2cd24-5519-450f-a1a0-160750710ca1) |
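To check which of these license SKUs and service plan GUIDs are enabled in your own tenant, you can query them with Microsoft Graph PowerShell. The following is a minimal sketch, assuming the Microsoft.Graph.Identity.DirectoryManagement module is installed and your account can consent to the Organization.Read.All scope:
```powershell
# Sign in with read access to organization subscriptions.
Connect-MgGraph -Scopes "Organization.Read.All"

# List each subscribed SKU (string ID and GUID), then its included service plans.
Get-MgSubscribedSku | ForEach-Object {
    "{0} ({1})" -f $_.SkuPartNumber, $_.SkuId
    $_.ServicePlans | ForEach-Object {
        "    {0} ({1})" -f $_.ServicePlanName, $_.ServicePlanId
    }
}
```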
## Service plans that cannot be assigned at the same time
active-directory B2b Quickstart Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-invite-powershell.md
Title: 'Quickstart: Add a guest user with PowerShell - Azure AD'
-description: In this quickstart, you learn how to use PowerShell to send an invitation to an external Azure AD B2B collaboration user.
+description: In this quickstart, you learn how to use PowerShell to send an invitation to an external Azure AD B2B collaboration user. You'll use the Microsoft Graph Identity Sign-ins and the Microsoft Graph Users PowerShell modules.
- Previously updated : 08/28/2018 Last updated : 02/16/2022
# Quickstart: Add a guest user with PowerShell
-There are many ways you can invite external partners to your apps and services with Azure Active Directory B2B collaboration. In the previous quickstart, you saw how to add guest users directly in the Azure Active Directory admin portal. You can also use PowerShell to add guest users, either one at a time or in bulk. In this quickstart, you'll use the New-AzureADMSInvitation command to add one guest user to your Azure tenant.
+There are many ways you can invite external partners to your apps and services with Azure Active Directory B2B collaboration. In the previous quickstart, you saw how to add guest users directly in the Azure Active Directory admin portal. You can also use PowerShell to add guest users, either one at a time or in bulk. In this quickstart, you'll use the New-MgInvitation command to add one guest user to your Azure tenant.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prerequisites
### PowerShell Module
-Install the [Azure AD V2 PowerShell for Graph module](/powershell/azure/active-directory/install-adv2) (AzureAD) or the [Azure AD V2 PowerShell for Graph module preview version](/powershell/azure/active-directory/install-adv2?view=azureadps-2.0-preview&preserve-view=true) (AzureADPreview).
+Install the [Microsoft Graph Identity Sign-ins module](/powershell/module/microsoft.graph.identity.signins/?view=graph-powershell-beta) (Microsoft.Graph.Identity.SignIns) and the [Microsoft Graph Users module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta) (Microsoft.Graph.Users).
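If the modules aren't installed yet, a typical installation looks like the following (a sketch; the module names come from the links above):
```powershell
# Install both Microsoft Graph PowerShell modules for the current user.
Install-Module Microsoft.Graph.Identity.SignIns -Scope CurrentUser
Install-Module Microsoft.Graph.Users -Scope CurrentUser
```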
### Get a test email account
You need a test email account that you can send the invitation to. The account m
Run the following command to connect to the tenant domain:
```powershell
-Connect-AzureAD -TenantDomain "<Tenant_Domain_Name>"
+Connect-MgGraph -Scopes "User.ReadWrite.All"
```
-For example, `Connect-AzureAD -TenantDomain "contoso.onmicrosoft.com"`.
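If your account has access to more than one tenant, you can target a specific one. This sketch uses the `-TenantId` parameter of `Connect-MgGraph`, which takes a tenant ID; passing the tenant's domain name instead, as shown here, is an assumption:
```powershell
# Sign in against a specific tenant rather than the account's home tenant.
Connect-MgGraph -Scopes "User.ReadWrite.All" -TenantId "contoso.onmicrosoft.com"
```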
When prompted, enter your credentials.
## Send an invitation
-1. To send an invitation to your test email account, run the following PowerShell command (replace **"Sanda"** and **sanda\@fabrikam.com** with your test email account name and email address):
+1. To send an invitation to your test email account, run the following PowerShell command (replace **"John Doe"** and **john\@contoso.com** with your test email account name and email address):
```powershell
- New-AzureADMSInvitation -InvitedUserDisplayName "Sanda" -InvitedUserEmailAddress sanda@fabrikam.com -InviteRedirectURL https://myapps.microsoft.com -SendInvitationMessage $true
+ New-MgInvitation -InvitedUserDisplayName "John Doe" -InvitedUserEmailAddress John@contoso.com -InviteRedirectUrl "https://myapplications.microsoft.com" -SendInvitationMessage:$true
```
-2. The command sends an invitation to the email address specified. Check the output, which should look similar to the following:
+1. The command sends an invitation to the email address specified. Check the output, which should look similar to the following example:
- ![PowerShell output showing pending user acceptance](media/quickstart-invite-powershell/powershell-azureadmsinvitation-result.png)
+ ![PowerShell output of the invitation command](media/quickstart-invite-powershell/powershell-mginvitation-result.png)
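If you want to inspect the result in the same session, you can also capture the invitation object that `New-MgInvitation` returns. This sketch relies on the `Status` and `InviteRedeemUrl` properties exposed by the underlying Graph invitation resource:
```powershell
# Capture the invitation so its properties can be checked afterward.
$invitation = New-MgInvitation -InvitedUserDisplayName "John Doe" `
    -InvitedUserEmailAddress John@contoso.com `
    -InviteRedirectUrl "https://myapplications.microsoft.com" `
    -SendInvitationMessage:$true

# "PendingAcceptance" until the guest redeems the invitation.
$invitation.Status
$invitation.InviteRedeemUrl
```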
## Verify the user exists in the directory
-1. To verify that the invited user was added to Azure AD, run the following command:
+1. To verify that the invited user was added to Azure AD, run the following command (replace **john\@contoso.com** with your invited email):
```powershell
- Get-AzureADUser -Filter "UserType eq 'Guest'"
+ Get-MgUser -Filter "Mail eq 'John@contoso.com'"
```
-3. Check the output to make sure the user you invited is listed, with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example, *sanda_fabrikam.com#EXT#\@contoso.onmicrosoft.com*, where contoso.onmicrosoft.com is the organization from which you sent the invitations.
+1. Check the output to make sure the user you invited is listed, with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example, *john_contoso.com#EXT#\@fabrikam.onmicrosoft.com*, where fabrikam.onmicrosoft.com is the organization from which you sent the invitations.
- ![PowerShell output showing guest user added](media/quickstart-invite-powershell/powershell-guest-user-added.png)
+ ![PowerShell output showing guest user added](media/quickstart-invite-powershell/powershell-mginvitation-guest-user-add.png)
## Clean up resources
When no longer needed, you can delete the test user account in the directory. Run the following command:
```powershell
Remove-MgUser -UserId "<UPN>"
```
-For example: `Remove-AzureADUser -ObjectId "sanda_fabrikam.com#EXT#@contoso.onmicrosoft.com"`
+For example: `Remove-MgUser -UserId "john_contoso.com#EXT#@fabrikam.onmicrosoft.com"`
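If you'd rather not type the long #EXT# UPN by hand, one alternative sketch (assuming the guest's **Mail** property still holds the invited address) is to look the account up first:
```powershell
# Find the guest by the invited email address, then remove it by object ID.
$guest = Get-MgUser -Filter "Mail eq 'John@contoso.com'"
Remove-MgUser -UserId $guest.Id
```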
## Next steps
active-directory Tutorial Bulk Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md
Title: Tutorial for bulk inviting B2B collaboration users - Azure AD
-description: In this tutorial, you learn how to use PowerShell and a CSV file to send bulk invitations to external Azure AD B2B collaboration users.
+description: In this tutorial, you learn how to use PowerShell and a CSV file to send bulk invitations to external Azure AD B2B collaboration users. You'll use the Microsoft.Graph.Users PowerShell module.
Previously updated : 03/17/2021 Last updated : 02/16/2022 - # Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
# Tutorial: Bulk invite Azure AD B2B collaboration users
-If you use Azure Active Directory (Azure AD) B2B collaboration to work with external partners, you can invite multiple guest users to your organization at the same time. In this tutorial, you learn how to use the Azure portal to send bulk invitations to external users. Specifically, you do the following:
+If you use Azure Active Directory (Azure AD) B2B collaboration to work with external partners, you can invite multiple guest users to your organization at the same time. In this tutorial, you learn how to use the Azure portal to send bulk invitations to external users. Specifically, you'll follow these steps:
> [!div class="checklist"]
> * Use **Bulk invite users** to prepare a comma-separated value (.csv) file with the user information and invitation preferences
The rows in a downloaded CSV template are as follows:
- **Version number**: The first row containing the version number must be included in the upload CSV.
- **Column headings**: The format of the column headings is &lt;*Item name*&gt; [PropertyName] &lt;*Required or blank*&gt;. For example, `Email address to invite [inviteeEmail] Required`. Some older versions of the template might have slight variations.
-- **Examples row**: We have included in the template a row of examples of values for each column. You must remove the examples row and replace it with your own entries.
+- **Examples row**: We've included in the template a row of examples of values for each column. You must remove the examples row and replace it with your own entries.
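Put together, a small upload file might look like the following sketch, written here as a PowerShell here-string so it's easy to generate. The version row and the exact header text are assumptions; always copy the first two rows from the template you actually downloaded:
```powershell
# Write a minimal bulk-invite CSV (headers shown here are assumed; take them
# from the downloaded template, and keep the first two rows unmodified).
@"
version:v1.0
Email address to invite [inviteeEmail] Required,Redirection url [inviteRedirectUrl] Required
lstokes@fabrikam.com,https://myapplications.microsoft.com
mdaniels@fabrikam.com,https://myapplications.microsoft.com
"@ | Set-Content -Path .\bulk-invite.csv
```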
### Additional guidance
- The first two rows of the upload template must not be removed or modified, or the upload can't be processed.
- The required columns are listed first.
-- We don't recommend adding new columns to the template. Any additional columns you add are ignored and not processed.
+- We don't recommend adding new columns to the template. Any columns you add are ignored and not processed.
- We recommend that you download the latest version of the CSV template as often as possible.
## Prerequisites
You need two or more test email accounts that you can send the invitations to. T
7. On the **Bulk invite users** page, under **Upload your csv file**, browse to the file. When you select the file, validation of the .csv file starts.
8. When the file contents are validated, you'll see **File uploaded successfully**. If there are errors, you must fix them before you can submit the job.
9. When your file passes validation, select **Submit** to start the Azure bulk operation that adds the invitations.
-10. To view the job status, select **Click here to view the status of each operation**. Or, you can select **Bulk operation results** in the **Activity** section. For details about each line item within the the bulk operation, select the values under the **# Success**, **# Failure**, or **Total Requests** columns. If failures occurred, the reasons for failure will be listed.
+10. To view the job status, select **Click here to view the status of each operation**. Or, you can select **Bulk operation results** in the **Activity** section. For details about each line item within the bulk operation, select the values under the **# Success**, **# Failure**, or **Total Requests** columns. If failures occurred, the reasons for failure will be listed.
![Example of bulk operation results](media/tutorial-bulk-invite/bulk-operation-results.png)
Check to see that the guest users you added exist in the directory either in the
### View guest users with PowerShell
+To view guest users with PowerShell, you'll need the [Microsoft.Graph.Users PowerShell Module](/powershell/module/microsoft.graph.users/?view=graph-powershell-beta). Then sign in using the `Connect-MgGraph` command with an admin account to consent to the required scopes:
+```powershell
+Connect-MgGraph -Scopes "User.Read.All"
+```
Run the following command:
```powershell
- Get-AzureADUser -Filter "UserType eq 'Guest'"
+ Get-MgUser -Filter "UserType eq 'Guest'"
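 # Optional sketch: page through every guest and keep only the columns you need
 # (assumption: the -All switch, which requests all pages of results).
 Get-MgUser -Filter "UserType eq 'Guest'" -All | Select-Object DisplayName, Mail, UserPrincipalName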
```
You should see the users that you invited listed, with a user principal name (UPN) in the format *emailaddress*#EXT#\@*domain*. For example, *lstokes_fabrikam.com#EXT#\@contoso.onmicrosoft.com*, where contoso.onmicrosoft.com is the organization from which you sent the invitations.
## Clean up resources
-When no longer needed, you can delete the test user accounts in the directory in the Azure portal on the Users page by selecting the checkbox next to the guest user and then selecting **Delete**.
+When no longer needed, you can delete the test user accounts in the directory in the Azure portal on the Users page by selecting the checkbox next to the guest user and then selecting **Delete**.
Or you can run the following PowerShell command to delete a user account:
```powershell
- Remove-AzureADUser -ObjectId "<UPN>"
+ Remove-MgUser -UserId "<UPN>"
```
-For example: `Remove-AzureADUser -ObjectId "lstokes_fabrikam.com#EXT#@contoso.onmicrosoft.com"`
+For example: `Remove-MgUser -UserId "lstokes_fabrikam.com#EXT#@contoso.onmicrosoft.com"`
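To clean up all of the test guests in one pass, a sketch like the following works, but only if every account with UserType 'Guest' in the tenant really is one of your test invitees; double-check before running it:
```powershell
# Remove every guest account. Destructive: confirm the filter matches only
# your test invitees before running this.
Get-MgUser -Filter "UserType eq 'Guest'" -All | ForEach-Object {
    Remove-MgUser -UserId $_.Id
}
```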
## Next steps
active-directory Add Users Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-users-azure-active-directory.md
Previously updated : 05/04/2021 Last updated : 02/16/2022
Add new users or delete existing users from your Azure Active Directory (Azure AD) organization. To add or delete users, you must be a User administrator or Global administrator.
## Add a new user
You can create a new user using the Azure Active Directory portal.
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For more information about how to better secure your organization by using autom
In January 2022, we've added the following 47 new applications in our App gallery with Federation support:
-[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://www.healthnote.com/), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Active and Thriving - Perth Airport](../saas-apps/active-and-thriving-perth-airport-tutorial.md), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), [DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
+[Jooto](../saas-apps/jooto-tutorial.md), [Proprli](https://app.proprli.com/), [Pace Scheduler](https://www.pacescheduler.com/accounts/login/), [DRTrack](../saas-apps/drtrack-tutorial.md), [Dining Sidekick](../saas-apps/dining-sidekick-tutorial.md), [Cryotos](https://app.cryotos.com/oauth2/authorization/azure-client), [Emergency Management Systems](https://secure.emsystems.com.au/), [Manifestly Checklists](../saas-apps/manifestly-checklists-tutorial.md), [eLearnPOSH](../saas-apps/elearnposh-tutorial.md), [Scuba Analytics](../saas-apps/scuba-analytics-tutorial.md), [Athena Systems Login Platform](../saas-apps/athena-systems-login-platform-tutorial.md), [TimeTrack](../saas-apps/timetrack-tutorial.md), [MiHCM](../saas-apps/mihcm-tutorial.md), [Health Note](https://auth.healthnote.works/oauth), [Active Directory SSO for DoubleYou](../saas-apps/active-directory-sso-for-doubleyou-tutorial.md), [Emplifi platform](../saas-apps/emplifi-platform-tutorial.md), [Flexera One](../saas-apps/flexera-one-tutorial.md), [Hypothesis](https://web.hypothes.is/help/authorizing-hypothesis-from-the-azure-ad-app-gallery/), [Recurly](../saas-apps/recurly-tutorial.md), [XpressDox AU Cloud](https://au.xpressdox.com/Authentication/Login.aspx), [Zoom for Intune](https://zoom.us/), [UPWARD AGENT](https://app.upward.jp/login/), [Linux Foundation ID](https://openprofile.dev/), [Asset Planner](../saas-apps/asset-planner-tutorial.md), [Kiho](https://v3.kiho.fi/index/sso), [chezie](https://app.chezie.co/), [Excelity HCM](../saas-apps/excelity-hcm-tutorial.md), [yuccaHR](https://app.yuccahr.com/), [Blue Ocean Brain](../saas-apps/blue-ocean-brain-tutorial.md), [EchoSpan](../saas-apps/echospan-tutorial.md), [Archie](../saas-apps/archie-tutorial.md), [Equifax Workforce Solutions](../saas-apps/equifax-workforce-solutions-tutorial.md), [Palantir Foundry](../saas-apps/palantir-foundry-tutorial.md), [ATP SpotLight and ChronicX](../saas-apps/atp-spotlight-and-chronicx-tutorial.md), [DigiSign](https://app.digisign.org/selfcare/sso), [mConnect](https://mconnect.skooler.com/), [BrightHR](https://login.brighthr.com/), [Mural Identity](../saas-apps/mural-identity-tutorial.md), [NordPass SSO](https://app.nordpass.com/login%20use%20%22Log%20in%20to%20business%22%20option), [CloudClarity](https://portal.cloudclarity.app/dashboard), [Twic](../saas-apps/twic-tutorial.md), [Eduhouse Online](https://app.eduhouse.fi/palvelu/kirjaudu/microsoft), [Bealink](../saas-apps/bealink-tutorial.md), [Time Intelligence Bot](https://teams.microsoft.com/), [SentinelOne](https://sentinelone.com/)
You can also find the documentation of all the applications at https://aka.ms/AppsTutorial,
active-directory Deploy Access Reviews https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/deploy-access-reviews.md
The administrative role required to create, manage, or read an access review dep
| Group or application| Global administrator <p>User administrator<p>Identity Governance administrator<p>Privileged Role administrator (only does reviews for Azure AD role-assignable groups)<p>Group owner ([if enabled by an admin](create-access-review.md#allow-group-owners-to-create-and-manage-access-reviews-of-their-groups-preview))| Global administrator<p>Global reader<p>User administrator<p>Identity Governance administrator<p>Privileged Role administrator<p>Security reader<p>Group owner ([if enabled by an admin](create-access-review.md#allow-group-owners-to-create-and-manage-access-reviews-of-their-groups-preview)) |
|Azure AD roles| Global administrator <p>Privileged Role administrator| Global administrator<p>Global reader<p>User administrator<p>Privileged Role administrator<p>Security reader |
| Azure resource roles| User Access Administrator (for the resource)<p>Resource owner| User Access Administrator (for the resource)<p>Resource owner<p>Reader (for the resource) |
-| Access package| Global administrator<p>User administrator<p>Identity Governance administrator| Global administrator<p>Global reader<p>User administrator<p>Identity Governance administrator<p> <p>Security reader |
+| Access package| Global administrator<p>User administrator<p>Identity Governance administrator<p>Catalog owner (for the access package)<p>Access package manager (for the access package)| Global administrator<p>Global reader<p>User administrator<p>Identity Governance administrator<p>Catalog owner (for the access package)<p>Access package manager (for the access package)<p>Security reader |
For more information, see [Administrator role permissions in Azure AD](../roles/permissions-reference.md).
active-directory How To Connect Fed Group Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-fed-group-claims.md
Title: Configure group claims for applications with Azure Active Directory | Microsoft Docs
-description: Information on how to configure group claims for use with Azure AD.
+ Title: Configure group claims for applications by using Azure Active Directory | Microsoft Docs
+description: Get information on how to configure group claims for use with Azure AD.
documentationcenter: ''
-# Configure group claims for applications with Azure Active Directory
+# Configure group claims for applications by using Azure Active Directory
-Azure Active Directory can provide a users group membership information in tokens for use within applications. Two main patterns are supported:
+Azure Active Directory (Azure AD) can provide a user's group membership information in tokens for use within applications. This feature supports two main patterns:
-- Groups identified by their Azure Active Directory object identifier (OID) attribute
-- Groups identified by sAMAccountName or GroupSID attributes for Active Directory (AD) synchronized groups and users
+- Groups identified by their Azure AD object identifier (OID) attribute
+- Groups identified by the `sAMAccountName` or `GroupSID` attribute for Active Directory-synchronized groups and users
-> [!IMPORTANT]
-> There are a number of caveats to note for this functionality:
->
-> - Support for use of sAMAccountName and security identifier (SID) attributes synced from on-premises is designed to enable moving existing applications from AD FS and other identity providers. Groups managed in Azure AD do not contain the attributes necessary to emit these claims.
-> - In larger organizations the number of groups a user is a member of may exceed the limit that Azure Active Directory will add to a token. 150 groups for a SAML token, and 200 for a JWT. This can lead to unpredictable results. If your users have large numbers of group memberships, we recommend using the option to restrict the groups emitted in claims to the relevant groups for the application. If for any reason assigning groups to your applications is not possible, we also provide the option of configuring a [group filter](#group-filtering) which can also reduce the number of groups emitted in the claim.
-> - Group claims have a 5-group limit if the token is issued through the implicit flow. Tokens requested via the implicit flow will only have a "hasgroups":true claim if the user is in more than 5 groups.
-> - For new application development, or in cases where the application can be configured for it, and where nested group support isn't required, we recommend that in-app authorization is based on application roles rather than groups. This limits the amount of information that needs to go into the token, is more secure, and separates user assignment from app configuration.
+## Important caveats for this functionality
+
+- Support for use of `sAMAccountName` and security identifier (SID) attributes synced from on-premises is designed to enable moving existing applications from Active Directory Federation Services (AD FS) and other identity providers. Groups managed in Azure AD don't contain the attributes necessary to emit these claims.
+- In larger organizations, the number of groups where a user is a member might exceed the limit that Azure AD will add to a token. Those limits are 150 groups for a SAML token and 200 for a JSON Web Token (JWT). Exceeding a limit can lead to unpredictable results.
+
+ If your users have large numbers of group memberships, we recommend using the option to restrict the groups emitted in claims to the relevant groups for the application. If assigning groups to your applications is not possible, you can configure a [group filter](#group-filtering) to reduce the number of groups emitted in the claim.
+- Group claims have a five-group limit if the token is issued through the implicit flow. Tokens requested via the implicit flow will have a `"hasgroups":true` claim only if the user is in more than five groups, as shown in the sketch after this list.
+- We recommend basing in-app authorization on application roles rather than groups when:
+
+ - You're developing a new application, or an existing application can be configured for it.
+ - Support for nested groups isn't required.
+
+ Using application roles limits the amount of information that needs to go into the token, is more secure, and separates user assignment from app configuration.
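If your application receives implicit-flow tokens, it can check for that overage marker before deciding how to resolve group membership. A minimal PowerShell sketch, assuming you have a raw token string to inspect (the `$jwt` value is a placeholder):

```powershell
# Decode the payload segment of a JWT and look for the "hasgroups" overage
# marker. $jwt is a placeholder; supply a real token string.
$jwt = '<raw-access-token>'
$payload = ($jwt -split '\.')[1].Replace('-', '+').Replace('_', '/')
# Base64Url drops padding; restore it before decoding.
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
$claims = [System.Text.Encoding]::UTF8.GetString(
    [System.Convert]::FromBase64String($payload)) | ConvertFrom-Json
if ($claims.hasgroups) {
    'Groups were omitted from the token; query Microsoft Graph for membership.'
}
```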
## Group claims for applications migrating from AD FS and other identity providers
-Many applications configured to authenticate with AD FS rely on group membership information in the form of Windows AD group attributes. These attributes are the group sAMAccountName, which may be qualified by-domain name, or the Windows Group Security Identifier (GroupSID). When the application is federated with AD FS, AD FS uses the TokenGroups function to retrieve the group memberships for the user.
+Many applications that are configured to authenticate with AD FS rely on group membership information in the form of Windows Server Active Directory group attributes. These attributes are the group `sAMAccountName`, which might be qualified by domain name, or the Windows group security identifier (`GroupSID`). When the application is federated with AD FS, AD FS uses the `TokenGroups` function to retrieve the group memberships for the user.
-An app that has been moved from AD FS needs claims in the same format. Group and role claims may be emitted from Azure Active Directory containing the domain qualified sAMAccountName or the GroupSID synced from Active Directory rather than the group's Azure Active Directory objectID.
+An app that has been moved from AD FS needs claims in the same format. Group and role claims emitted from Azure AD might contain the domain-qualified `sAMAccountName` attribute or the `GroupSID` attribute synced from Active Directory, rather than the group's Azure AD `objectID` attribute.
The supported formats for group claims are:

-- **Azure Active Directory Group ObjectId** (Available for all groups)
-- **sAMAccountName** (Available for groups synchronized from Active Directory)
-- **NetbiosDomain\sAMAccountName** (Available for groups synchronized from Active Directory)
-- **DNSDomainName\sAMAccountName** (Available for groups synchronized from Active Directory)
-- **On Premises Group Security Identifier** (Available for groups synchronized from Active Directory)
+- **Azure AD group ObjectId**: Available for all groups.
+- **sAMAccountName**: Available for groups synchronized from Active Directory.
+- **NetbiosDomain\sAMAccountName**: Available for groups synchronized from Active Directory.
+- **DNSDomainName\sAMAccountName**: Available for groups synchronized from Active Directory.
+- **On-premises group security identifier**: Available for groups synchronized from Active Directory.
> [!NOTE]
-> sAMAccountName and On Premises Group SID attributes are only available on Group objects synced from Active Directory. They aren't available on groups created in Azure Active Directory or Office365. Applications configured in Azure Active Directory to get synced on-premises group attributes get them for synced groups only.
+> `sAMAccountName` and on-premises `GroupSID` attributes are available only on group objects synced from Active Directory. They aren't available on groups created in Azure AD or Office 365. Applications configured in Azure AD to get synced on-premises group attributes get them for synced groups only.
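To make the formats concrete, here's a sketch with hypothetical values showing how one synced group might be rendered under each supported format:

```powershell
# Hypothetical claim values for a single synced group under each format.
$groupClaimFormats = [ordered]@{
    'Azure AD group ObjectId'      = 'aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb'
    'sAMAccountName'               = 'FinanceTeam'
    'NetbiosDomain\sAMAccountName' = 'CONTOSO\FinanceTeam'
    'DNSDomainName\sAMAccountName' = 'contoso.com\FinanceTeam'
    'On-premises group SID'        = 'S-1-5-21-1004336348-1177238915-682003330-1234'
}
$groupClaimFormats
```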
## Options for applications to consume group information
-Applications can call the MS Graph groups endpoint to obtain group information for the authenticated user. This call ensures that all the groups a user is a member of are available even when there are a large number of groups involved. Group enumeration is then independent of token size limitations.
+Applications can call the Microsoft Graph groups endpoint to obtain group information for the authenticated user. This call ensures that all the groups where a user is a member are available, even when a large number of groups is involved. Group enumeration is then independent of limitations on token size.
+
+However, if an existing application expects to consume group information via claims, you can configure Azure AD with various claim formats. Consider the following options:
-However, if an existing application expects to consume group information via claims, Azure Active Directory can be configured with a number of different claims formats. Consider the following options:
+- When you're using group membership for in-application authorization, it's preferable to use the group `ObjectID` attribute. The group `ObjectID` attribute is immutable and unique in Azure AD. It's available for all groups.
+- If you're using the on-premises group `sAMAccountName` attribute for authorization, use domain-qualified names. It reduces the chance of names clashing. `sAMAccountName` might be unique within an Active Directory domain, but if more than one Active Directory domain is synchronized with an Azure AD tenant, there's a possibility for more than one group to have the same name.
+- Consider using [application roles](../../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md) to provide a layer of indirection between the group membership and the application. The application then makes internal authorization decisions based on role claims in the token.
+- If the application is configured to get group attributes that are synced from Active Directory and a group doesn't contain those attributes, it won't be included in the claims.
+- Group claims in tokens include nested groups, except when you're using the option to restrict the group claims to groups that are assigned to the application.
-- When using group membership for in-application authorization purposes it is preferable to use the Group ObjectID. The Group ObjectID is immutable and unique in Azure Active Directory and available for all groups.
-- If using the on-premises group sAMAccountName for authorization, use domain qualified names; there's less chance of names clashing. sAMAccountName may be unique within an Active Directory domain, but if more than one Active Directory domain is synchronized with an Azure Active Directory tenant there is a possibility for more than one group to have the same name.
-- Consider using [Application Roles](../../active-directory/develop/howto-add-app-roles-in-azure-ad-apps.md) to provide a layer of indirection between the group membership and the application. The application then makes internal authorization decisions based on role claims in the token.
-- If the application is configured to get group attributes that are synced from Active Directory and a Group doesn't contain those attributes, it won't be included in the claims.
-- Group claims in tokens include nested groups except when using the option to restrict the group claims to groups assigned to the application. If a user is a member of GroupB and GroupB is a member of GroupA, then the group claims for the user will contain both GroupA and GroupB. When an organization's users have large numbers of group memberships, the number of groups listed in the token can grow the token size. Azure Active Directory limits the number of groups it will emit in a token to 150 for SAML assertions, and 200 for JWT. If a user is a member of a larger number of groups, the groups are omitted and a link to the Graph endpoint to obtain group information is included instead.
+ If a user is a member of GroupB, and GroupB is a member of GroupA, then the group claims for the user will contain both GroupA and GroupB. When an organization's users have large numbers of group memberships, the number of groups listed in the token can grow the token size. Azure AD limits the number of groups that it will emit in a token to 150 for SAML assertions and 200 for JWT. If a user is a member of a larger number of groups, the groups are omitted. A link to the Microsoft Graph endpoint to obtain group information is included instead.
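When the overage occurs, the application can fall back to Microsoft Graph to enumerate memberships. A minimal sketch using the Microsoft Graph PowerShell SDK (the UPN is a placeholder):

```powershell
# Enumerate a user's group memberships transitively (nested groups included),
# independent of token size limits. The UPN is a placeholder.
Connect-MgGraph -Scopes 'GroupMember.Read.All'
Get-MgUserTransitiveMemberOf -UserId 'lstokes@contoso.com' -All |
    Select-Object -ExpandProperty Id
```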
-## Prerequisites for using Group attributes synchronized from Active Directory
+## Prerequisites for using group attributes synchronized from Active Directory
-Group membership claims can be emitted in tokens for any group if you use the ObjectId format. To use group claims in formats other than the group ObjectId, the groups must be synchronized from Active Directory using Azure AD Connect.
+Group membership claims can be emitted in tokens for any group if you use the `ObjectId` format. To use group claims in formats other than group `ObjectId`, the groups must be synchronized from Active Directory via Azure AD Connect.
-There are two steps to configuring Azure Active Directory to emit group names for Active Directory Groups.
+To configure Azure AD to emit group names for Active Directory groups:
1. **Synchronize group names from Active Directory**
-Before Azure Active Directory can emit the group names or on premises group SID in group or role claims, the required attributes need to be synchronized from Active Directory. You must be running Azure AD Connect version 1.2.70 or later. Earlier versions of Azure AD Connect than 1.2.70 will synchronize the group objects from Active Directory, but will not include the required group name attributes. Upgrade to the current version.
-2. **Configure the application registration in Azure Active Directory to include group claims in tokens**
-Group claims can be configured in the Enterprise Applications section of the portal, or using the Application Manifest in the Application Registrations section. To configure group claims in the application manifest see "Configuring the Azure Active Directory Application Registration for group attributes" below.
+ Before Azure AD can emit the group names or on-premises group SID in group or role claims, you need to synchronize the required attributes from Active Directory. You must be running Azure AD Connect version 1.2.70 or later. Versions of Azure AD Connect earlier than 1.2.70 will synchronize the group objects from Active Directory, but they won't include the required group name attributes.
+
+2. **Configure the application registration in Azure AD to include group claims in tokens**
+
+ You can configure group claims in the **Enterprise Applications** section of the portal, or by using the application manifest in the **Application Registrations** section. To configure group claims in the application manifest, see [Configure the Azure AD application registration for group attributes](#configure-the-azure-ad-application-registration-for-group-attributes) later in this article.
## Add group claims to tokens for SAML applications using SSO configuration
-To configure Group Claims for a Gallery or Non-Gallery SAML application, open **Enterprise Applications**, click on the application in the list, select **Single Sign On configuration**, and then select **User Attributes & Claims**.
+To configure group claims for a gallery or non-gallery SAML application via single sign-on (SSO):
+
+1. Open **Enterprise Applications**, select the application in the list, select **Single Sign On configuration**, and then select **User Attributes & Claims**.
-Click on **Add a group claim**
+1. Select **Add a group claim**.
-![Screenshot that shows the "User Attributes & Claims" page with "Add a group claim" selected.](media/how-to-connect-fed-group-claims/group-claims-ui-1.png)
+ ![Screenshot that shows the page for user attributes and claims, with the button for adding a group claim selected.](media/how-to-connect-fed-group-claims/group-claims-ui-1.png)
-Use the radio buttons to select which groups should be included in the token
+1. Use the options to select which groups should be included in the token.
-![Screenshot that shows the "Group Claims" window with "Security groups" selected.](media/how-to-connect-fed-group-claims/group-claims-ui-2.png)
+ ![Screenshot that shows the Group Claims window with group options.](media/how-to-connect-fed-group-claims/group-claims-ui-2.png)
-| Selection | Description |
-|-|-|
-| **All groups** | Emits security groups and distribution lists and roles. |
-| **Security groups** | Emits security groups the user is a member of in the groups claim |
-| **Directory roles** | If the user is assigned directory roles, they are emitted as a 'wids' claim (groups claim won't be emitted) |
-| **Groups assigned to the application** | Emits only the groups that are explicitly assigned to the application and the user is a member of |
+ | Selection | Description |
+ |-|-|
+ | **All groups** | Emits security groups and distribution lists and roles. |
+ | **Security groups** | Emits security groups that the user is a member of in the groups claim. |
+ | **Directory roles** | If the user is assigned directory roles, they're emitted as a `wids` claim. (The groups claim won't be emitted.) |
+ | **Groups assigned to the application** | Emits only the groups that are explicitly assigned to the application and that the user is a member of. |
-For example, to emit all the Security Groups the user is a member of, select Security Groups
+ - For example, to emit all the security groups that the user is a member of, select **Security groups**.
-![Screenshot that shows the "Group Claims" window with "Security groups" selected and the "Source attribute" drop-down menu open.](media/how-to-connect-fed-group-claims/group-claims-ui-3.png)
+ ![Screenshot that shows the Group Claims window, with the option for security groups selected.](media/how-to-connect-fed-group-claims/group-claims-ui-3.png)
-To emit groups using Active Directory attributes synced from Active Directory instead of Azure AD objectIDs select the required format from the drop-down. Only groups synchronized from Active Directory will be included in the claims.
+ To emit groups by using Active Directory attributes synced from Active Directory instead of Azure AD `objectID` attributes, select the required format from the **Source attribute** drop-down list. Only groups synchronized from Active Directory will be included in the claims.
-![Screenshot that shows the "Source attribute" drop-down menu open.](media/how-to-connect-fed-group-claims/group-claims-ui-4.png)
+ ![Screenshot that shows the drop-down menu for the source attribute.](media/how-to-connect-fed-group-claims/group-claims-ui-4.png)
-To emit only groups assigned to the application, select **Groups Assigned to the application**
+ - To emit only groups assigned to the application, select **Groups assigned to the application**.
-![Screenshot that shows the "Group Claims" window with "Groups assigned to the application" selected.](media/how-to-connect-fed-group-claims/group-claims-ui-4-1.png)
+ ![Screenshot that shows the Group Claims window, with the option for groups assigned to the application selected.](media/how-to-connect-fed-group-claims/group-claims-ui-4-1.png)
-Groups assigned to the application will be included in the token. Other groups the user is a member of will be omitted. With this option nested groups are not included and the user must be a direct member of the group assigned to the application.
+ Groups assigned to the application will be included in the token. Other groups that the user is a member of will be omitted. With this option, nested groups are not included and the user must be a direct member of the group assigned to the application.
-To change the groups assigned to the application, select the application from the **Enterprise Applications** list and then click **Users and Groups** from the application's left-hand navigation menu.
+ To change the groups assigned to the application, select the application from the **Enterprise Applications** list. Then select **Users and Groups** from the application's left menu.
-See the document [Assign a user or group to an enterprise app](../../active-directory/manage-apps/assign-user-or-group-access-portal.md) for details of managing group assignment to applications.
+ For more information about managing group assignment to applications, see [Assign a user or group to an enterprise app](../../active-directory/manage-apps/assign-user-or-group-access-portal.md).
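If you'd rather script the assignment, here's a sketch with the Microsoft Graph PowerShell SDK; all object IDs are placeholders, and the all-zero app role ID represents the default access role:

```powershell
# Assign a group to the application's service principal. IDs are placeholders.
Connect-MgGraph -Scopes 'AppRoleAssignment.ReadWrite.All'
New-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId '<sp-object-id>' `
    -PrincipalId '<group-object-id>' `
    -ResourceId '<sp-object-id>' `
    -AppRoleId '00000000-0000-0000-0000-000000000000'
```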
-### Advanced options
+### Set advanced options
#### Customize group claim name
-The way group claims are emitted can be modified by the settings under Advanced options
+You can modify the way that group claims are emitted by using the settings under **Advanced options**.
-Customize the name of the group claim: If selected, a different claim type can be specified for group claims. Enter the claim type in the Name field and the optional namespace for the claim in the namespace field.
+If you select **Customize the name of the group claim**, you can specify a different claim type for group claims. Enter the claim type in the **Name** box and the optional namespace for the claim in the **Namespace** box.
-![Screenshot that shows the "Advanced options" section with "Customize the name of the group claim" selected and "Name" and "Namespace" values entered.](media/how-to-connect-fed-group-claims/group-claims-ui-5.png)
+![Screenshot that shows advanced options, with the option of customizing the name of the group claim selected and the name and namespace values entered.](media/how-to-connect-fed-group-claims/group-claims-ui-5.png)
-Some applications require the group membership information to appear in the 'role' claim. You can optionally emit the user's groups as roles by checking the 'Emit groups a role claims' box.
+Some applications require the group membership information to appear in the role claim. You can optionally emit the user's groups as roles by selecting the **Emit groups as role claims** checkbox.
-![Screenshot that shows the "Advanced options" section with "Customize the name of the group claim" and "Emit groups as role claims" selected.](media/how-to-connect-fed-group-claims/group-claims-ui-6.png)
+![Screenshot that shows advanced options, with the checkboxes selected for customizing the name of the group claim and emitting groups as role claims.](media/how-to-connect-fed-group-claims/group-claims-ui-6.png)
> [!NOTE]
-> If the option to emit group data as roles is used, only groups will appear in the role claim. Any Application Roles the user is assigned will not appear in the role claim.
+> If you use the option to emit group data as roles, only groups will appear in the role claim. Any application roles that the user is assigned to won't appear in the role claim.
#### Group filtering
-Group filtering allows for fine grain control of the list of groups that is included as part of the group claim. When a filter is configured, only groups that match the filter will be included in the groups claim sent to that application. The filter will be applied against all groups regardless of the group hierarchy.
-Filters can be configured to be applied to the group's display name or SAMAccountName and the following filtering operations are supported:
+Group filtering allows for fine-grained control of the list of groups that's included as part of the group claim. When a filter is configured, only groups that match the filter will be included in the groups claim that's sent to that application. The filter will be applied against all groups regardless of the group hierarchy.
+You can configure filters to be applied to the group's display name or `SAMAccountName` attribute. The following filtering operations are supported:
- ![Screenshot of filtering](media/how-to-connect-fed-group-claims/group-filter-1.png)
+ - **Prefix**: Matches the start of the selected attribute.
+ - **Suffix**: Matches the end of the selected attribute.
+ - **Contains**: Matches any location in the selected attribute.
+
+ ![Screenshot that shows filtering options.](media/how-to-connect-fed-group-claims/group-filter-1.png)
#### Group transformation
-Some applications might require the groups in a different format to how they are represented in Azure AD. To support this, you can apply a transformation to each group that will be emitted in the group claim. This is achieved by allowing the configuration of a regex and a replacement value on custom group claims.
+Some applications might require the groups in a different format from how they're represented in Azure AD. To support this requirement, you can apply a transformation to each group that will be emitted in the group claim. You achieve it by allowing the configuration of a regular expression (regex) and a replacement value on custom group claims.
- ![Screenshot of group transformation](media/how-to-connect-fed-group-claims/group-transform-1.png)\
+![Screenshot of group transformation, with regex information added.](media/how-to-connect-fed-group-claims/group-transform-1.png)\
+- **Regex pattern**: Use a regex to parse text strings according to the pattern that you set in this box. If the regex pattern that you outline evaluates to `true`, the regex replacement pattern will run.
+- **Regex replacement pattern**: Outline in regex notation how you want to replace your string if the regex pattern that you outlined evaluates to `true`. Use capture groups to match subexpressions in this replacement regex.
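For example, a transform with hypothetical values could strip a naming prefix before the group name is emitted. A PowerShell sketch of the same replace semantics:

```powershell
# Illustrates the pattern/replacement semantics with made-up group names.
# When the pattern matches, the replacement runs and $1 holds the capture group.
'AAD_Finance' -replace '^AAD_(\w+)$', '$1'   # emitted as "Finance"
# When the pattern doesn't match, PowerShell returns the string unchanged,
# but the claim transform instead omits the group (see the note below).
'Sales' -replace '^AAD_(\w+)$', '$1'
```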
-For more information about regex replace and capture groups, see [The Regular Expression Engine - The Captured Group](/dotnet/standard/base-types/the-regular-expression-object-model?WT.mc_id=Portal-fx#the-captured-group).
+For more information about regex replace and capture groups, see [The Regular Expression Object Model: The Captured Group](/dotnet/standard/base-types/the-regular-expression-object-model?WT.mc_id=Portal-fx#the-captured-group).
>[!NOTE]
-> As per the Azure AD documentation a restricted claim cannot be modified using policy. The data source cannot be changed, and no transformation is applied when generating these claims. The "Groups" claim is still a restricted claim, hence you need to customize the groups by changing the name, if you select a restricted name for the name of your custom group claim then the claim will be ignored at runtime.
+> As described in the Azure AD documentation, you can't modify a restricted claim by using a policy. The data source can't be changed, and no transformation is applied when you're generating these claims. The group claim is still a restricted claim, so you need to customize the groups by changing the name. If you select a restricted name for the name of your custom group claim, the claim will be ignored at runtime.
>
->The regex transform feature can also be used as a filter since any groups that don't match the regex pattern will not be emitted in the resulting claim.
+> You can also use the regex transform feature as a filter, because any groups that don't match the regex pattern will not be emitted in the resulting claim.
-### Edit the group claims configuration
+### Edit the group claim configuration
-Once a group claim configuration has been added to the User Attributes & Claims configuration, the option to add a group claim will be greyed out. To change the group claim configuration click on the group claim in the **Additional claims** list.
+After you add a group claim configuration to the **User Attributes & Claims** configuration, the option to add a group claim will be unavailable. To change the group claim configuration, select the group claim in the **Additional claims** list.
-![claims UI](media/how-to-connect-fed-group-claims/group-claims-ui-7.png)
+![Screenshot of the area for user attributes and claims, with the name of a group claim highlighted.](media/how-to-connect-fed-group-claims/group-claims-ui-7.png)
-## Configure the Azure AD Application Registration for group attributes
+## Configure the Azure AD application registration for group attributes
-Group claims can also be configured in the [Optional Claims](../../active-directory/develop/active-directory-optional-claims.md) section of the [Application Manifest](../../active-directory/develop/reference-app-manifest.md).
+You can also configure group claims in the [optional claims](../../active-directory/develop/active-directory-optional-claims.md) section of the [application manifest](../../active-directory/develop/reference-app-manifest.md).
-1. In the portal ->Azure Active Directory -> Application Registrations->Select Application->Manifest
+1. In the portal, select **Azure Active Directory** > **Application Registrations** > **Select Application** > **Manifest**.
-2. Enable group membership claims by changing the groupMembershipClaim
+2. Enable group membership claims by changing `groupMembershipClaims`.
-Valid values are:
+ Valid values are:
-| Selection | Description |
-|-|-|
-| **"All"** | Emits security groups, distribution lists and roles |
-| **"SecurityGroup"** | Emits security groups the user is a member of in the groups claim |
-| **"DirectoryRole"** | If the user is assigned directory roles, they are emitted as a 'wids' claim (groups claim won't be emitted) |
-| **"ApplicationGroup"** | Emits only the groups that are explicitly assigned to the application and the user is a member of |
-| **"None"** | No Groups are returned.(Its not case-sensitive so none works as well and it can be set directly in the application manifest.) |
+ | Selection | Description |
+ |-|-|
+ | `All` | Emits security groups, distribution lists, and roles. |
+ | `SecurityGroup` | Emits security groups that the user is a member of in the group claim. |
+ | `DirectoryRole` | If the user is assigned directory roles, they're emitted as a `wids` claim. (A group claim won't be emitted.) |
+ | `ApplicationGroup` | Emits only the groups that are explicitly assigned to the application and that the user is a member of. |
+ | `None` | No groups are returned. (It's not case-sensitive, so `none` also works. It can be set directly in the application manifest.) |
For example:

```json
"groupMembershipClaims": "SecurityGroup"
```
- By default Group ObjectIDs will be emitted in the group claim value. To modify the claim value to contain on premises group attributes, or to change the claim type to role, use OptionalClaims configuration as follows:
+ By default, group `ObjectID` attributes will be emitted in the group claim value. To modify the claim value to contain on-premises group attributes, or to change the claim type to a role, use the `optionalClaims` configuration described in the next step.
-3. Set group name configuration optional claims.
+3. Set optional claims for group name configuration.
- If you want the groups in the token to contain the on premises AD group attributes, specify which token type optional claim should be applied to in the optional claims section. Multiple token types can be listed:
+ If you want the groups in the token to contain the on-premises Active Directory group attributes, specify which token-type optional claim should be applied in the `optionalClaims` section. You can list multiple token types:
- - idToken for the OIDC ID token
- - accessToken for the OAuth/OIDC access token
- - Saml2Token for SAML tokens.
+ - `idToken` for the OIDC ID token
+ - `accessToken` for the OAuth/OIDC access token
+ - `Saml2Token` for SAML tokens
> [!NOTE]
- > The Saml2Token type applies to both SAML1.1 and SAML2.0 format tokens
+ > The `Saml2Token` type applies to tokens in both SAML1.1 and SAML2.0 format.
- For each relevant token type, modify the groups claim to use the OptionalClaims section in the manifest. The OptionalClaims schema is as follows:
+ For each relevant token type, modify the group claim to use the `optionalClaims` section in the manifest. The `optionalClaims` schema is as follows:
```json
{
    "name": "groups",
    "source": null,
    "essential": false,
    "additionalProperties": []
}
```
- | Optional Claims Schema | Value |
+ | Optional claims schema | Value |
|-|-|
- | **name:** | Must be "groups" |
- | **source:** | Not used. Omit or specify null |
- | **essential:** | Not used. Omit or specify false |
- | **additionalProperties:** | List of additional properties. Valid options are "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name", "emit_as_roles" |
+ | `name` | Must be `"groups"`. |
+ | `source` | Not used. Omit or specify `null`. |
+ | `essential` | Not used. Omit or specify `false`. |
+ | `additionalProperties` | List of additional properties. Valid options are `"sam_account_name"`, `"dns_domain_and_sam_account_name"`, `"netbios_domain_and_sam_account_name"`, and `"emit_as_roles"`. |
- In additionalProperties only one of "sam_account_name", "dns_domain_and_sam_account_name", "netbios_domain_and_sam_account_name" are required. If more than one is present, the first is used and any others ignored.
+ In `additionalProperties`, only one of `"sam_account_name"`, `"dns_domain_and_sam_account_name"`, or `"netbios_domain_and_sam_account_name"` is required. If more than one is present, the first is used and any others are ignored.
- Some applications require group information about the user in the role claim. To change the claim type to from a group claim to a role claim, add "emit_as_roles" to additional properties. The group values will be emitted in the role claim.
+ Some applications require group information about the user in the role claim. To change the claim type from a group claim to a role claim, add `"emit_as_roles"` to additional properties. The group values will be emitted in the role claim.
> [!NOTE]
- > If "emit_as_roles" is used any Application Roles configured that the user is assigned will not appear in the role claim
+ > If you use `"emit_as_roles"`, any configured application roles that the user is assigned to will not appear in the role claim.
### Examples
-Emit groups as group names in OAuth access tokens in dnsDomainName\SAMAccountName format
+Emit groups as group names in OAuth access tokens in `DNSDomainName\sAMAccountName` format:
```json "optionalClaims": {
Emit groups as group names in OAuth access tokens in dnsDomainName\SAMAccountNam
} ```
-To emit group names to be returned in netbiosDomain\samAccountName format as the roles claim in SAML and OIDC ID Tokens:
+Emit group names to be returned in `NetbiosDomain\sAMAccountName` format as the role claim in SAML and OIDC ID tokens:
```json "optionalClaims": {
To emit group names to be returned in netbiosDomain\samAccountName format as the
## Next steps

-- [Add authorization using groups & groups claims to an ASP.NET Core web app (Code sample)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md)
+- [Add authorization using groups & group claims to an ASP.NET Core web app (code sample)](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/blob/master/5-WebApp-AuthZ/5-2-Groups/README.md)
- [Assign a user or group to an enterprise app](../../active-directory/manage-apps/assign-user-or-group-access-portal.md)
- [Configure role claims](../../active-directory/develop/active-directory-enterprise-app-role-management.md)
active-directory Howto Identity Protection Remediate Unblock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/identity-protection/howto-identity-protection-remediate-unblock.md
Previously updated : 01/24/2022 Last updated : 02/17/2022
# Remediate risks and unblock users
-After completing your [investigation](howto-identity-protection-investigate-risk.md), you need to take action to remediate the risk or unblock users. Organizations can enable automated remediation using their [risk policies](howto-identity-protection-configure-risk-policies.md). Organizations should try to close all risk detections that they are presented in a time period your organization is comfortable with. Microsoft recommends closing events quickly, because time matters when working with risk.
+After completing your [investigation](howto-identity-protection-investigate-risk.md), you need to take action to remediate the risk or unblock users. Organizations can enable automated remediation using their [risk policies](howto-identity-protection-configure-risk-policies.md). Organizations should try to close all risk detections that they're presented with, in a time period your organization is comfortable with. Microsoft recommends closing events quickly, because time matters when working with risk.
## Remediation
Administrators have the following options to remediate:
1. If the account is confirmed compromised:
   1. Select the event or user in the **Risky sign-ins** or **Risky users** reports and choose "Confirm compromised".
- 1. If a risk policy or a Conditional Access policy was not triggered at part of the risk detection, and the risk was not [self-remediated](#self-remediation-with-risk-policy), then:
+ 1. If a risk policy or a Conditional Access policy wasn't triggered as part of the risk detection, and the risk wasn't [self-remediated](#self-remediation-with-risk-policy), then:
      1. [Request a password reset](#manual-password-reset).
      1. Block the user if you suspect the attacker can reset the password or do multi-factor authentication for the user.
      1. Revoke refresh tokens (a scripted approach is sketched after this list).
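A minimal sketch of the token-revocation step, assuming the Microsoft Graph PowerShell SDK (the UPN is a placeholder):

```powershell
# Revoke the user's sign-in sessions, which invalidates issued refresh tokens.
Connect-MgGraph -Scopes 'User.ReadWrite.All'
Revoke-MgUserSignInSession -UserId 'lstokes@contoso.com'
```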
Some detections may not raise risk to the level where a user self-remediation wo
### Manual password reset
-If requiring a password reset using a user risk policy is not an option, administrators can close all risk detections for a user with a manual password reset.
+If requiring a password reset using a user risk policy isn't an option, administrators can close all risk detections for a user with a manual password reset.
Administrators are given two options when resetting a password for their users:

- **Generate a temporary password** - By generating a temporary password, you can immediately bring an identity back into a safe state. This method requires contacting the affected users because they need to know what the temporary password is. Because the password is temporary, the user is prompted to change the password to something new during the next sign-in.

-- **Require the user to reset password** - Requiring the users to reset passwords enables self-recovery without contacting help desk or an administrator. This method only applies to users that are registered for Azure AD MFA and SSPR. For users that have not been registered, this option is not available.
+- **Require the user to reset password** - Requiring the users to reset passwords enables self-recovery without contacting help desk or an administrator. This method only applies to users that are registered for Azure AD MFA and SSPR. For users that haven't been registered, this option isn't available.
### Dismiss user risk
-If a password reset is not an option for you, because for example the user has been deleted, you can choose to dismiss user risk detections.
+If a password reset isn't an option for you, you can choose to dismiss user risk detections.
-When you click **Dismiss user risk**, all events are closed and the affected user is no longer at risk. However, because this method does not have an impact on the existing password, it does not bring the related identity back into a safe state.
+When you select **Dismiss user risk**, all events are closed and the affected user is no longer at risk. However, because this method doesn't have an impact on the existing password, it doesn't bring the related identity back into a safe state.
### Close individual risk detections manually
-You can close individual risk detections manually. By closing risk detections manually, you can lower the user risk level. Typically, risk detections are closed manually in response to a related investigation. For example, when talking to a user reveals that an active risk detection is not required anymore.
+You can close individual risk detections manually. By closing risk detections manually, you can lower the user risk level. Typically, risk detections are closed manually in response to a related investigation. For example, when talking to a user reveals that an active risk detection isn't required anymore.
When closing risk detections manually, you can choose to take any of the following actions to change the status of a risk detection:
When closing risk detections manually, you can choose to take any of the followi
- Confirm sign-in safe
- Confirm sign-in compromised
+#### Deleted users
+
+It isn't possible for administrators to dismiss risk for users who have been deleted from the directory. To remove deleted users, open a Microsoft support case.
+
## Unblocking users

An administrator may choose to block a sign-in based on their risk policy or investigations. A block may occur based on either sign-in or user risk.
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
editor: Last updated 01/20/2022
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
The following Azure services support managed identities for Azure resources:
| Azure VM image builder | [Configure Azure Image Builder Service permissions using Azure CLI](../../virtual-machines/linux/image-builder-permissions-cli.md#using-managed-identity-for-azure-storage-access)|
| Azure Virtual Machine Scale Sets | [Configure managed identities on virtual machine scale set - Azure CLI](qs-configure-cli-windows-vmss.md) |
| Azure Virtual Machines | [Secure and use policies on virtual machines in Azure](../../virtual-machines/windows/security-policy.md#managed-identities-for-azure-resources) |
+| Azure Web PubSub Service | [Managed identities for Azure Web PubSub Service](../../azure-web-pubsub/howto-use-managed-identity.md) |
## Next steps

-- [Managed identities overview](Overview.md)
+- [Managed identities overview](Overview.md)
active-directory Active And Thriving Perth Airport Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/active-and-thriving-perth-airport-tutorial.md
- Title: 'Tutorial: Azure AD SSO integration with Active and Thriving - Perth Airport'
-description: Learn how to configure single sign-on between Azure Active Directory and Active and Thriving - Perth Airport.
-------- Previously updated : 12/20/2021----
-# Tutorial: Azure AD SSO integration with Active and Thriving - Perth Airport
-
-In this tutorial, you'll learn how to integrate Active and Thriving - Perth Airport with Azure Active Directory (Azure AD). When you integrate Active and Thriving - Perth Airport with Azure AD, you can:
-
-* Control in Azure AD who has access to Active and Thriving - Perth Airport.
-* Enable your users to be automatically signed-in to Active and Thriving - Perth Airport with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Active and Thriving - Perth Airport single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
-
-* Active and Thriving - Perth Airport supports **SP and IDP** initiated SSO.
-
-> [!NOTE]
-> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-
-## Add Active and Thriving - Perth Airport from the gallery
-
-To configure the integration of Active and Thriving - Perth Airport into Azure AD, you need to add Active and Thriving - Perth Airport from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Active and Thriving - Perth Airport** in the search box.
-1. Select **Active and Thriving - Perth Airport** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
-## Configure and test Azure AD SSO for Active and Thriving - Perth Airport
-
-Configure and test Azure AD SSO with Active and Thriving - Perth Airport using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Active and Thriving - Perth Airport.
-
-To configure and test Azure AD SSO with Active and Thriving - Perth Airport, perform the following steps:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Active and Thriving - Perth Airport SSO](#configure-active-and-thrivingperth-airport-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Active and Thriving - Perth Airport test user](#create-active-and-thrivingperth-airport-test-user)** - to have a counterpart of B.Simon in Active and Thriving - Perth Airport that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the Azure portal, on the **Active and Thriving - Perth Airport** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. On the **Basic SAML Configuration** section, the user does not have to perform any step as the app is already pre-integrated with Azure.
-
-1. On the **Basic SAML Configuration** section perform the following steps, if you wish to configure the application in SP initiated mode:
-
- a. In the **Identifier** text box, type the URL:
- `https://sso-perthairport.activeandthriving.com.au/saml2/aad/metadata`
-
- b. In the **Reply URL** text box, type the URL:
- `https://sso-perthairport.activeandthriving.com.au/saml2/aad/login`
-
- c. In the **Sign-on URL** text box, type the URL:
- `https://sso-perthairport.activeandthriving.com.au/saml2/aad/login`
-
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
-
- ![The Certificate download link](common/certificatebase64.png)
-
-1. On the **Set up Active and Thriving - Perth Airport** section, copy the appropriate URL(s) based on your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Active and Thriving - Perth Airport.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Active and Thriving - Perth Airport**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Active and Thriving - Perth Airport SSO
-
-To configure single sign-on on **Active and Thriving - Perth Airport** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Active and Thriving - Perth Airport support team](mailto:hello@activeandthriving.com.au). They set this setting to have the SAML SSO connection set properly on both sides.
-
-### Create Active and Thriving - Perth Airport test user
-
-In this section, you create a user called Britta Simon in Active and Thriving - Perth Airport. Work with [Active and Thriving - Perth Airport support team](mailto:hello@activeandthriving.com.au) to add the users in the Active and Thriving - Perth Airport platform. Users must be created and activated before you use single sign-on.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration with following options.
-
-#### SP initiated:
-
-* Click on **Test this application** in Azure portal. This will redirect to Active and Thriving - Perth Airport Sign on URL where you can initiate the login flow.
-
-* Go to Active and Thriving - Perth Airport Sign-on URL directly and initiate the login flow from there.
-
-#### IDP initiated:
-
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Active and Thriving - Perth Airport for which you set up the SSO.
-
-You can also use Microsoft My Apps to test the application in any mode. When you click the Active and Thriving - Perth Airport tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Active and Thriving - Perth Airport for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-
-## Next steps
-
-Once you configure Active and Thriving - Perth Airport you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Fence Mobile Remotemanager Sso Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fence-mobile-remotemanager-sso-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with FENCE-Mobile RemoteManager SSO'
+description: Learn how to configure single sign-on between Azure Active Directory and FENCE-Mobile RemoteManager SSO.
++++++++ Last updated : 02/01/2022++++
+# Tutorial: Azure AD SSO integration with FENCE-Mobile RemoteManager SSO
+
+In this tutorial, you'll learn how to integrate FENCE-Mobile RemoteManager SSO with Azure Active Directory (Azure AD). When you integrate FENCE-Mobile RemoteManager SSO with Azure AD, you can:
+
+* Control in Azure AD who has access to FENCE-Mobile RemoteManager SSO.
+* Enable your users to be automatically signed-in to FENCE-Mobile RemoteManager SSO with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* FENCE-Mobile RemoteManager SSO single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* FENCE-Mobile RemoteManager SSO supports **SP** initiated SSO.
+
+## Adding FENCE-Mobile RemoteManager SSO from the gallery
+
+To configure the integration of FENCE-Mobile RemoteManager SSO into Azure AD, you need to add FENCE-Mobile RemoteManager SSO from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **FENCE-Mobile RemoteManager SSO** in the search box.
+1. Select **FENCE-Mobile RemoteManager SSO** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
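+
+If you script this step instead of using the portal, a gallery app can also be instantiated through Microsoft Graph. The following Azure CLI sketch is illustrative only: `<applicationTemplateId>` is a placeholder you must look up first (for example, by filtering `applicationTemplates` on the app's display name).
+
+```azurecli-interactive
+# Sketch: instantiate the gallery app from its application template.
+# <applicationTemplateId> is a placeholder; find it with a filtered GET on
+# https://graph.microsoft.com/v1.0/applicationTemplates before running this.
+az rest --method post \
+    --uri "https://graph.microsoft.com/v1.0/applicationTemplates/<applicationTemplateId>/instantiate" \
+    --headers "Content-Type=application/json" \
+    --body '{"displayName": "FENCE-Mobile RemoteManager SSO"}'
+```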
++
+## Configure and test Azure AD SSO for FENCE-Mobile RemoteManager SSO
+
+Configure and test Azure AD SSO with FENCE-Mobile RemoteManager SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in FENCE-Mobile RemoteManager SSO.
+
+To configure and test Azure AD SSO with FENCE-Mobile RemoteManager SSO, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure FENCE-Mobile RemoteManager SSO](#configure-fence-mobile-remotemanager-sso)** - to configure the single sign-on settings on the application side.
+    1. **[Create FENCE-Mobile RemoteManager SSO test user](#create-fence-mobile-remotemanager-sso-test-user)** - to have a counterpart of B.Simon in FENCE-Mobile RemoteManager SSO that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **FENCE-Mobile RemoteManager SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, enter the values for the following fields:
+
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ `api://www.fence-mrm.bsc.fujitsu.com/<TID>/<GUID>`
+
+ b. In the **Reply URL** text box, type a URL using one of the following patterns:
+
+ | Reply URL |
+ | |
+ | `https://www.fence-mrm.bsc.fujitsu.com/SConsole/SSOServlet?tid=<TID>` |
+ | `https://ctl.fence-mrm.bsc.fujitsu.com/SControl/SSOServlet?tid=<TID>` |
+ | `https://www.fence-mrm.bsc.fujitsu.com/IMDMLogin/SSOServlet?tid=<TID>` |
+ |
+
+
+ c. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://www.fence-mrm.bsc.fujitsu.com/SConsole/login.jsf?tid=<TID>`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [FENCE-Mobile RemoteManager SSO Client support team](mailto:fj-FMRM_Dev_Azure@dl.jp.fujitsu.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/metadataxml.png)
+
+1. On the **Set up FENCE-Mobile RemoteManager SSO** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
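+
+If you prefer to script the test setup, the same user can be created with the Azure CLI. This is a minimal sketch, assuming `contoso.com` is a verified domain in your tenant; choose your own password.
+
+```azurecli-interactive
+# Sketch: create the B.Simon test user from the CLI.
+# Replace contoso.com with a verified domain in your tenant.
+az ad user create \
+    --display-name "B.Simon" \
+    --user-principal-name "B.Simon@contoso.com" \
+    --password "<a-strong-password>"
+```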
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to FENCE-Mobile RemoteManager SSO.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **FENCE-Mobile RemoteManager SSO**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
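+
+The assignment can also be scripted through Microsoft Graph. The sketch below is an assumption-laden illustration: `<userObjectId>`, `<servicePrincipalObjectId>`, and `<appRoleId>` are placeholders you must look up in your own tenant (for example, with `az ad user show` and `az ad sp show`).
+
+```azurecli-interactive
+# Sketch: assign the test user to the app's service principal.
+# All three IDs are placeholders from your own tenant.
+az rest --method post \
+    --uri "https://graph.microsoft.com/v1.0/servicePrincipals/<servicePrincipalObjectId>/appRoleAssignedTo" \
+    --headers "Content-Type=application/json" \
+    --body '{"principalId": "<userObjectId>", "resourceId": "<servicePrincipalObjectId>", "appRoleId": "<appRoleId>"}'
+```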
+
+## Configure FENCE-Mobile RemoteManager SSO
+
+To configure single sign-on on the **FENCE-Mobile RemoteManager SSO** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [FENCE-Mobile RemoteManager SSO support team](mailto:fj-FMRM_Dev_Azure@dl.jp.fujitsu.com). They use these values to configure the SAML SSO connection properly on both sides.
+
+### Create FENCE-Mobile RemoteManager SSO test user
+
+In this section, you create a user called Britta Simon in FENCE-Mobile RemoteManager SSO. Work with [FENCE-Mobile RemoteManager SSO support team](mailto:fj-FMRM_Dev_Azure@dl.jp.fujitsu.com) to add the users in the FENCE-Mobile RemoteManager SSO platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click on **Test this application** in the Azure portal. This will redirect to the FENCE-Mobile RemoteManager SSO Sign-on URL where you can initiate the login flow.
+
+* Go to FENCE-Mobile RemoteManager SSO Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the FENCE-Mobile RemoteManager SSO tile in My Apps, you'll be redirected to the FENCE-Mobile RemoteManager SSO Sign-on URL. For more information about My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure FENCE-Mobile RemoteManager SSO, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
++
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
+
+ Title: 'Tutorial: Configure Gong for automatic user provisioning with Azure Active Directory | Microsoft Docs'
+description: Learn how to automatically provision and de-provision user accounts from Azure AD to Gong.
+
+documentationcenter: ''
+
+writer: Thwimmer
++
+ms.assetid: 6c8285d3-4f35-4325-9adb-d1a44668a03a
+++
+ms.devlang: na
+ Last updated : 02/09/2022
+# Tutorial: Configure Gong for automatic user provisioning
+
+This tutorial describes the steps you need to perform in both Gong and Azure Active Directory (Azure AD) to configure automatic user provisioning. When configured, Azure AD automatically provisions and de-provisions users and groups to [Gong](https://www.gong.io/) using the Azure AD Provisioning service. For important details on what this service does, how it works, and frequently asked questions, see [Automate user provisioning and deprovisioning to SaaS applications with Azure Active Directory](../app-provisioning/user-provisioning.md).
++
+## Capabilities supported
+> [!div class="checklist"]
+> * Create users in Gong.
+> * Remove users in Gong when they do not require access anymore.
+> * Keep user attributes synchronized between Azure AD and Gong.
+
+## Prerequisites
+
+The scenario outlined in this tutorial assumes that you already have the following prerequisites:
+
+* [An Azure AD tenant](../develop/quickstart-create-new-tenant.md).
+* A user account in Azure AD with [permission](../roles/permissions-reference.md) to configure provisioning (for example, Application Administrator, Cloud Application administrator, Application Owner, or Global Administrator).
+* A user account in Gong with **Technical Administrator** privileges.
++
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Gong](../app-provisioning/customize-application-attributes.md).
+
+## Step 2. Configure Gong to support provisioning with Azure AD
+
+1. Go to your company settings page > **PEOPLE** area > **Team Member Provisioning**.
+1. Select **Azure AD** as the provisioning source.
+1. To assign data capture, workspace, and permission settings to Azure AD groups:
+ 1. In the **Assign settings** area, click **ADD ASSIGNMENT**.
+ 1. Give the assignment a name.
+ 1. In the **Azure AD groups** area, select the Azure AD group you want to define the settings for.
+ 1. In the **Data capture** area, select the home workspace and the data capture settings for people that belong to this group.
+ 1. In the **Workspaces and permissions** area, set the permissions profile for other workspaces in your org.
+ 1. In the **Update settings** area, define how settings can be managed for this assignment:
+ * Select **Manual editing** to manage data capture and permission settings for users in this assignment in Gong.
+ After you create the assignment: if you make changes to group settings in Azure AD, they will not be pushed to Gong. However, you can edit the group settings manually in Gong.
+ * (Recommended) Select **Automatic updates** to give Azure AD governance over data capture and permission settings in Gong.
+ Define data capture and permission settings in Gong only when creating an assignment. Thereafter, other changes will only be applied to users in groups with this assignment when pushed from Azure AD.
+ 1. Click **ADD ASSIGNMENT**.
+1. For orgs that don't have assignments (step 3), select the permission profile to apply to automatically provisioned users.
+
+ [More information on permission profiles](https://help.gong.io/hc/en-us/articles/360028568911#UUID-34baef91-0aba-1295-4032-ff49102cb182).
+
+1. In the **Manager's provisioning settings** area:
+ 1. Select **Notify direct managers with recorded teams when a new team member is imported** to keep your team managers in the loop.
+ 1. Select **Managers can turn data capture on or off for their team** to give your team managers some autonomy.
+
+ > [!TIP]
+ > For more information, see "What are Manager's provisioning settings" in the [FAQ for team member provisioning](https://help.gong.io/hc/en-us/articles/360042352912#UUID-0d3df83a-44d1-11b9-ddf5-3ec649c2f594) article.
+1. Click **Update** to save your settings.
+
+> [!NOTE]
+> If you later change the provisioning source from Azure AD and then want to return to Azure AD provisioning, you will need to re-authenticate to Azure AD.
+
+## Step 3. Add Gong from the Azure AD application gallery
+
+Add Gong from the Azure AD application gallery to start managing provisioning to Gong. If you have previously set up Gong for SSO, you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+
+## Step 4. Define who will be in scope for provisioning
+
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and/or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+* When assigning users and groups to Gong, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
+
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
++
+## Step 5. Configure automatic user provisioning to Gong
+
+This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Gong based on user and/or group assignments in Azure AD.
+
+### To configure automatic user provisioning for Gong in Azure AD:
+
+1. Sign in to the [Azure portal](https://portal.azure.com). Select **Enterprise Applications**, then select **All applications**.
+
+ ![Enterprise applications blade](common/enterprise-applications.png)
+
+1. In the applications list, select **Gong**.
+
+ ![The Gong link in the Applications list](common/all-applications.png)
+
+1. Select the **Provisioning** tab.
+
+ ![Provisioning tab](common/provisioning.png)
+
+1. Set the **Provisioning Mode** to **Automatic**.
+
+ ![Provisioning tab automatic](common/provisioning-automatic.png)
+
+1. In the **Admin Credentials** section, click **Authorize**, and make sure that you enter your Gong account's Admin credentials. Click **Test Connection** to ensure Azure AD can connect to Gong. If the connection fails, ensure your Gong account has Admin permissions and try again.
+
+ ![Token](media/gong-provisioning-tutorial/gong-authorize.png)
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+
+ ![Notification Email](common/provisioning-notification-email.png)
+
+1. Select **Save**.
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Gong**.
+
+1. Review the user attributes that are synchronized from Azure AD to Gong in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Gong for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Gong API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+
|Attribute|Type|Supported for filtering|Required by Gong|
|---|---|---|---|
|userName|String|&check;|&check;|
|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager|String|||
|active|Boolean|||
|title|String|||
|emails[type eq "work"].value|String|||
|name.givenName|String||&check;|
|name.familyName|String||&check;|
|phoneNumbers[type eq "work"].value|String|||
|externalId|String|||
|locale|String|||
|timezone|String|||
|urn:ietf:params:scim:schemas:extension:Gong:2.0:User:stateOrProvince|String|||
|urn:ietf:params:scim:schemas:extension:Gong:2.0:User:country|String|||
+
+1. To configure scoping filters, refer to the instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+
+1. To enable the Azure AD provisioning service for Gong, change the **Provisioning Status** to **On** in the **Settings** section.
+
+ ![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
+
+1. Define the users and/or groups that you would like to provision to Gong by choosing the desired values in **Scope** in the **Settings** section.
+
+ ![Provisioning Scope](common/provisioning-scope.png)
+
+1. When you are ready to provision, click **Save**.
+
+ ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)
+
+This operation starts the initial synchronization cycle of all users and groups defined in **Scope** in the **Settings** section. The initial cycle takes longer to perform than subsequent cycles, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running.
+
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
++
+## More resources
+
+* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
+* [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+
+## Next steps
+
+* [Learn how to review logs and get reports on provisioning activity](../app-provisioning/check-status-user-account-provisioning.md)
active-directory Hiretual Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/hiretual-tutorial.md
Title: 'Tutorial: Azure AD SSO integration with Hiretual-SSO'
-description: Learn how to configure single sign-on between Azure Active Directory and Hiretual-SSO.
+ Title: 'Tutorial: Azure AD SSO integration with hireEZ-SSO'
+description: Learn how to configure single sign-on between Azure Active Directory and hireEZ-SSO.
Previously updated : 09/29/2021 Last updated : 02/14/2022
-# Tutorial: Azure AD SSO integration with Hiretual-SSO
+# Tutorial: Azure AD SSO integration with hireEZ-SSO
-In this tutorial, you'll learn how to integrate Hiretual-SSO with Azure Active Directory (Azure AD). When you integrate Hiretual-SSO with Azure AD, you can:
+In this tutorial, you'll learn how to integrate hireEZ-SSO with Azure Active Directory (Azure AD). When you integrate hireEZ-SSO with Azure AD, you can:
-* Control in Azure AD who has access to Hiretual-SSO.
-* Enable your users to be automatically signed-in to Hiretual-SSO with their Azure AD accounts.
+* Control in Azure AD who has access to hireEZ-SSO.
+* Enable your users to be automatically signed-in to hireEZ-SSO with their Azure AD accounts.
* Manage your accounts in one central location - the Azure portal. ## Prerequisites
In this tutorial, you'll learn how to integrate Hiretual-SSO with Azure Active D
To get started, you need the following items: * An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Hiretual-SSO single sign-on (SSO) enabled subscription.
+* hireEZ-SSO single sign-on (SSO) enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Hiretual-SSO supports **SP and IDP** initiated SSO.
+* hireEZ-SSO supports **SP and IDP** initiated SSO.
> [!NOTE] > Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
-## Add Hiretual-SSO from the gallery
+## Add hireEZ-SSO from the gallery
-To configure the integration of Hiretual-SSO into Azure AD, you need to add Hiretual-SSO from the gallery to your list of managed SaaS apps.
+To configure the integration of hireEZ-SSO into Azure AD, you need to add hireEZ-SSO from the gallery to your list of managed SaaS apps.
1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account. 1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Hiretual-SSO** in the search box.
-1. Select **Hiretual-SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+1. In the **Add from the gallery** section, type **hireEZ-SSO** in the search box.
+1. Select **hireEZ-SSO** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD SSO for Hiretual-SSO
+## Configure and test Azure AD SSO for hireEZ-SSO
-Configure and test Azure AD SSO with Hiretual-SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Hiretual-SSO.
+Configure and test Azure AD SSO with hireEZ-SSO using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in hireEZ-SSO.
-To configure and test Azure AD SSO with Hiretual-SSO, perform the following steps:
+To configure and test Azure AD SSO with hireEZ-SSO, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature. 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon. 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Hiretual-SSO](#configure-hiretual-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Hiretual-SSO test user](#create-hiretual-sso-test-user)** - to have a counterpart of B.Simon in Hiretual-SSO that is linked to the Azure AD representation of user.
+1. **[Configure hireEZ-SSO](#configure-hireez-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create hireEZ-SSO test user](#create-hireez-sso-test-user)** - to have a counterpart of B.Simon in hireEZ-SSO that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the Azure portal, on the **Hiretual-SSO** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **hireEZ-SSO** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**. 1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following step:
- a. In the **Reply URL** text box, type a URL using the following pattern:
- `https://api.hiretual.com/v1/users/saml/login/<teamId>`
-
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type the URL:
- `https://app.hiretual.com/`
+ a. In the **Identifier** text box, type the URL:
+ `https://app.hireez.com/`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://api.hireez.com/v1/users/saml/login/<teamId>`
> [!NOTE]
- > This value is not real. Update this value with the actual Reply URL. Contact [Hiretual-SSO Client support team](mailto:support@hiretual.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > The Reply URL value is not real. Update this value with the actual Reply URL. Contact [hireEZ-SSO Client support team](mailto:support@hiretual.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-1. Hiretual-SSO application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+1. Click the **Properties** tab on the left menu bar, copy the value of **User access URL**, and save it on your computer.
- ![image](common/default-attributes.png)
-
-1. In addition to above, Hiretual-SSO application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirements.
-
- | Name | Source Attribute |
- | - | |
- | firstName | user.givenname |
- | title | user.jobtitle |
- | lastName | user.surname |
+ ![Screenshot shows the User access URL.](./media/hiretual-tutorial/access-url.png " SSO Configuration")
1. On the **Set up single sign-on with SAML** page, In the **SAML Signing Certificate** section, click copy button to copy **App Federation Metadata Url** and save it on your computer.
In this section, you'll create a test user in the Azure portal called B.Simon.
### Assign the Azure AD test user
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Hiretual-SSO.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to hireEZ-SSO.
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Hiretual-SSO**.
+1. In the applications list, select **hireEZ-SSO**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**. 1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog. 1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen. 1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected. 1. In the **Add Assignment** dialog, click the **Assign** button.
-## Configure Hiretual-SSO
+## Configure hireEZ-SSO
-1. Log in to your Hiretual-SSO company site as an administrator.
+1. Log in to your hireEZ-SSO company site as an administrator.
1. Go to **Security & Compliance** > **Single Sign-On**.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. Copy **X509 Certificate** from the metadata file and paste the content in the **Certificate** textbox.
- 1. Fill the required attributes manually according to your requirement and click **Save**.
- 1. Enable **Single Sign-On Connection Status** button. 1. Test your Single Sign-On integration first and then enable **Admin SP-Initiated Single Sign-On** button. > [!NOTE]
- > If your Single Sign-On configuration has any errors or you have trouble to login to Hiretual-SSO Web App/Extension after you connected Admin SP-Initiated Single Sign-On, please contact [Hiretual-SSO support team](mailto:support@hiretual.com).
+ > If your Single Sign-On configuration has any errors or you have trouble logging in to the hireEZ-SSO Web App/Extension after you connect Admin SP-Initiated Single Sign-On, please contact the [hireEZ-SSO support team](mailto:support@hiretual.com).
-### Create Hiretual-SSO test user
+### Create hireEZ-SSO test user
-In this section, you create a user called Britta Simon in Hiretual-SSO. Work with [Hiretual-SSO support team](mailto:support@hiretual.com) to add the users in the Hiretual-SSO platform. Users must be created and activated before you use single sign-on.
+In this section, you create a user called Britta Simon in hireEZ-SSO. Work with [hireEZ-SSO support team](mailto:support@hiretual.com) to add the users in the hireEZ-SSO platform. Users must be created and activated before you use single sign-on.
## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
#### SP initiated:
-* Click on **Test this application** in Azure portal. This will redirect to Hiretual-SSO Sign on URL where you can initiate the login flow.
+* Click on **Test this application** in Azure portal. This will redirect to hireEZ-SSO Sign on URL where you can initiate the login flow.
-* Go to Hiretual-SSO Sign-on URL directly and initiate the login flow from there.
+* Go to hireEZ-SSO Sign-on URL directly and initiate the login flow from there.
#### IDP initiated:
-* Click on **Test this application** in Azure portal and you should be automatically signed in to the Hiretual-SSO for which you set up the SSO.
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the hireEZ-SSO for which you set up the SSO.
-You can also use Microsoft My Apps to test the application in any mode. When you click the Hiretual-SSO tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the Hiretual-SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
+You can also use Microsoft My Apps to test the application in any mode. When you click the hireEZ-SSO tile in the My Apps, if configured in SP mode you would be redirected to the application sign on page for initiating the login flow and if configured in IDP mode, you should be automatically signed in to the hireEZ-SSO for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
## Next steps
-Once you configure Hiretual-SSO you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+Once you configure hireEZ-SSO, you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Prodpad Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prodpad-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 3. Add ProdPad from the Azure AD application gallery
-Add ProdPad from the Azure AD application gallery to start managing provisioning to ProdPad. If you have previously setup ProdPad for SSO, you can use the same application. However it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
+Add ProdPad from the Azure AD application gallery to start managing provisioning to ProdPad. If you have previously set up [ProdPad for SSO](prodpad-tutorial.md), you can use the same application. However, it is recommended that you create a separate app when testing out the integration initially. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
## Step 4. Define who will be in scope for provisioning
Once you've configured provisioning, use the following resources to monitor your
* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Troubleshooting Tips
+Reach out to the [ProdPad support team](mailto:help@prodpad.com) in case of any issues.
+ ## More resources * [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
active-directory Thrive Lxp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/thrive-lxp-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
## Step 2. Configure Thrive LXP to support provisioning with Azure AD
-Reach out to your Thrive LXP contact to generate your **Tenant url** and **Secret Token**. These values will be entered in the Tenant URL and Secret Token field in the Provisioning tab of your Thrive LXP application in the Azure portal.
+Reach out to your [Thrive LXP Client support team](mailto:support@thrivelearning.com) to generate your **Tenant URL** and **Secret Token**. These values will be entered in the Tenant URL and Secret Token fields in the Provisioning tab of your Thrive LXP application in the Azure portal.
## Step 3. Add Thrive LXP from the Azure AD application gallery
active-directory Zendesk Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zendesk-provisioning-tutorial.md
For information on how to read the Azure AD provisioning logs, see [Reporting on
* When a custom role is assigned to a user or group, the Azure AD automatic user provisioning service also assigns the default role **Agent**. Only Agents can be assigned a custom role. For more information, see the [Zendesk API documentation](https://developer.zendesk.com/rest_api/docs/support/users#json-format-for-agent-or-admin-requests).
+* Import of all roles will fail if any of the custom roles is named either "agent" or "end-user". To avoid this, ensure that none of the custom roles being imported uses these display names.
+ ## Additional resources * [Manage user account provisioning for enterprise apps](../app-provisioning/configure-automatic-user-provisioning-portal.md)
aks Reduce Latency Ppg https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/reduce-latency-ppg.md
Proximity placement groups are a node pool concept and associated with each indi
The following example uses the [az group create][az-group-create] command to create a resource group named *myResourceGroup* in the *centralus* region. An AKS cluster named *myAKSCluster* is then created using the [az aks create][az-aks-create] command.
-Accelerated networking greatly improves networking performance of virtual machines. Ideally, use proximity placement groups in conjunction with accelerated networking. By default, AKS uses accelerated networking on [supported virtual machine instances](../virtual-network/create-vm-accelerated-networking-cli.md?toc=/azure/virtual-machines/linux/toc.json#limitations-and-constraints), which include most Azure virtual machine with two or more vCPUs.
+Accelerated networking greatly improves networking performance of virtual machines. Ideally, use proximity placement groups in conjunction with accelerated networking. By default, AKS uses accelerated networking on [supported virtual machine instances](../virtual-network/accelerated-networking-overview.md?toc=/azure/virtual-machines/linux/toc.json#limitations-and-constraints), which include most Azure virtual machines with two or more vCPUs.
Create a new AKS cluster with a proximity placement group associated to the first system node pool:
az group delete --name myResourceGroup --yes --no-wait
[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add [az-aks-create]: /cli/azure/aks#az_aks_create [az-group-create]: /cli/azure/group#az_group_create
-[az-group-delete]: /cli/azure/group#az_group_delete
+[az-group-delete]: /cli/azure/group#az_group_delete
aks Use Azure Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-dedicated-hosts.md
+
+ Title: Use Azure Dedicated Hosts in Azure Kubernetes Service (AKS) (Preview)
+description: Learn how to create an Azure Dedicated Hosts Group and associate it with Azure Kubernetes Service (AKS)
+ Last updated : 02/11/2021
+# Add Azure Dedicated Host to an Azure Kubernetes Service (AKS) cluster
+
+Azure Dedicated Host is a service that provides physical servers - able to host one or more virtual machines - dedicated to one Azure subscription. Dedicated hosts are the same physical servers used in our data centers, provided as a resource. You can provision dedicated hosts within a region, availability zone, and fault domain. Then, you can place VMs directly into your provisioned hosts, in whatever configuration best meets your needs.
+
+Using Azure Dedicated Hosts for nodes with your AKS cluster has the following benefits:
+
+* Hardware isolation at the physical server level. No other VMs will be placed on your hosts. Dedicated hosts are deployed in the same data centers and share the same network and underlying storage infrastructure as other, non-isolated hosts.
+* Control over maintenance events initiated by the Azure platform. While the majority of maintenance events have little to no impact on your virtual machines, there are some sensitive workloads where each second of pause can have an impact. With dedicated hosts, you can opt in to a maintenance window to reduce the impact to your service.
++
+## Before you begin
+
+* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free).
+* [Azure CLI installed](/cli/azure/install-azure-cli).
+
+### Install the `aks-preview` Azure CLI extension
+
+You also need the *aks-preview* Azure CLI extension version 0.5.54 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+
+```azurecli-interactive
+# Install the aks-preview extension
+az extension add --name aks-preview
+# Update the extension to make sure you have the latest version installed
+az extension update --name aks-preview
+```
+
+### Register the `DedicatedHostGroupPreview` preview feature
+
+To use the feature, you must also enable the `DedicatedHostGroupPreview` feature flag on your subscription.
+
+Register the `DedicatedHostGroupPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "DedicatedHostGroupPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:
+
+```azurecli-interactive
+az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/DedicatedHostGroupPreview')].{Name:name,State:properties.state}"
+```
+
+When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
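+
+Optionally, you can confirm that the resource provider finished registering before you continue; this check is a convenience, not a required step:
+
+```azurecli-interactive
+# Show the provider's registration state; expect "Registered".
+az provider show --namespace Microsoft.ContainerService --query registrationState -o tsv
+```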
+
+## Limitations
+
+The following limitations apply when you integrate Azure Dedicated Host with Azure Kubernetes Service:
+* An existing agent pool can't be converted from non-Dedicated Host to Dedicated Host, or from Dedicated Host to non-Dedicated Host.
+* Updating an agent pool from host group A to host group B isn't supported.
+
+## Add a Dedicated Host Group to an AKS cluster
+
+A host group is a resource that represents a collection of dedicated hosts. You create a host group in a region and an availability zone, and add hosts to it. When planning for high availability, you can use one or both of the following options with your dedicated hosts:
+
+* Span across multiple availability zones. In this case, you are required to have a host group in each of the zones you wish to use.
+* Span across multiple fault domains, which are mapped to physical racks.
+
+In either case, you need to provide the fault domain count for your host group. If you do not want to span fault domains in your group, use a fault domain count of 1.
+
+You can also decide to use both availability zones and fault domains.
+
+Not all host SKUs are available in all regions and availability zones. You can list host availability and any offer restrictions before you start provisioning dedicated hosts:
+
+```azurecli-interactive
+az vm list-skus -l eastus2 -r hostGroups/hosts -o table
+```
+
+## Add Dedicated Hosts to the Host Group
+
+Now create a dedicated host in the host group. In addition to a name for the host, you are required to provide the SKU for the host. Host SKU captures the supported VM series as well as the hardware generation for your dedicated host.
+
+For more information about the host SKUs and pricing, see [Azure Dedicated Host pricing](https://azure.microsoft.com/pricing/details/virtual-machines/dedicated-host/).
+
+Use `az vm host create` to create a host (a sketch follows the host group example below). If you set a fault domain count for your host group, you will be asked to specify the fault domain for your host.
+
+In this example, we will use [az vm host group create](/cli/azure/vm/host/group#az_vm_host_group_create?view=azure-cli-latest&preserve-view=true) to create a host group using both availability zones and fault domains.
+
+```azurecli-interactive
+az vm host group create \
+--name myHostGroup \
+-g myDHResourceGroup \
+-z 1 \
+--platform-fault-domain-count 2
+```
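+
+With the host group in place, a hedged sketch of the host creation step could look like the following; the SKU name and fault domain are assumptions you should replace with values from the `az vm list-skus` output above.
+
+```azurecli-interactive
+# Sketch: create one dedicated host in the host group.
+# DSv3-Type1 and the fault domain are example values only.
+az vm host create \
+    --host-group myHostGroup \
+    --name myHost \
+    --sku DSv3-Type1 \
+    --platform-fault-domain 0 \
+    -g myDHResourceGroup
+```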
+
+## Create an AKS cluster using the Host Group
+Create an AKS cluster, and add the Host Group you just configured.
+
+```azurecli-interactive
+az aks create -g MyResourceGroup -n MyManagedCluster \
+    --location westus2 \
+    --kubernetes-version 1.20.13 \
+    --nodepool-name agentpool1 \
+    --node-count 1 \
+    --host-group-id <id> \
+    --node-vm-size Standard_D2s_v3 \
+    --enable-managed-identity \
+    --assign-identity <id>
+```
+
+## Add a Dedicated Host Nodepool to an existing AKS cluster
+Add a Host Group to an already existing AKS cluster.
+
+```azurecli-interactive
+az aks nodepool add --cluster-name MyManagedCluster --name agentpool3 \
+    --resource-group MyResourceGroup \
+    --node-count 1 \
+    --host-group-id <id> \
+    --node-vm-size Standard_D2s_v3
+```
+
+## Remove a Dedicated Host Nodepool from an AKS cluster
+
+```azurecli-interactive
+az aks nodepool delete --cluster-name MyManagedCluster --name agentpool3 --resource-group MyResourceGroup
+```
+
+## Next steps
+
+In this article, you learned how to create an AKS cluster with a Dedicated host, and to add a dedicated host to an existing cluster. For more information about Dedicated Hosts, see [dedicated-hosts](../virtual-machines/dedicated-hosts.md).
+
+<!-- LINKS - External -->
+[kubernetes-services]: https://kubernetes.io/docs/concepts/services-networking/service/
+
+<!-- LINKS - Internal -->
+[aks-support-policies]: support-policies.md
+[aks-faq]: faq.md
+[azure-cli-install]: /cli/azure/install-azure-cli
+[dedicated-hosts]: /azure/virtual-machines/dedicated-hosts.md
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
Title: 'Tutorial: Host RESTful API with CORS' description: Learn how Azure App Service helps you host your RESTful APIs with CORS support. App Service can host both front-end web apps and back end APIs. ms.assetid: a820e400-06af-4852-8627-12b3db4a8e70
+ms.devlang: csharp
Last updated 04/28/2020
app-service Configure Language Dotnet Framework https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnet-framework.md
Title: Configure ASP.NET apps description: Learn how to configure an ASP.NET app in Azure App Service. This article shows the most common configuration tasks.
+ms.devlang: csharp
Last updated 06/02/2020
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
Title: Configure ASP.NET Core apps description: Learn how to configure a ASP.NET Core app in the native Windows instances, or in a pre-built Linux container, in Azure App Service. This article shows the most common configuration tasks.
+ms.devlang: csharp
Last updated 06/02/2020
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
Title: Configure PHP apps description: Learn how to configure a PHP app in the native Windows instances, or in a pre-built PHP container, in Azure App Service. This article shows the most common configuration tasks.
+ms.devlang: php
Last updated 06/02/2020
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
description: Learn how to configure the Python container in which web apps are r
Last updated 06/11/2021
+ms.devlang: python
app-service Configure Language Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-ruby.md
description: Learn how to configure a pre-built Ruby container for your app. Thi
Last updated 06/18/2020
+ms.devlang: ruby
app-service Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/networking.md
Title: App Service Environment networking
description: App Service Environment networking details Previously updated : 11/15/2021 Last updated : 02/17/2022
You can set route tables without restriction. You can tunnel all of the outbound
You can put your web application firewall devices, such as Azure Application Gateway, in front of inbound traffic. Doing so exposes specific apps on that App Service Environment. If you want to customize the outbound address of your applications on an App Service Environment, you can add a NAT gateway to your subnet.
+## Private endpoint
+
+In order to enable Private Endpoints for apps hosted in your App Service Environment, you must first enable this feature at the App Service Environment level.
+
+You can activate it through the Azure portal: in the App Service Environment configuration pane, turn **on** the setting `Allow new private endpoints`.
+Alternatively, the following CLI command can enable it:
+
+```azurecli-interactive
+az appservice ase update --name myasename --allow-new-private-endpoint-connections true
+```
+
+For more information about Private Endpoint and Web App, see [Azure Web App Private Endpoint][privateendpoint].
+
+
## DNS

The following sections describe the DNS considerations and configuration that apply inbound to and outbound from your App Service Environment.
While App Service Environment does deploy into your virtual network, there are a
## More resources

-- [Environment variables and app settings reference](../reference-app-settings.md)
+- [Environment variables and app settings reference](../reference-app-settings.md)
+
+<!--Links-->
+[privateendpoint]: ../networking/private-endpoint.md
+
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md
description: Connect privately to a Web App using Azure Private Endpoint
ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Previously updated : 12/07/2021 Last updated : 02/17/2022
For more information, see [Service Endpoints][serviceendpoint].
A Private Endpoint is a special network interface (NIC) for your Azure Web App in a Subnet in your Virtual Network (VNet). When you create a Private Endpoint for your Web App, it provides secure connectivity between clients on your private network and your Web App. The Private Endpoint is assigned an IP Address from the IP address range of your VNet.
-The connection between the Private Endpoint and the Web App uses a secure [Private Link][privatelink]. Private Endpoint is only used for incoming flows to your Web App. Outgoing flows will not use this Private Endpoint. You can inject outgoing flows to your network in a different subnet through the [VNet integration feature][vnetintegrationfeature].
+The connection between the Private Endpoint and the Web App uses a secure [Private Link][privatelink]. Private Endpoint is only used for incoming flows to your Web App. Outgoing flows won't use this Private Endpoint. You can inject outgoing flows to your network in a different subnet through the [VNet integration feature][vnetintegrationfeature].
-Each slot of an app is configured separately. You can plug up to 100 Private Endpoints per slot. You cannot share a Private Endpoint between slots.
+Each slot of an app is configured separately. You can plug up to 100 Private Endpoints per slot. You can't share a Private Endpoint between slots.
The Subnet where you plug the Private Endpoint can have other resources in it, you don't need a dedicated empty Subnet. You can also deploy the Private Endpoint in a different region than the Web App.
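
For reference, a Private Endpoint for a Web App can also be created with the Azure CLI. This is a minimal sketch, assuming resource names of your own; for Web Apps the sub-resource (`--group-id`) is `sites`.

```azurecli-interactive
# Sketch: create a Private Endpoint for an existing Web App.
# All resource names are placeholders from your own subscription.
webappid=$(az webapp show --name mywebapp --resource-group myResourceGroup --query id --output tsv)
az network private-endpoint create \
    --name myPrivateEndpoint \
    --resource-group myResourceGroup \
    --vnet-name myVNet \
    --subnet mySubnet \
    --private-connection-resource-id $webappid \
    --group-id sites \
    --connection-name myConnection
```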
From a security perspective:
- When you enable Private Endpoints to your Web App, you disable all public access. - You can enable multiple Private Endpoints in others VNets and Subnets, including VNets in other regions. - The IP address of the Private Endpoint NIC must be dynamic, but will remain the same until you delete the Private Endpoint.-- The NIC of the Private Endpoint cannot have an NSG associated.-- The Subnet that hosts the Private Endpoint can have an NSG associated, but you must disable the network policies enforcement for the Private Endpoint: see [Disable network policies for private endpoints][disablesecuritype]. As a result, you cannot filter by any NSG the access to your Private Endpoint.-- When you enable Private Endpoint to your Web App, the [access restrictions][accessrestrictions] configuration of the Web App is not evaluated.
+- The NIC of the Private Endpoint can't have an NSG associated.
+- The Subnet that hosts the Private Endpoint can have an NSG associated, but you must disable the network policies enforcement for the Private Endpoint: see [Disable network policies for private endpoints][disablesecuritype]. As a result, you can't filter by any NSG the access to your Private Endpoint.
+- When you enable Private Endpoint to your Web App, the [access restrictions][accessrestrictions] configuration of the Web App isn't evaluated.
- You can eliminate the data exfiltration risk from the VNet by removing all NSG rules where destination is tag Internet or Azure services. When you deploy a Private Endpoint for a Web App, you can only reach this specific Web App through the Private Endpoint. If you have another Web App, you must deploy another dedicated Private Endpoint for this other Web App.
-In the Web HTTP logs of your Web App, you will find the client source IP. This feature is implemented using the TCP Proxy protocol, forwarding the client IP property up to the Web App. For more information, see [Getting connection Information using TCP Proxy v2][tcpproxy].
+In the Web HTTP logs of your Web App, you'll find the client source IP. This feature is implemented using the TCP Proxy protocol, forwarding the client IP property up to the Web App. For more information, see [Getting connection Information using TCP Proxy v2][tcpproxy].
> [!div class="mx-imgBorder"]
For example, the name resolution will be:
|mywebapp.azurewebsites.net|CNAME|mywebapp.privatelink.azurewebsites.net| |mywebapp.privatelink.azurewebsites.net|CNAME|clustername.azurewebsites.windows.net| |clustername.azurewebsites.windows.net|CNAME|cloudservicename.cloudapp.net|
-|cloudservicename.cloudapp.net|A|40.122.110.154|<--This public IP is not your Private Endpoint, you will receive a 403 error|
+|cloudservicename.cloudapp.net|A|40.122.110.154|<--This public IP isn't your Private Endpoint, you'll receive a 403 error|
You must set up a private DNS server or an Azure DNS private zone; for tests, you can modify the hosts entry of your test machine. The DNS zone that you need to create is: **privatelink.azurewebsites.net**. Register an A record for your Web App with the Private Endpoint IP.
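
As a hedged sketch of that DNS setup with an Azure DNS private zone (resource names and the IP are placeholders), you might run:

```azurecli-interactive
# Sketch: create the private DNS zone used by App Service Private Endpoints.
az network private-dns zone create \
    --resource-group myResourceGroup \
    --name privatelink.azurewebsites.net

# Link the zone to the VNet where your clients resolve names.
az network private-dns link vnet create \
    --resource-group myResourceGroup \
    --zone-name privatelink.azurewebsites.net \
    --name myDNSLink \
    --virtual-network myVNet \
    --registration-enabled false

# Register the Web App's A record with the Private Endpoint IP (example IP).
az network private-dns record-set a add-record \
    --resource-group myResourceGroup \
    --zone-name privatelink.azurewebsites.net \
    --record-set-name mywebapp \
    --ipv4-address 10.0.0.5
```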
For the Kudu console, or Kudu REST API (deployment with Azure DevOps self-hosted
| mywebapp.scm.privatelink.azurewebsites.net | A | PrivateEndpointIP |
-## ASEv3 special consideration
+## App Service Environment v3 special consideration
-In order to enable Private Endpoint for Web App hosted in an IsolatedV2 plan (ASEv3), you have to enable the Private Endpoint support at the ASE level.
-You can activate the feature by the Azure portal in the ASE configuration pane, or through the following CLI:
+In order to enable Private Endpoint for apps hosted in an IsolatedV2 plan (App Service Environment v3), you have to enable the Private Endpoint support at the App Service Environment level.
+You can activate the feature by the Azure portal in the App Service Environment configuration pane, or through the following CLI:
```azurecli-interactive az appservice ase update --name myasename --allow-new-private-endpoint-connections true ```
+## Specific requirements
+
+If the Virtual Network is in a different subscription than the app, you must ensure that the subscription with the Virtual Network is registered for the Microsoft.Web resource provider. You can explicitly register the provider [by following this documentation][registerprovider], but it will also automatically be registered when creating the first web app in a subscription.
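+
+For example, you can register the provider explicitly from the CLI, run against the subscription that contains the Virtual Network:
+
+```azurecli-interactive
+# Register the Microsoft.Web resource provider in the VNet's subscription.
+az provider register --namespace Microsoft.Web
+```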
## Pricing
For pricing details, see [Azure Private Link pricing][pricing].
## Limitations
-* When you use Azure Function in Elastic Premium Plan with Private Endpoint, to run or execute the function in Azure Web portal, you must have direct network access or you will receive an HTTP 403 error. In other words, your browser must be able to reach the Private Endpoint to execute the function from the Azure Web portal.
+* When you use Azure Function in Elastic Premium Plan with Private Endpoint, to run or execute the function in Azure Web portal, you must have direct network access or you'll receive an HTTP 403 error. In other words, your browser must be able to reach the Private Endpoint to execute the function from the Azure Web portal.
* You can connect up to 100 Private Endpoints to a particular Web App. * Remote Debugging functionality is not available when Private Endpoint is enabled for the Web App. The recommendation is to deploy the code to a slot and remote debug it there.
-* FTP access is provided through the inbound public IP address. Private Endpoint does not support FTP access to the Web App.
-* IP-Based SSL is not supported with Private Endpoints.
+* FTP access is provided through the inbound public IP address. Private Endpoint doesn't support FTP access to the Web App.
+* IP-Based SSL isn't supported with Private Endpoints.
-We are improving Private Link feature and Private Endpoint regularly, check [this article][pllimitations] for up-to-date information about limitations.
+We're improving the Private Link feature and Private Endpoint regularly; check [this article][pllimitations] for up-to-date information about limitations.
## Next steps
We are improving Private Link feature and Private Endpoint regularly, check [thi
[howtoguide5]: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.web/webapp-privateendpoint-vnet-injection
[howtoguide6]: ../scripts/terraform-secure-backend-frontend.md
[TiP]: ../deploy-staging-slots.md#route-traffic
+[registerprovider]: ../../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
description: Deploy your first PHP Hello World to Azure App Service in minutes.
ms.assetid: 6feac128-c728-4491-8b79-962da9a40788 Last updated 05/02/2021
+ms.devlang: php
zone_pivot_groups: app-service-platform-windows-linux
app-service Quickstart Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-ruby.md
keywords: azure app service, linux, oss, ruby, rails
ms.assetid: 6d00c73c-13cb-446f-8926-923db4101afa Last updated 04/27/2021
+ms.devlang: ruby
app-service Scenario Secure App Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scenario-secure-app-access-storage.md
Last updated 11/02/2021
+ms.devlang: csharp, javascript
#Customer intent: As an application developer, I want to learn how to access Azure Storage for an app by using managed identities.
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
Title: 'Tutorial: Authenticate users E2E' description: Learn how to use App Service authentication and authorization to secure your App Service apps end-to-end, including access to remote APIs. keywords: app service, azure app service, authN, authZ, secure, security, multi-tiered, azure active directory, azure ad
+ms.devlang: csharp
Last updated 09/23/2021
app-service Tutorial Connect Msi Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault.md
Title: 'Tutorial: Connect to Azure services securely with Key Vault' description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively.
+ms.devlang: csharp
Last updated 10/26/2021
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
Title: 'Tutorial: Access data with managed identity' description: Learn how to make database connectivity more secure by using a managed identity, and also how to apply it to other Azure services.
+ms.devlang: csharp
Last updated 01/27/2022
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
Last updated 02/04/2022
+ms.devlang: csharp
app-service Tutorial Java Spring Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-spring-cosmosdb.md
Title: 'Tutorial: Linux Java app with MongoDB'
description: Learn how to get a data-driven Linux Java app working in Azure App Service, with connection to a MongoDB running in Azure (Cosmos DB).
+ms.devlang: java
Last updated 12/10/2018
app-service Tutorial Nodejs Mongodb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-nodejs-mongodb-app.md
Last updated 01/31/2022 ms.role: developer
+ms.devlang: javascript
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
Title: 'Tutorial: PHP app with MySQL'
description: Learn how to get a PHP app working in Azure, with connection to a MySQL database in Azure. Laravel is used in the tutorial. ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73
+ms.devlang: php
Last updated 06/15/2020
app-service Tutorial Python Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-python-postgresql-app.md
Title: 'Tutorial: Deploy a Python Django app with Postgres' description: Create a Python web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the Django framework and the app is hosted on Azure App Service on Linux.
+ms.devlang: python
Last updated 11/30/2021
app-service Tutorial Ruby Postgres App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-ruby-postgres-app.md
Title: 'Tutorial: Linux Ruby app with Postgres' description: Learn how to get a Linux Ruby app working in Azure App Service, with connection to a PostgreSQL database in Azure. Rails is used in the tutorial.
+ms.devlang: ruby
Last updated 06/18/2020
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-pr
* Train a custom model: > [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-v3-form-recognizer-studio.md#custom-models)
+ > [How to train a model](how-to-guides/build-custom-model-v3.md)
+
+* Learn more about custom template models:
+
+ > [!div class="nextstepaction"]
+ > [Custom template models](concept-custom-template.md)
* View the REST API:
applied-ai-services Concept Custom Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-template.md
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-01-30-pr
## Next steps
-* Train a custom template model:
+* Train a custom model:
> [!div class="nextstepaction"]
- > [Form Recognizer quickstart](quickstarts/try-sdk-rest-api.md)
+ > [How to train a model](how-to-guides/build-custom-model-v3.md)
* Learn more about custom neural models:
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
To create a custom model, you label a dataset of documents with the values you w
## Custom model types
-Custom models can be one of two types, [**custom template**](concept-custom-template.md ) or [**custom neural**](concept-custom-neural.md) models. The labeling and training process for both models is identical, but the models differ as follows:
+Custom models can be one of two types: [**custom template**](concept-custom-template.md) (custom form) or [**custom neural**](concept-custom-neural.md) (custom document) models. The labeling and training process for both models is identical, but the models differ as follows:
### Custom template model
- The custom template model relies on a consistent visual template to extract the labeled data. The accuracy of your model is affected by variances in the visual structure of your documents. Questionnaires or application forms are examples of consistent visual templates.Your training set will consist of structured documents where the formatting and layout are static and constant from one document instance to the next. Custom template models support key-value pairs, selection marks, tables, signature fields and regions and can be trained on documents in any of the [supported languages](language-support.md). For more information, *see* [custom template models](concept-custom-template.md ).
+ The custom template or custom form model relies on a consistent visual template to extract the labeled data. The accuracy of your model is affected by variances in the visual structure of your documents. Structured forms such as questionnaires or applications are examples of consistent visual templates. Your training set will consist of structured documents where the formatting and layout are static and constant from one document instance to the next. Custom template models support key-value pairs, selection marks, tables, signature fields, and regions, and can be trained on documents in any of the [supported languages](language-support.md). For more information, *see* [custom template models](concept-custom-template.md).
> [!TIP] >
Custom models can be one of two types, [**custom template**](concept-custom-temp
### Custom neural model
-The custom neural model is a deep learning model type relies on a base model trained on a large collection of labeled documents using key-value pairs. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
+The custom neural (custom document) model is a deep learning model type that relies on a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
## Model features
The following tools are supported by Form Recognizer v3.0:
| Feature | Resources |
|-|-|
-|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/try-v3-csharp-sdk.md)</li><li>[Python SDK](quickstarts/try-v3-python-sdk.md)</li></ul>|
+|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/try-v3-csharp-sdk.md)</li><li>[Python SDK](quickstarts/try-v3-python-sdk.md)</li></ul>|
### Try Form Recognizer
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
Azure Form Recognizer prebuilt models enable you to add intelligent document pro
| [Invoice](#invoice) | Extract key information from English and Spanish invoices. |
| [Receipt](#receipt) | Extract key information from English receipts. |
| [ID document](#id-document) | Extract key information from US driver licenses and international passports. |
+| 🆕[W-2 (preview)](#w-2-preview) | Extract employee, employer, wage information, etc. from US W-2 forms. |
| [Business card](#business-card) | Extract key information from English business cards. |
| [Custom](#custom) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
The ID document model analyzes and extracts key information from U.S. Driver's L
> [!div class="nextstepaction"] > [Learn more: identity document model](concept-id-document.md)
+### W-2 (preview)
++
+The W-2 model analyzes and extracts key information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including both single form and multiple forms (copy A, B, C, D, 1, 2) on one page.
+
+***Sample W-2 document processed using [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***:
++
+> [!div class="nextstepaction"]
+> [Learn more: W-2 model](concept-w2.md)
+
### Business card

:::image type="content" source="media/studio/business-card.png" alt-text="Screenshot: Studio business card icon.":::
The custom model analyzes and extracts data from forms and documents specific to
| Layout | ✓ | || ✓ | ✓ | |
| Invoice | ✓ | ✓ |✓| ✓ | ✓ ||
|Receipt | ✓ | ✓ |✓| | ||
- | ID document | ✓ | ✓ |✓| | ||
+ | ID document | ✓ | ✓ |✓| | ||
+|🆕W-2 | ✓ | ✓ | ✓ | ✓ | ✓ ||
| Business card | ✓ | ✓ | ✓| | ||
| Custom |✓ | ✓ || ✓ | ✓ | ✓ |
The custom model analyzes and extracts data from forms and documents specific to
* [**General document (preview)**](concept-general-document.md) model is a new API that uses a pre-trained model to extract text, tables, structure, key-value pairs, and named entities from forms and documents. * [**Receipt (preview)**](concept-receipt.md) model supports single-page hotel receipt processing. * [**ID document (preview)**](concept-id-document.md) model supports endorsements, restrictions, and vehicle classification extraction from US driver's licenses.
+* [**W-2 (preview)**](concept-w2.md) model extracts employee, employer, wage, and tax information from US W-2 forms.
* [**Custom model API (preview)**](concept-custom.md) supports signature detection for custom forms. ### Version migration
applied-ai-services Concept W2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-w2.md
recommendations: false
-# Form Recognizer W-2 Form prebuilt model | Preview
+# Form Recognizer W-2 model | Preview
-The Form W-2, Wage and Tax Statement, is a US Internal Revenue Service (IRS) tax form completed by employers to report employees' salary, wages, compensation, and taxes withheld. Employers send a W-2 form to each employee on or before January 31 each year and employees use the form to prepare their tax returns.
+The Form W-2, Wage and Tax Statement, is a [US Internal Revenue Service (IRS) tax form](https://www.irs.gov/forms-pubs/about-form-w-2) completed by employers to report employees' salary, wages, compensation, and taxes withheld. Employers send a W-2 form to each employee on or before January 31 each year, and employees use the form to prepare their tax returns. The W-2 is a key document used in employees' federal and state tax filings, as well as in other processes such as mortgage loan applications and Social Security Administration (SSA) reporting.
-A W-2 is a multipart form divided into state and federal sections:
-
-* Copy A is sent to the Social Security Administration.
-* Copy 1 is for the city, state, or locality tax assessment.
-* Copy B is for filing with the employee's federal tax return.
-* Copy C is for the employee's records.
-* Copy 2 is another copy for a city, state, or locality tax assessment.
-* Copy D is for the employer's records.
-
-Each W-2 Form consists of more than 14 boxes, both numbered and lettered, that detail the employee's income from the previous year. The Form Recognizer **prebuilt-tax**, Form W-2 model, combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including both single form and multiple forms (copy A, B, C, D, 1, 2) on one page.
+A W-2 is a multipart form divided into state and federal sections and consists of more than 14 boxes, both numbered and lettered, that detail the employee's income from the previous year. The Form Recognizer W-2 model combines Optical Character Recognition (OCR) with deep learning models to analyze and extract information reported in each box on a W-2 form. The model supports standard and customized forms from 2018 to the present, including both single form and multiple forms ([copy A, B, C, D, 1, 2](https://en.wikipedia.org/wiki/Form_W-2#Filing_requirements)) on one page.
***Sample W-2 form processed using Form Recognizer Studio***
See how data, including employee, employer, wage, and tax information is extract
1. On the [Form Recognizer Studio home page](https://formrecognizer.appliedai.azure.com/studio), select **W-2 form**.
-1. You can analyze the sample invoice or select the **Γ₧ò Add** button to upload your own sample.
+1. You can analyze the sample W-2 document or select the **➕ Add** button to upload your own sample.
1. Select the **Analyze** button:
See how data, including employee, employer, wage, and tax information is extract
|Name| Box | Type | Description | Standardized output|
|:--|:-|:-|:-|:-|
-| Employee.SocialSecurityNumber | a | String | Employee's Social Security N number (SSN). | 123-45-6789 |
+| Employee.SocialSecurityNumber | a | String | Employee's Social Security Number (SSN). | 123-45-6789 |
| Employer.IdNumber | b | String | Employer's ID number (EIN), the business equivalent of a social security number.| 12-1234567 |
| Employer.Name | c | String | Employer's name. | Contoso |
| Employer.Address | c | String | Employer's address (with city). | 123 Example Street Sample City, CA |
applied-ai-services Build Custom Model V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-custom-model-v3.md
+
+ Title: "Train a custom model in the Form Recognizer Studio"
+
+description: Learn how to build, label, and train a custom model in the Form Recognizer Studio.
+++++ Last updated : 02/16/2022+++
+# Build your training data set for a custom model
+
+Form Recognizer models require as few as five training documents to get started. If you have at least five documents, you can train either a [custom template model (custom form)](../concept-custom-template.md) or a [custom neural model (custom document)](../concept-custom-neural.md). The training process is identical for both models, and this document walks you through training either one.
+
+## Custom model input requirements
+
+First, make sure your training data set follows the input requirements for Form Recognizer.
++
+## Training data tips
+
+Follow these tips to further optimize your data set for training:
+
+* If possible, use text-based PDF documents instead of image-based documents. Scanned PDFs are handled as images.
+* For forms with input fields, use examples that have all of the fields completed.
+* Use forms with different values in each field.
+* If your form images are of lower quality, use a larger data set (10-15 images, for example).
+
+## Upload your training data
+
+When you've put together the set of forms or documents that you'll use for training, you'll need to upload it to an Azure blob storage container. If you don't know how to create an Azure storage account with a container, follow the [Azure Storage quickstart for Azure portal](../../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
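+
+For example, a minimal Azure CLI sketch, assuming a storage account and a local folder of training documents (both names are placeholders):
+
```azurecli-interactive
# Create a container for the training documents
az storage container create --account-name <storage-account> --name training-data --auth-mode login

# Upload the local training documents to the container
az storage blob upload-batch --account-name <storage-account> --destination training-data --source ./training-docs --auth-mode login
```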
+
+## Create a project in the Form Recognizer Studio
+
+The Form Recognizer Studio orchestrates all the API calls required to create the files needed to complete your dataset and train your model.
+
+1. Start by navigating to the [Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio). If this is your first time using the Studio, you'll need to [initialize it for use](../quickstarts/try-v3-form-recognizer-studio.md). Follow the [additional prerequisite for custom projects](../quickstarts/try-v3-form-recognizer-studio.md#additional-prerequisites-for-custom-projects) to configure the Studio to access your training dataset.
+
+1. In the Studio, select the **Custom models** tile. On the custom models page, select the **Create a project** button.
+
+ :::image type="content" source="../media/how-to/studio-create-project.png" alt-text="Screenshot: Create a project in the Form Recognizer Studio.":::
+
+ 1. On the create project dialog, provide a name for your project and, optionally, a description, and then select **Continue**.
+
+ 1. On the next step in the workflow, choose or create a Form Recognizer resource, and then select **Continue**.
+
+ > [!IMPORTANT]
+ > Custom neural models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](https://aka.ms/fr-neural#l#supported-regions).
+
+ :::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot: Select the Form Recognizer resource.":::
+
+1. Next, select the storage account where you uploaded the dataset you wish to use to train your custom model. The **Folder path** should be empty if your training documents are in the root of the container. If your documents are in a sub-folder, enter the relative path from the container root in the **Folder path** field. Once your storage account is configured, select **Continue**.
+
+ :::image type="content" source="../media/how-to/studio-select-storage.png" alt-text="Screenshot: Select the storage account.":::
+
+1. Finally, review your project settings and select **Create Project** to create a new project. You should now be in the labeling window and see the files in your dataset listed.
+
+## Label your data
+
+In your project, your first task is to label your dataset with the fields you wish to extract.
+
+You'll see the files you uploaded to storage on the left of your screen, with the first file ready to be labeled.
+
+1. To start labeling your dataset, create your first field by selecting the plus (➕) button on the top-right of the screen to select a field type.
+
+ :::image type="content" source="../media/how-to/studio-create-label.png" alt-text="Screenshot: Create a label.":::
+
+1. Enter a name for the field.
+
+1. To assign a value to the field, choose a word or words in the document and select the field in either the dropdown or the field list on the right navigation bar. You'll see the labeled value below the field name in the list of fields.
+
+1. Repeat this process for all the fields you wish to label for your dataset.
+
+1. Label the remaining documents in your dataset by selecting each document in the document list and selecting the text to be labeled.
+
+You now have all the documents in your dataset labeled. If you look at the storage account, you'll find *.labels.json* and *.ocr.json* files that correspond to each document in your training dataset, and an additional *fields.json* file. This is the training dataset that will be submitted to train the model.
+
+## Train your model
+
+With your dataset labeled, you're now ready to train your model. Select the **Train** button in the upper-right corner.
+
+1. On the train model dialog, provide a unique model ID and, optionally, a description.
+
+1. For the build mode, select the type of model you want to train. Learn more about the [model types and capabilities](../concept-custom.md).
+
+ :::image type="content" source="../media/how-to/studio-train-model.png" alt-text="Screenshot: Train model dialog":::
+
+1. Select **Train** to initiate the training process.
+
+1. Template models train in a few minutes. Neural models can take up to 30 minutes to train.
+
+1. Navigate to the *Models* menu to view the status of the train operation.
+
+## Test the model
+
+Once the model training is complete, you can test your model by selecting the model on the models list page.
+
+1. Select the model, and then select the **Test** button.
+
+1. Select the `+ Add` button to select a file to test the model.
+
+1. With a file selected, choose the **Analyze** button to test the model.
+
+1. The model results are displayed in the main window and the fields extracted are listed in the right navigation bar.
+
+1. Validate your model by evaluating the results for each field.
+
+1. The right navigation bar also has the sample code to invoke your model and the JSON results from the API.
+
+Congratulations, you've trained a custom model in the Form Recognizer Studio! Your model is ready for use with the REST API or the SDK to analyze documents.
+
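+As an illustrative sketch (not the only option), you can call the v3 analyze REST endpoint with the Azure CLI's generic `az rest` command; the resource name, model ID, key, and document URL below are placeholders:
+
```azurecli-interactive
# Analyze a document with your trained custom model.
# The call returns 202 Accepted with an Operation-Location header to poll for results.
az rest --method post \
  --url "https://<your-resource>.cognitiveservices.azure.com/formrecognizer/documentModels/<model-id>:analyze?api-version=2022-01-30-preview" \
  --headers "Ocp-Apim-Subscription-Key=<your-key>" "Content-Type=application/json" \
  --body '{"urlSource": "https://<storage-or-public-url>/sample.pdf"}' \
  --skip-authorization-header
```
+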
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn about custom model types](../concept-custom.md)
+
+> [!div class="nextstepaction"]
+> [Learn about accuracy and confidence with custom models](../concept-accuracy-confidence.md)
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/overview.md
The following features and development options are supported by the Form Recogn
|[🆕 **Read**](concept-read.md)|Extract text lines, words, detected languages, and handwritten style if detected.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/read)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-general-document-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
|[🆕 **General document model**](concept-general-document.md)|Extract text, tables, structure, key-value pairs, and named entities.|<ul ><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-general-document-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#general-document-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#general-document-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#general-document-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#general-document-model)</li></ul> |
|[**Layout model**](concept-layout.md) | Extract text, selection marks, and tables structures, along with their bounding box coordinates, from forms and documents.</br></br> Layout API has been updated to a prebuilt model. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/layout)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md#try-it-layout-model)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#layout-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#layout-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#layout-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#layout-model)</li></ul>|
-|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.</br></br>Custom model API v3.0 supports **signature detection for custom forms**.</li></ul>| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
+|[**Custom model (updated)**](concept-custom.md) | Extraction and analysis of data from forms and documents specific to distinct business data and use cases.<ul><li>Custom model API v3.0 supports **signature detection for custom template (custom form) models**.</br></br></li><li>Custom model API v3.0 offers a new model type **Custom Neural** or custom document to analyze unstructured documents.</li></ul>| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md)</li></ul>|
|[**Invoice model**](concept-invoice.md) | Automated data processing and extraction of key information from sales invoices. | <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=invoice)</li><li>[**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li></ul>|
|[**Receipt model (updated)**](concept-receipt.md) | Automated data processing and extraction of key information from sales receipts.</br></br>Receipt model v3.0 supports processing of **single-page hotel receipts**.| <ul><li>[**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=receipt)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
|[**ID document model (updated)**](concept-id-document.md) |Automated data processing and extraction of key information from US driver's licenses and international passports.</br></br>Prebuilt ID document API supports the **extraction of endorsements, restrictions, and vehicle classifications from US driver's licenses**. |<ul><li> [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=idDocument)</li><li>[**REST API**](quickstarts/try-v3-rest-api.md)</li><li>[**C# SDK**](quickstarts/try-v3-csharp-sdk.md#prebuilt-model)</li><li>[**Python SDK**](quickstarts/try-v3-python-sdk.md#prebuilt-model)</li><li>[**Java SDK**](quickstarts/try-v3-java-sdk.md#prebuilt-model)</li><li>[**JavaScript**](quickstarts/try-v3-javascript-sdk.md#prebuilt-model)</li></ul>|
applied-ai-services Preview Error Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/preview-error-guide.md
When possible, more details are specified in the *inner error* property.
| InvalidRequest | ContentSourceNotAccessible | Content is not accessible: {details} |
| InvalidRequest | ContentSourceTimeout | Timeout while receiving the file from client. |
| InvalidRequest | DocumentModelLimit | Account cannot create more than {maximumModels} models. |
+| InvalidRequest | DocumentModelLimitNeural | Account cannot create more than 10 custom neural models per month. Please contact support to request additional capacity. |
| InvalidRequest | DocumentModelLimitComposed | Account cannot create a model with more than {details} component models. |
| InvalidRequest | InvalidContent | The file is corrupted or format is unsupported. Refer to documentation for the list of supported formats. |
| InvalidRequest | InvalidContentDimensions | The input image dimensions are out of range. Refer to documentation for supported image dimensions. |
When possible, more details are specified in the *inner error* property.
| InvalidRequest | TrainingContentMissing | Training data is missing: {details} |
| InvalidRequest | UnsupportedContent | Content is not supported: {details} |
| NotFound | ModelNotFound | The requested model was not found. It may have been deleted or is still building. |
-| NotFound | OperationNotFound | The requested operation was not found. The identifier may be invalid or the operation may have expired. |
+| NotFound | OperationNotFound | The requested operation was not found. The identifier may be invalid or the operation may have expired. |
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
In this quickstart, you'll use following features to analyze and extract data an
* [**Layout model**](#layout-model)—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
-* [**Prebuilt model**](#prebuilt-model)—Analyze and extract common fields from specific document types using a pre-trained model.
+* [**Prebuilt model**](#prebuilt-model)—Analyze and extract common fields from specific document types using a prebuilt model.
## Prerequisites
This version of the client library defaults to the 2021-09-30-preview version of
:::image type="content" source="../media/quickstarts/select-nuget-package.png" alt-text="Screenshot: select-nuget-package.png":::
- 1. Select the Browse tab and type Azure.AI.FormRecognizer.
+ 1. Select the Browse tab and type Azure.AI.FormRecognizer.
:::image type="content" source="../media/quickstarts/azure-nuget-package.png" alt-text="Screenshot: select-form-recognizer-package.png":::
- 1. Choose the **Include prerelease** checkbox and select version **4.0.0-beta.*** from the dropdown menu.
-
- 1. Select **Install**.
-
- :::image type="content" source="../media/quickstarts/prerelease-nuget-package.png" alt-text="{alt-text}":::
-
+ 1. Choose the **Include prerelease** checkbox, select version **4.0.0-beta.3** from the dropdown menu, and install the package in your project.
<!-- -->

## Build your application
applied-ai-services Try V3 Javascript Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-javascript-sdk.md
In this quickstart you'll use following features to analyze and extract data and
> [!TIP] >
- > * You can create a new file using Powershell.
- > * Open a Powershell window in your project directory by holding down the Shift key and right-clicking the folder.
+ > * You can create a new file using PowerShell.
+ > * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
> * Type the following command **New-Item index.js**.

1. Open the `index.js` file in Visual Studio Code or your favorite IDE. First, we'll add the necessary libraries. Copy the following and paste it at the top of the file:
applied-ai-services Try V3 Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-rest-api.md
To learn more about Form Recognizer features and development options, visit our
The REST API supports the following models and capabilities:
-* 🆕General document—Analyze and extract text, tables, structure, key-value pairs, and named entities.|
-* 🆕 W-2 Tax Forms—Analyze and extract fields from W-2 tax documents, using a pre-trained W-2 model.
+* 🆕General document—Analyze and extract text, tables, structure, key-value pairs, and named entities.
+* 🆕 W-2—Analyze and extract fields from W-2 tax documents, using a pre-trained W-2 model.
* Layout—Analyze and extract tables, lines, words, and selection marks like radio buttons and check boxes in forms documents, without the need to train a model.
* Custom—Analyze and extract form fields and other content from your custom forms, using models you trained with your own form types.
* Invoices—Analyze and extract common fields from invoices, using a pre-trained invoice model.
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
Form Recognizer v3.0 preview release introduces several new features and capabilities and enhances existing ones:
+* [🆕 **Custom neural model**](concept-custom-neural.md) or custom document model is a new custom model to extract text and selection marks from structured forms, semi-structured, and **unstructured documents**.
+* [🆕 **W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 forms for tax reporting and income verification scenarios.
+* [🆕 **Read**](concept-read.md) API extracts printed text lines, words, text locations, detected languages, and handwritten text, if detected.
+* [**General document**](concept-general-document.md) pre-trained model is now updated to support selection marks in addition to text, tables, structure, key-value pairs, and named entities from forms and documents.
+* The [**Invoice prebuilt model**](language-support.md#invoice-model) expands support to Spanish invoices.
* [**Form Recognizer Studio**](https://formrecognizer.appliedai.azure.com) adds new demos for Read, W2, Hotel receipt samples, and support for training the new custom neural models.
-* [🆕 **W-2 prebuilt model**](concept-w2.md) is a new prebuilt model to extract fields from W-2 tax documents.
-* [🆕 **Read**](concept-read.md) API extracts text lines, words, their locations, detected languages, and handwritten style if detected.
-* [🆕 **Custom neural model**](concept-custom-neural.md) is a new custom model to extract text and selection marks from structured forms and **unstructured documents**.
* [**Language Expansion**](language-support.md) Form Recognizer Read, Layout, and Custom Form add support for 42 new languages including Arabic, Hindi, and other languages using Arabic and Devanagari scripts to expand the coverage to 164 languages. Handwritten support for the same features expands to Japanese and Korean in addition to English, Chinese Simplified, French, German, Italian, Portuguese, and Spanish languages.
-* [**Invoice API**](language-support.md#invoice-model) Invoice API expands support to Spanish invoices.
-* [**General document**](concept-general-document.md) pre-trained model now updated to support selection marks in addition to API text, tables, structure, key-value pairs, and named entities from forms and documents.
-Get stared with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/try-v3-python-sdk.md), or [.NET](quickstarts/try-v3-csharp-sdk.md) SDK for the v3.0 preview API.
+Get started with the new [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument), [Python](quickstarts/try-v3-python-sdk.md), or [.NET](quickstarts/try-v3-csharp-sdk.md) SDK for the v3.0 preview API.
#### Form Recognizer model data extraction
applied-ai-services Tutorial Ios Picture Immersive Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/tutorial-ios-picture-immersive-reader.md
You need some values from the Azure AD authentication configuration prerequisite
TenantId => Azure subscription TenantId ClientId => Azure AD ApplicationId ClientSecret => Azure AD Application Service Principal password
-Subdomain => Immersive Reader resource subdomain (resource 'Name' if the resource was created in the Azure portal, or 'CustomSubDomain' option if the resource was created with Azure CLI Powershell. Check the Azure portal for the subdomain on the Endpoint in the resource Overview page, for example, 'https://[SUBDOMAIN].cognitiveservices.azure.com/')
+Subdomain => Immersive Reader resource subdomain (resource 'Name' if the resource was created in the Azure portal, or 'CustomSubDomain' option if the resource was created with Azure CLI PowerShell. Check the Azure portal for the subdomain on the Endpoint in the resource Overview page, for example, 'https://[SUBDOMAIN].cognitiveservices.azure.com/')
````

In the main project folder, which contains the ViewController.swift file, create a Swift class file called Constants.swift. Replace the class with the following code, adding in your values where applicable. Keep this file as a local file that only exists on your machine and be sure not to commit this file into source control, as it contains secrets that should not be made public. It is recommended that you do not keep secrets in your app. Instead, we recommend using a backend service to obtain the token, where the secrets can be kept outside of the app and off of the device. The backend API endpoint should be secured behind some form of authentication (for example, [OAuth](https://oauth.net/2/)) to prevent unauthorized users from obtaining tokens to use against your Immersive Reader service and billing; that work is beyond the scope of this tutorial.
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write | Creates a Hybrid Runbook Worker Group.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes a Hybrid Runbook Worker Group.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/read | Reads a Hybrid Runbook Worker.
-Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/write | Creates a Hybrid Runbook Worker.
-Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action | Moves Hybrid Runbook Worker from one Worker Group to another.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete| Deletes a Hybrid Runbook Worker.
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-role-based-access-control.md
In Azure Automation, access is granted by assigning the appropriate Azure role t
## Role permissions
-The following tables describe the specific permissions given to each role. This can include Actions, which give permissions, and NotActions, which restrict them.
+The following tables describe the specific permissions given to each role. This can include Actions, which give permissions, and Not Actions, which restrict them.
### Owner
An Owner can manage everything, including access. The following table shows the
|Actions|Description|
|||
-|Microsoft.Automation/automationAccounts/|Create and manage resources of all types.|
+|Microsoft.Automation/automationAccounts/*|Create and manage resources of all types.|
### Contributor
A Contributor can manage everything except access. The following table shows the
|**Actions** |**Description** |
|||
-|Microsoft.Automation/automationAccounts/|Create and manage resources of all types|
+|Microsoft.Automation/automationAccounts/*|Create and manage resources of all types|
|**Not Actions**||
|Microsoft.Authorization/*/Delete| Delete roles and role assignments. |
|Microsoft.Authorization/*/Write | Create roles and role assignments. |
A Contributor can manage everything except access. The following table shows the
### Reader
+>[!Note]
+> We have recently made a change in the built-in Reader role permission for the Automation account. [Learn more](#reader-role-access-permissions)
A Reader can view all the resources in an Automation account but can't make any changes.

|**Actions** |**Description** |
|||
|Microsoft.Automation/automationAccounts/read|View all resources in an Automation account. |
+
### Automation Contributor

An Automation Contributor can manage all resources in the Automation account except access. The following table shows the permissions granted for the role:

|**Actions** |**Description** |
|||
+|[Microsoft.Automation](/azure/role-based-access-control/resource-provider-operations#microsoftautomation)/automationAccounts/* | Create and manage resources of all types.|
|Microsoft.Authorization/*/read|Read roles and role assignments.|
|Microsoft.Resources/deployments/*|Create and manage resource group deployments.|
|Microsoft.Resources/subscriptions/resourceGroups/read|Read resource group deployments.|
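
For example, assigning the Automation Contributor role at the scope of a single Automation account could look like this sketch (all names are placeholders):

```azurecli-interactive
# Grant a user the Automation Contributor role on one Automation account only
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Automation Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Automation/automationAccounts/<account-name>"
```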
The following table shows the permissions granted for the role:
|Microsoft.Resources/deployments/* |Create and manage resource group deployments. |
|Microsoft.Insights/alertRules/* | Create and manage alert rules. |
|Microsoft.Support/* |Create and manage support tickets.|
+|[Microsoft.ResourceHealth](/azure/role-based-access-control/resource-provider-operations#microsoftresourcehealth)/availabilityStatuses/read| Gets the availability statuses for all resources in the specified scope.|
### Automation Job Operator
The following table shows the permissions granted for the role:
|Microsoft.Resources/deployments/* |Create and manage resource group deployments. |
|Microsoft.Insights/alertRules/* | Create and manage alert rules. |
|Microsoft.Support/* |Create and manage support tickets.|
+|Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a Hybrid Runbook Worker Group.|
+|Microsoft.Automation/automationAccounts/jobs/output/read | Gets the output of a job.|
### Automation Runbook Operator
A Log Analytics Contributor can read all monitoring data and edit monitoring set
|Microsoft.Resources/subscriptions/resourcegroups/deployments/*|Create and manage resource group deployments.|
|Microsoft.Storage/storageAccounts/listKeys/action|List storage account keys.|
|Microsoft.Support/*|Create and manage support tickets.|
+|Microsoft.HybridCompute/machines/extensions/write| Installs or updates an Azure Arc extension.|
### Log Analytics Reader
A User Access Administrator can manage user access to Azure resources. The follo
|Microsoft.Authorization/*|Manage authorization|
|Microsoft.Support/*|Create and manage support tickets|
+
+## Reader role access permissions
+
+>[!Important]
+> To strengthen the overall Azure Automation security posture, the built-in RBAC Reader doesn't have access to Automation account keys through the API call `GET /AUTOMATIONACCOUNTS/AGENTREGISTRATIONINFORMATION`.
+
+The built-in Reader role for the Automation account can't use the `GET /AUTOMATIONACCOUNTS/AGENTREGISTRATIONINFORMATION` API to fetch the Automation account keys. Listing the keys is a high-privilege operation that returns sensitive information; if low-privileged users could call it, a malicious actor could obtain the Automation account keys and perform actions at an elevated privilege level.
+
+To access the `GET /AUTOMATIONACCOUNTS/AGENTREGISTRATIONINFORMATION` API, we recommend that you switch to built-in roles like Owner, Contributor, or Automation Contributor; these roles have the *listKeys* permission by default. As a best practice, create a custom role with limited permissions to access the Automation account keys. For a custom role, you need to add the
+`Microsoft.Automation/automationAccounts/listKeys/action` permission to the role definition.
+[Learn more](/azure/role-based-access-control/custom-roles) about how to create a custom role from the Azure portal.
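+
+For example, a custom role granting only that permission might be created with the Azure CLI as in this sketch (the role name and scope are illustrative):
+
```azurecli-interactive
# Create a custom role that can only list Automation account keys
az role definition create --role-definition '{
    "Name": "Automation Account Key Reader",
    "Description": "Can list Automation account keys.",
    "Actions": ["Microsoft.Automation/automationAccounts/listKeys/action"],
    "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```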
+
## Feature setup permissions The following sections describe the minimum required permissions needed for enabling the Update Management and Change Tracking and Inventory features.
Perform the following steps to create the Azure Automation custom role with Powe
1. Complete the remaining steps as outlined in [Create or update Azure custom roles using Azure PowerShell](./../role-based-access-control/custom-roles-powershell.md#create-a-custom-role-with-json-template). It can take a few minutes for your custom role to appear everywhere.
+## Manage Role permissions for Hybrid Worker Groups and Hybrid Workers
+
+You can create [Azure custom roles](/azure/role-based-access-control/custom-roles) in Automation and grant the following permissions to Hybrid Worker Groups and Hybrid Workers:
+
+- [Extension-based Hybrid Runbook Worker](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)
+- [Agent-based Windows Hybrid Runbook Worker](/azure/automation/automation-windows-hrw-install#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)
+- [Agent-based Linux Hybrid Runbook Worker](/azure/automation/automation-linux-hrw-install#manage-role-permissions-for-hybrid-worker-groups-and-hybrid-workers)
++
## Update Management permissions

Update Management can be used to assess and schedule update deployments to machines in multiple subscriptions in the same Azure Active Directory (Azure AD) tenant, or across tenants using Azure Lighthouse. The following table lists the permissions needed to manage update deployments.
automation Automation Security Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-guidelines.md
+
+ Title: Azure Automation security guidelines
+description: This article helps you with the guidelines that Azure Automation offers to ensure data privacy and data security.
++ Last updated : 02/16/2022+++
+# Best practices for security in Azure Automation
+
+This article details the best practices for securely executing Automation jobs.
+[Azure Automation](/azure/automation/overview) provides you the platform to orchestrate frequent, time-consuming, and error-prone infrastructure management and operational tasks, as well as mission-critical operations. This service allows you to execute scripts, known as automation runbooks, seamlessly across cloud and hybrid environments.
+
+The platform components of Azure Automation Service are actively secured and hardened. The service goes through robust security and compliance checks. [Azure security benchmark](/security/benchmark/azure/overview) details the best practices and recommendations to help improve the security of workloads, data, and services on Azure. Also see [Azure security baseline for Azure Automation](/security/benchmark/azure/baselines/automation-security-baseline?toc=/azure/automation/TOC.json).
+
+## Secure configuration of Automation account
+
+This section guides you in configuring your Automation account securely.
+
+### Permissions
+
+1. Follow the principle of least privilege to get the work done when granting access to Automation resources. Implement [Automation granular RBAC roles](/azure/automation/automation-role-based-access-control) and avoid assigning broader roles or scopes such as subscription level. When creating the custom roles, only include the permissions users need. By limiting roles and scopes, you limit the resources that are at risk if the security principal is ever compromised. For detailed information on role-based access control concepts, see [Azure role-based access control best practices](/azure/role-based-access-control/best-practices).
+
+1. Avoid roles that include Actions containing a wildcard (_*_), as a wildcard implies full access to the Automation resource or a sub-resource, for example _automationaccounts/*/read_. Instead, use only the specific actions needed for the required permission.
+
+1. Configure [Role based access at a runbook level](/azure/automation/automation-role-based-access-control) if the user doesn't require access to all the runbooks in the Automation account.
+
+1. Limit the number of highly privileged roles such as Automation Contributor to reduce the potential for breach by a compromised owner.
+
+1. Use [Azure AD Privileged Identity Management](/azure/active-directory/roles/security-planning#use-azure-ad-privileged-identity-management) to protect the privileged accounts from malicious cyber-attacks to increase your visibility into their use through reports and alerts.
+
+### Securing Hybrid Runbook worker role
+
+1. Install Hybrid workers using the [Hybrid Runbook Worker VM extension](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows), which doesn't have any dependency on the Log Analytics agent. We recommend this platform as it leverages Azure AD based authentication; a CLI sketch for installing the extension follows this list.
+ The [Hybrid Runbook Worker](/azure/automation/automation-hrw-run-runbooks) feature of Azure Automation allows you to execute runbooks directly on the Azure or non-Azure machine hosting the role, so that Automation jobs run in the local environment.
+ - Use only high-privilege users or [Hybrid worker custom roles](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#manage-role-permissions-for-hybrid-worker-groups) for users responsible for managing operations such as registering or unregistering Hybrid workers and hybrid groups, and executing runbooks against Hybrid runbook worker groups.
+ - The same user would also require VM Contributor access on the machine hosting the Hybrid worker role. Since VM Contributor is a high-privilege role, ensure that only a limited set of users has access to manage Hybrid workers, thereby reducing the potential for breach by a compromised owner.
+
+ Follow the [Azure RBAC best practices](/azure/role-based-access-control/best-practices).
+
+1. Follow the principle of least privilege and grant only the required permissions to users for runbook execution against a Hybrid worker. Don't provide unrestricted permissions to the machine hosting the hybrid runbook worker role. With unrestricted access, a user who has VM Contributor rights, or who has permissions to run commands against the hybrid worker machine, can use the Automation account Run As certificate from that machine and could potentially gain access as a subscription contributor. This could jeopardize the security of your Azure environment.
 Use [Hybrid worker custom roles](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#manage-role-permissions-for-hybrid-worker-groups) for users responsible for managing Automation runbooks against Hybrid runbook workers and Hybrid runbook worker groups.
+
+1. [Unregister](/azure/automation/extension-based-hybrid-runbook-worker-install?tabs=windows#delete-a-hybrid-runbook-worker) any unused or non-responsive hybrid workers.
+
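+As referenced in the first item above, a sketch of installing the extension on an Azure Windows VM with the Azure CLI (resource names are placeholders; the Automation account URL comes from your account's hybrid worker configuration):
+
```azurecli-interactive
# Install the Hybrid Runbook Worker extension on an Azure VM
az vm extension set \
  --resource-group <resource-group> \
  --vm-name <vm-name> \
  --name HybridWorkerForWindows \
  --publisher Microsoft.Azure.Automation.HybridWorker \
  --settings '{"AutomationAccountURL": "<automation-account-url>"}' \
  --enable-auto-upgrade true
```
+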
+### Authentication certificate and identities
+
+1. For runbook authentication, we recommend that you use [Managed identities](/azure/automation/automation-security-overview#managed-identities) instead of Run As accounts. The Run As accounts are an administrative overhead and we plan to deprecate them. A managed identity from Azure Active Directory (Azure AD) allows your runbook to easily access other Azure AD-protected resources such as Azure Key Vault. The identity is managed by the Azure platform and does not require you to provision or rotate any secrets. For more information about managed identities in Azure Automation, see [Managed identities for Azure Automation](/azure/automation/automation-security-overview#managed-identities)
+
+ You can authenticate an Automation account using two types of managed identities:
+ - **System-assigned identity** is tied to your application and is deleted if your app is deleted. An app can only have one system-assigned identity.
+ - **User-assigned identity** is a standalone Azure resource that can be assigned to your app. An app can have multiple user-assigned identities.
+
+ Follow the [Managed identity best practice recommendations](/azure/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations#choosing-system-or-user-assigned-managed-identities) for more details.
+
+1. If you use Run As accounts as the authentication mechanism for your runbooks, ensure the following:
+ - Track the service principals in your inventory. Service principals often have elevated permissions.
+ - Delete any unused Run As accounts to minimize your exposed attack surface.
+ - [Renew the Run As certificate](/azure/automation/manage-runas-account#cert-renewal) periodically.
+ - Follow the RBAC guidelines to limit the permissions assigned to Run As account using this [script](/azure/automation/manage-runas-account#limit-run-as-account-permissions). Do not assign high privilege permissions like Contributor, Owner and so on.
+
+1. Rotate the [Azure Automation keys](/azure/automation/automation-create-standalone-account?tabs=azureportal#manage-automation-account-keys) periodically. The key regeneration prevents future DSC or hybrid worker node registrations from using previous keys. We recommend using the [extension-based hybrid workers](/azure/automation/automation-hybrid-runbook-worker) that use Azure AD authentication instead of Automation keys. Azure AD centralizes the control and management of identities and resource credentials.
+
+### Data security
+
+1. Secure the assets in Azure Automation, including credentials, certificates, connections, and encrypted variables. These assets are protected in Azure Automation by using multiple levels of encryption. By default, data is encrypted with Microsoft-managed keys. For additional control over encryption keys, you can supply customer-managed keys to use for encryption of Automation assets. These keys must be present in Azure Key Vault for the Automation service to access them. See [Encryption of secure assets using customer-managed keys](/azure/automation/automation-secure-asset-encryption).
+
+1. Don't print any credentials or certificate details in the job output. An Automation job operator, who is a low-privilege user, can otherwise view this sensitive information (see the sketch after this list).
+
+1. Maintain a valid [backup of Automation](/azure/automation/automation-managing-data#data-backup) configuration, such as runbooks and assets. Ensure that backups are validated and protected to maintain business continuity after an unexpected event.
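+
+As an illustrative sketch of the job-output guidance above (the credential asset name is a placeholder), a runbook can use a credential asset without echoing the secret to the job stream:
+
+```powershell
+# Get-AutomationPSCredential is available inside Azure Automation runbooks.
+$cred = Get-AutomationPSCredential -Name "DeployUser"
+
+# Log only non-sensitive details; never write the password to the output stream.
+Write-Output "Retrieved credential for user '$($cred.UserName)'."
+```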
+
+### Network isolation
+
+1. Use [Azure Private Link](/azure/automation/how-to/private-link-security) to securely connect Hybrid Runbook Workers to Azure Automation. Azure Private Endpoint is a network interface that connects you privately and securely to the Azure Automation service, powered by Azure Private Link. Private Endpoint uses a private IP address from your virtual network (VNet) to effectively bring the Automation service into your VNet.
+
+If you want to access and manage other services privately through runbooks from an Azure VNet without opening an outbound connection to the internet, you can run the runbooks on a Hybrid Runbook Worker that's connected to the Azure VNet (a sketch of the private endpoint setup follows).
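+
+A hedged sketch of creating such a private endpoint with Azure PowerShell follows. The resource names, region, and subnet are placeholders, and the `DSCAndHybridWorker` group ID is assumed to be the sub-resource that carries hybrid worker traffic:
+
+```powershell
+# Look up the VNet and the Automation account's resource ID (placeholder names).
+$vnet = Get-AzVirtualNetwork -ResourceGroupName "rg-automation" -Name "vnet-hub"
+$account = Get-AzResource -ResourceGroupName "rg-automation" -Name "aa-prod" `
+  -ResourceType "Microsoft.Automation/automationAccounts"
+
+# Define the private link connection to the Automation account.
+$conn = New-AzPrivateLinkServiceConnection -Name "aa-prod-conn" `
+  -PrivateLinkServiceId $account.ResourceId -GroupId "DSCAndHybridWorker"
+
+# Create the private endpoint in the first subnet of the VNet.
+New-AzPrivateEndpoint -Name "aa-prod-pe" -ResourceGroupName "rg-automation" `
+  -Location "eastus" -Subnet $vnet.Subnets[0] -PrivateLinkServiceConnection $conn
+```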
+
+### Policies for Azure Automation
+
+Review the Azure Policy recommendations for Azure Automation and act as appropriate. See [Azure Automation policies](/azure/automation/policy-reference).
+
+## Next steps
+
+* To learn how to use Azure role-based access control (Azure RBAC), see [Manage role permissions and security in Azure Automation](/azure/automation/automation-role-based-access-control).
+* For information on how Azure protects your privacy and secures your data, see [Azure Automation data security](/azure/automation/automation-managing-data).
+* To learn about configuring the Automation account to use encryption, see [Encryption of secure assets in Azure Automation](/azure/automation/automation-secure-asset-encryption).
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write | Creates a Hybrid Runbook Worker Group.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes a Hybrid Runbook Worker Group.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/read | Reads a Hybrid Runbook Worker.
-Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/write | Creates a Hybrid Runbook Worker.
-Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action | Moves Hybrid Runbook Worker from one Worker Group to another.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete | Deletes a Hybrid Runbook Worker.
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
To install and use Hybrid Worker extension using REST API, follow these steps. T
} ```
-## Manage Role permissions for Hybrid Worker Groups
+## Manage Role permissions for Hybrid Worker Groups and Hybrid Workers
-You can create custom Azure Automation roles and grant following permissions to Hybrid Worker Groups. To learn more about how to create Azure Automation custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
+You can create custom Azure Automation roles and grant following permissions to Hybrid Worker Groups and Hybrid Workers. To learn more about how to create Azure Automation custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
**Actions** | **Description**
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/read | Reads a Hybrid Runbook Worker Group.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/write | Creates a Hybrid Runbook Worker Group.
Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/delete | Deletes a Hybrid Runbook Worker Group.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/read | Reads a Hybrid Runbook Worker.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/write | Creates a Hybrid Runbook Worker.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/move/action | Moves Hybrid Runbook Worker from one Worker Group to another.
+Microsoft.Automation/automationAccounts/hybridRunbookWorkerGroups/hybridRunbookWorkers/delete | Deletes a Hybrid Runbook Worker.
+
## Next steps
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
Azure Automation receives improvements on an ongoing basis. To stay up to date w
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [Archive for What's new in Azure Automation](whats-new-archive.md).
+## February 2022
+
+### Permissions change in the built-in Reader role for the Automation Account
+
+**Type:** New change
+
+To strengthen the overall Azure Automation security posture, the built-in RBAC Reader role will no longer have access to Automation account keys through the API call `GET /automationAccounts/agentRegistrationInformation`. For more information, see [Role-based access control in Azure Automation](/azure/automation/automation-role-based-access-control#reader).
+
## December 2021

### New scripts added for Azure VM management based on Azure Monitor Alert
azure-app-configuration Howto App Configuration Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-app-configuration-event.md
ms.assetid:
+ms.devlang: csharp
Last updated 03/04/2020
azure-app-configuration Use Key Vault References Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/use-key-vault-references-spring-boot.md
editor: ''
ms.assetid:
+ms.devlang: java
Last updated 08/11/2020
azure-arc Onboard Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/onboard-service-principal.md
Title: Connect hybrid machines to Azure at scale description: In this article, you learn how to connect machines to Azure using Azure Arc-enabled servers using a service principal. Previously updated : 02/10/2022 Last updated : 02/16/2022
If you don't have an Azure subscription, create a [free account](https://azure.m
You can create a service principal in the Azure portal or by using Azure PowerShell.

> [!NOTE]
-> To create a service principal and assign roles, your account must be a member of the **Owner** or **User Access Administrator** role in the subscription that you want to use for onboarding. If you don't have sufficient permissions to configure role assignments, the service principal might still be created, but it won't be able to onboard machines.
+> To create a service principal and assign roles, your account must be a member of the **Owner** or **User Access Administrator** role in the subscription that you want to use for onboarding.
### Azure portal
The values from the following properties are used with parameters passed to the
> [!TIP] > Make sure to use the service principal **ApplicationId** property, not the **Id** property.
-The **Azure Connected Machine Onboarding** role contains only the permissions required to onboard a machine. You can assign the service principal permission to allow its scope to include a resource group or a subscription. To add role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md) or [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
+4. Assign the **Azure Connected Machine Onboarding** role to the service principal for the designated resource group or subscription. This role contains only the permissions required to onboard a machine. Note that your account must be a member of the **Owner** or **User Access Administrator** role for the subscription to which the service principal will have access. For information on how to add role assignments, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md) or [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md).
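+
+As a hedged sketch of this flow (the display name and resource group are placeholders, and the `AppId` property name assumes a recent Az.Resources module), the service principal can be created and scoped in PowerShell:
+
+```powershell
+# Create the service principal used only for onboarding.
+$sp = New-AzADServicePrincipal -DisplayName "Arc-onboarding-sp"
+
+# Grant it just the permissions required to onboard machines, scoped to one resource group.
+New-AzRoleAssignment -ApplicationId $sp.AppId `
+  -RoleDefinitionName "Azure Connected Machine Onboarding" `
+  -ResourceGroupName "rg-arc-servers"
+```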
## Generate the installation script from the Azure portal
azure-functions Create First Function Cli Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-node.md
Title: Create a JavaScript function from the command line - Azure Functions
description: Learn how to create a JavaScript function from the command line, then publish the local Node.js project to serverless hosting in Azure Functions. Last updated 11/18/2021
+ms.devlang: javascript
azure-functions Create First Function Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-powershell.md
Title: Create a PowerShell function from the command line - Azure Functions
description: Learn how to create a PowerShell function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 11/03/2020
+ms.devlang: powershell
azure-functions Create First Function Cli Typescript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-typescript.md
Title: Create a TypeScript function from the command line - Azure Functions
description: Learn how to create a TypeScript function from the command line, then publish the local project to serverless hosting in Azure Functions. Last updated 11/18/2021
+ms.devlang: typescript
azure-functions Durable Functions Event Publishing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-event-publishing.md
Title: Durable Functions publishing to Azure Event Grid
description: Learn how to configure automatic Azure Event Grid publishing for Durable Functions. Last updated 05/11/2020
+ms.devlang: csharp, javascript
azure-functions Functions Event Hub Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-hub-cosmos-db.md
Last updated 11/04/2019
+ms.devlang: java
#Customer intent: As a Java developer, I want to write Java functions that process data continually (for example, from IoT sensors), and store the processing results in Azure Cosmos DB.
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Red Hat OpenShift](https://azure.microsoft.com/services/openshift/) | &#x2705; | &#x2705; |
| [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; |
| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; |
-| [Azure Scheduler](../../scheduler/scheduler-intro.md) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; |
+| [Azure Scheduler](../../scheduler/index.yml) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; |
| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; |
| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; |
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| **Service** | **FedRAMP High** | **DoD IL2** | **DoD IL4** | **DoD IL5** | **DoD IL6** |
| [Azure Resource Graph](../../governance/resource-graph/overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
-| [Azure Scheduler](../../scheduler/scheduler-intro.md) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
+| [Azure Scheduler](../../scheduler/index.yml) (replaced by [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Service Fabric](https://azure.microsoft.com/services/service-fabric/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Azure Service Health](https://azure.microsoft.com/features/service-health/) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | |
| [Azure Service Manager (RDFE)](/previous-versions/azure/ee460799(v=azure.100)) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md
This article helps you use Azure Pipelines to set up continuous integration (CI)
> [!NOTE]
> For special considerations when deploying apps to Azure Government, see **[Deploy apps to Azure Government Cloud](/azure/devops/pipelines/library/government-cloud).**
-[Azure Pipelines](/azure/devops/pipelines/get-started/) is used by teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
+[Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) is used by teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)]
The following steps will set up a CD process to deploy to this Web App.
Follow through one of the quickstarts below to set up a Build for your specific type of app:
- [ASP.NET 4 app](/azure/devops/pipelines/apps/aspnet/build-aspnet-4)
-- [ASP.NET Core app](/azure/devops/pipelines/languages/dotnet-core?tabs=yaml)
-- [Node.js app with Gulp](/azure/devops/pipelines/languages/javascript?tabs=yaml)
+- [ASP.NET Core app](/azure/devops/pipelines/ecosystems/dotnet-core)
+- [Node.js app with Gulp](/azure/devops/pipelines/ecosystems/javascript)
## Generate a service principal
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Below you can find a list of all the authorized Cloud Solution Providers (CSPs),
|[South River Technologies](https://southrivertech.com)|
|[Stabilify](http://www.stabilify.net/)|
|[Stafford Associates](https://www.staffordnet.com/)|
-|[Static Networks, LLC](https://staticnetworks.com)|
+|Static Networks, LLC|
|[Steel Root](https://steelroot.us)|
|[StoneFly, Inc.](https://stonefly.com)|
|[Strategic Communications](https://stratcomminc.com)|
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
you can configure Application Insights Java 3.x to use an HTTP proxy:
Application Insights Java 3.x also respects the global `https.proxyHost` and `https.proxyPort` system properties if those are set (and `http.nonProxyHosts` if needed).
-Starting from 3.2.6, authenticated proxies are supported. You can add `"user"` and `"password"` under `"proxy"` in
-the json above (or if you are using the system properties above, you can add `https.proxyUser` and `https.proxyPassword`
-system properties).
-
## Self-diagnostics

"Self-diagnostics" refers to internal logging from Application Insights Java 3.x.
azure-monitor Migrate From Instrumentation Keys To Connection Strings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md
Title: Migrate from instrumentation keys to connection strings
+ Title: Migrate from Application Insights instrumentation keys to connection strings
description: Learn the steps required to upgrade from Azure Monitor Application Insights instrumentation keys to connection strings Last updated 02/14/2022
-# Migrate from instrumentation keys to connection strings
+# Migrate from Application Insights instrumentation keys to connection strings
This guide walks through migrating from [instrumentation keys](separate-resources.md#about-resources-and-instrumentation-keys) to [connection strings](sdk-connection-string.md#overview).
Use environment variables to pass a connection string to the Application Insight
To set a connection string via environment variable, place the value of the connection string into an environment variable named "APPLICATIONINSIGHTS_CONNECTION_STRING".
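A minimal sketch of setting the variable for the current PowerShell session (the key value is a placeholder):

```powershell
# The Application Insights SDK reads this variable at startup.
$env:APPLICATIONINSIGHTS_CONNECTION_STRING = "InstrumentationKey=00000000-0000-0000-0000-000000000000"
```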
-This process can be automated in your Azure deployments. For example, the following ARM template shows how you can automatically include the correct connection string with an App Services deployment (be sure to include any other App Settings your app requires):
+This process can be [automated in your Azure deployments](../../azure-resource-manager/templates/deploy-portal.md#deploy-resources-with-arm-templates-and-azure-portal). For example, the following ARM template shows how you can automatically include the correct connection string with an App Services deployment (be sure to include any other App Settings your app requires):
```JSON {
Connection strings provide a single configuration setting and eliminate the need
### Missing data
-1. Confirm you're using a [supported SDK version](#supported-sdk-versions). If you use Application Insights integration in another Azure product offering, check its documentation on how to properly configure a connection string.
+- Confirm you're using a [supported SDK version](#supported-sdk-versions). If you use Application Insights integration in another Azure product offering, check its documentation on how to properly configure a connection string.
-1. Confirm you aren't setting both an instrumentation key and connection string at the same time. Instrumentation key settings should be removed from your configuration.
+- Confirm you aren't setting both an instrumentation key and connection string at the same time. Instrumentation key settings should be removed from your configuration.
-1. Confirm your connection string is exactly as provided in the Azure portal.
+- Confirm your connection string is exactly as provided in the Azure portal.
### Environment variables aren't working
You can't enable [Azure AD authentication](azure-ad-authentication.md) for [auto
### What is the difference between global and regional ingestion?
-Global ingestion sends all telemetry data to a single endpoint, no matter where this data will end up or be stored. Regional ingestion allows you to define specific endpoints per region for data ingestion, ensuring data stays within a specific region during processing and storage.
+Global ingestion sends all telemetry data to a single endpoint, no matter where this data will be stored. Regional ingestion allows you to define specific endpoints per region for data ingestion, ensuring data stays within a specific region during processing and storage.
### How do connection strings impact the billing?
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pricing.md
Previously updated : 11/26/2021-- Last updated : 02/17/2021+ # Manage usage and costs for Application Insights
+This article describes how to proactively monitor and control Application Insights costs.
+
+[Monitoring usage and estimated costs](../usage-estimated-costs.md) describes usage and estimated costs across Azure Monitor features using [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill).
+ > [!NOTE]
-> This article describes how to understand and control your costs for Application Insights. A related article, [Monitoring usage and estimated costs](..//usage-estimated-costs.md) describes how to view usage and estimated costs across multiple Azure monitoring features using [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill). All prices and costs in this article are for example purposes only.
+> All prices and costs in this article are for example purposes only.
-Application Insights is designed to get everything you need to monitor the availability, performance, and usage of your web applications, whether they're hosted on Azure or on-premises. Application Insights supports popular languages and frameworks, such as .NET, Java, and Node.js, and integrates with DevOps processes and tools like Azure DevOps, Jira, and PagerDuty. It's important to understand what determines the costs of monitoring your applications. In this article, we review what drives your application monitoring costs and how you can proactively monitor and control them.
+<!-- App Insights monitoring features (availability, performance, usage, etc.) Supported languages, integration with specific tools (Azure DevOps, Jira, and PagerDuty, etc.) should be documented elsewhere. (e.g. platforms.md) -->
If you have questions about how pricing works for Application Insights, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).

## Pricing model
-The pricing for [Azure Application Insights][start] is a **Pay-As-You-Go** model based on data volume ingested and optionally for longer data retention. Each Application Insights resource is charged as a separate service and contributes to the bill for your Azure subscription. Data volume is measured as the size of the uncompressed JSON data package that's received by Application Insights from your application. Data volume is measured in GB (10^9 bytes). There is no data volume charge for using the [Live Metrics Stream](./live-stream.md). On your Azure bill or in [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill), your data ingestion and data retention for a classic Application Insights resource will be reported with a meter category of **Log Analytics**.
+The pricing for [Azure Application Insights][start] is a **Pay-As-You-Go** model based on data volume ingested and optionally for longer data retention. Each Application Insights resource is charged as a separate service and contributes to the bill for your Azure subscription. Data volume is measured as the size of the uncompressed JSON data package that's received by Application Insights from your application. Data volume is measured in GB (10^9 bytes). There's no data volume charge for using the [Live Metrics Stream](./live-stream.md). On your Azure bill or in [Azure Cost Management + Billing](../logs/manage-cost-storage.md#viewing-log-analytics-usage-on-your-azure-bill), your data ingestion and data retention for a classic Application Insights resource will be reported with a meter category of **Log Analytics**.
-[Multi-step web tests](./availability-multistep.md) incur an additional charge. Multi-step web tests are web tests that perform a sequence of actions. There's no separate charge for *ping tests* of a single page. Telemetry from ping tests and multi-step tests is charged the same as other telemetry from your app.
+[Multi-step web tests](./availability-multistep.md) incur extra charges. Multi-step web tests are web tests that perform a sequence of actions. There's no separate charge for *ping tests* of a single page. Telemetry from ping tests and multi-step tests is charged the same as other telemetry from your app.
-The Application Insights option to [Enable alerting on custom metric dimensions](./pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can also generate in additional costs because this can result in the creation of additional pre-aggregation metrics. [Learn more](./pre-aggregated-metrics-log-metrics.md) about log-based and pre-aggregated metrics in Application Insights and about [pricing](https://azure.microsoft.com/pricing/details/monitor/) for Azure Monitor custom metrics.
+The Application Insights option to [Enable alerting on custom metric dimensions](./pre-aggregated-metrics-log-metrics.md#custom-metrics-dimensions-and-pre-aggregation) can also increase costs because this can result in the creation of more pre-aggregation metrics. [Learn more](./pre-aggregated-metrics-log-metrics.md) about log-based and pre-aggregated metrics in Application Insights and about [pricing](https://azure.microsoft.com/pricing/details/monitor/) for Azure Monitor custom metrics.
### Workspace-based Application Insights
If you're not yet using Application Insights, you can use the [Azure Monitor pri
### Learn from what similar applications collect
-In the Azure Monitoring Pricing calculator for Application Insights, click to enable the **Estimate data volume based on application activity**. Here you can provide inputs about your application (requests per month and page views per month, in case you will collect client-side telemetry), and then the calculator will tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configuration (e.g some have default [sampling](./sampling.md), some have no sampling etc.), so you still have the control to reduce the volume of data you ingest far below the median level using sampling.
+In the Azure Monitoring Pricing calculator for Application Insights, click to enable **Estimate data volume based on application activity**. Here you can provide inputs about your application (requests per month and page views per month, if you'll collect client-side telemetry), and the calculator will tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configurations (for example, some have default [sampling](./sampling.md) and some have no sampling), so you still have the control to reduce the volume of data you ingest far below the median level by using sampling.
### Data collection when using sampling
-With the ASP.NET SDK's [adaptive sampling](sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring. If the application produces a low amount of telemetry, such as when debugging or due to low usage, items won't be dropped by the sampling processor as long as volume is below the configured events per second level. For a high volume application, with the default threshold of five events per second, adaptive sampling will limit the number of daily events to 432,000. Using a typical average event size of 1 KB, this corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application since the sampling is done local to each node.
+With the ASP.NET SDK's [adaptive sampling](sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring. If the application produces a low amount of telemetry, such as when debugging or due to low usage, items won't be dropped by the sampling processor as long as volume is below the configured events per second level. For a high volume application, with the default threshold of five events per second, adaptive sampling will limit the number of daily events to 432,000. Considering a typical average event size of 1 KB, this corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application since the sampling is done local to each node.
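+
+As a quick check, the arithmetic behind those figures (using this article's conventions of KB = 1,000 bytes and GB = 10^9 bytes):
+
+```powershell
+$eventsPerDay = 5 * 86400                        # 5 events/sec for 24 hours = 432,000 events
+$gbPerMonth   = $eventsPerDay * 1000 * 31 / 1e9  # ~13.4 GB per node per 31-day month
+```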
For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](./sampling.md#ingestion-sampling), which samples when the data is received by Application Insights based on a percentage of data to retain, or [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers.

## Viewing Application Insights usage on your Azure bill
-The easiest way to see the billed usage for a single Application Insights resource which is not a workspace-baed resource is to go to the resource's Overview page and click **View Cost** in the upper right corner. You might need additional access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md)).
+The easiest way to see the billed usage for a single Application Insights resource that isn't a workspace-based resource is to go to the resource's Overview page and click **View Cost** in the upper-right corner. You might need additional access to Cost Management data ([learn more](../../cost-management-billing/costs/assign-access-acm-data.md)).
-To learn more, Azure provides a great deal of useful functionality in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) hub. For instance, the "Cost analysis" functionality enables you to view your spends for Azure resources. Adding a filter by resource type (to microsoft.insights/components for Application Insights) will allow you to track your spending. Then for "Group by" select "Meter category" or "Meter". Note that Application Insights billed usage for data ingestion and data retention will show up as **Log Analytics** for the Meter category since Log Analytics backend for all Azure Monitor logs.
+To learn more, Azure provides a great deal of useful functionality in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=/azure/billing/TOC.json) hub. For instance, the "Cost analysis" functionality enables you to view your spend on Azure resources. Adding a filter by resource type (microsoft.insights/components for Application Insights) will allow you to track your spending. Then for "Group by", select "Meter category" or "Meter". Application Insights billed usage for data ingestion and data retention shows up under the **Log Analytics** meter category, since Log Analytics is the backend for all Azure Monitor logs.
> [!NOTE] > Application Insights billing for data ingestion and data retention is reported as coming from the **Log Analytics** service (Meter category in Azure Cost Management + Billing). Even more understanding of your usage can be gained by [downloading your usage from the Azure portal](../../cost-management-billing/understand/download-azure-daily-usage.md).
-In the downloaded spreadsheet, you can see usage per Azure resource per day. In this Excel spreadsheet, usage from your Application Insights resources can be found by first filtering on the "Meter Category" column to show "Application Insights" and "Log Analytics", and then adding a filter on the "Instance ID" column which is "contains microsoft.insights/components". Most Application Insights usage is reported on meters with the Meter Category of Log Analytics, since there is a single logs backend for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multi-step web tests are reported with a Meter Category of Application Insights. The usage is shown in the "Consumed Quantity" column and the unit for each entry is shown in the "Unit of Measure" column. More details are available to help you [understand your Microsoft Azure bill](../../cost-management-billing/understand/review-individual-bill.md).
+In the downloaded spreadsheet, you can see usage per Azure resource per day. In this Excel spreadsheet, usage from your Application Insights resources can be found by first filtering on the "Meter Category" column to show "Application Insights" and "Log Analytics", and then adding a "contains microsoft.insights/components" filter on the "Instance ID" column. Most Application Insights usage is reported on meters with the Meter Category of Log Analytics, since there's a single logs backend for all Azure Monitor components. Only Application Insights resources on legacy pricing tiers and multi-step web tests are reported with a Meter Category of Application Insights. The usage is shown in the "Consumed Quantity" column, and the unit for each entry is shown in the "Unit of Measure" column. More details are available to help you [understand your Microsoft Azure bill](../../cost-management-billing/understand/review-individual-bill.md).
## Understand your usage and optimize your costs <a name="understand-your-usage-and-estimate-costs"></a>
C. View data volume trends for the past month.
D. Enable data ingestion [sampling](./sampling.md).
E. Set the daily data volume cap.
-(Note that all prices displayed in screenshots in this article are for example purposes only. For current prices in your currency and region, see [Application Insights pricing][pricing].)
+(All prices displayed in screenshots in this article are for example purposes only. For current prices in your currency and region, see [Application Insights pricing][pricing].)
To investigate your Application Insights usage more deeply, open the **Metrics** page, add the metric named "Data point volume", and then select the *Apply splitting* option to split the data by "Telemetry item type".
To learn more about your data volumes, selecting **Metrics** for your Applicatio
### Queries to understand data volume details
-There are two approaches to investigating data volumes for Application Insights. The first uses aggregated information in the `systemEvents` table, and the second uses the `_BilledSize` property, which is available on each ingested event. `systemEvents` will not have data size information for [workspace-based-application-insights](#data-volume-for-workspace-based-application-insights-resources).
+There are two approaches to investigating data volumes for Application Insights. The first uses aggregated information in the `systemEvents` table, and the second uses the `_BilledSize` property, which is available on each ingested event. `systemEvents` won't have data size information for [workspace-based-application-insights](#data-volume-for-workspace-based-application-insights-resources).
#### Using aggregated data volume information
systemEvents
| summarize sum(BillingTelemetrySizeInBytes) by BillingTelemetryType, bin(timestamp, 1d)
| render barchart
```
-Note that this query can be used in an [Azure Log Alert](../alerts/alerts-unified-log.md) to set up alerting on data volumes.
+This query can be used in an [Azure Log Alert](../alerts/alerts-unified-log.md) to set up alerting on data volumes.
To learn more about your telemetry data changes, we can get the count of events by type using the query:
The volume of data you send can be managed using the following techniques:
* **Sampling**: You can use sampling to reduce the amount of telemetry that's sent from your server and client apps, with minimal distortion of metrics. Sampling is the primary tool you can use to tune the amount of data you send. Learn more about [sampling features](./sampling.md).
-* **Limit Ajax calls**: You can [limit the number of Ajax calls that can be reported](./javascript.md#configuration) in every page view, or switch off Ajax reporting. Note that disabling Ajax calls will disable [JavaScript correlation](./javascript.md#enable-correlation).
+* **Limit Ajax calls**: You can [limit the number of Ajax calls that can be reported](./javascript.md#configuration) in every page view, or switch off Ajax reporting. Disabling Ajax calls will disable [JavaScript correlation](./javascript.md#enable-correlation).
* **Disable unneeded modules**: [Edit ApplicationInsights.config](./configuration-with-applicationinsights-config.md) to turn off collection modules that you don't need. For example, you might decide that performance counters or dependency data are inessential.
The volume of data you send can be managed using the following techniques:
We've removed the restriction on some subscription types that have credit that couldn't be used for Application Insights. Previously, if the subscription has a spending limit, the daily cap dialog has instructions to remove the spending limit and enable the daily cap to be raised beyond 32.3 MB/day.
-* **Throttling**: Throttling limits the data rate to 32,000 events per second, averaged over 1 minute per instrumentation key. The volume of data that your app sends is assessed every minute. If it exceeds the per-second rate averaged over the minute, the server refuses some requests. The SDK buffers the data and then tries to resend it. It spreads out a surge over several minutes. If your app consistently sends data at more than the throttling rate, some data will be dropped. (The ASP.NET, Java, and JavaScript SDKs try to resend data this way; other SDKs might simply drop throttled data.) If throttling occurs, a notification warning alerts you that this has occurred.
+* **Throttling**: Throttling limits the data rate to 32,000 events per second, averaged over 1 minute per instrumentation key. The volume of data that your app sends is assessed every minute. If it exceeds the per-second rate averaged over the minute, the server refuses some requests. The SDK buffers the data and then tries to resend it. It spreads out a surge over several minutes. If your app consistently sends data at more than the throttling rate, some data will be dropped. (The ASP.NET, Java, and JavaScript SDKs try to resend data this way; other SDKs might drop throttled data.) If throttling occurs, a notification warning alerts you that this has occurred.
## Manage your maximum daily data volume
-You can use the daily volume cap to limit the data collected. However, if the cap is met, a loss of all telemetry sent from your application for the remainder of the day occurs. It is *not advisable* to have your application hit the daily cap. You can't track the health and performance of your application after it reaches the daily cap.
+You can use the daily volume cap to limit the data collected. However, if the cap is met, a loss of all telemetry sent from your application for the remainder of the day occurs. It *isn't advisable* to have your application hit the daily cap. You can't track the health and performance of your application after it reaches the daily cap.
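+
+As a hedged sketch (resource names are placeholders, and `Set-AzApplicationInsightsDailyCap` is assumed from the Az.ApplicationInsights module), a cap can be set from PowerShell:
+
+```powershell
+# Cap ingestion at 10 GB/day for a classic Application Insights resource.
+Set-AzApplicationInsightsDailyCap -ResourceGroupName "rg-monitoring" `
+  -Name "appinsights-prod" -DailyCapGB 10
+```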
> [!WARNING] > If you have a workspace-based Application Insights resource, we recommend using the [workspace's daily cap](../logs/manage-cost-storage.md#manage-your-maximum-daily-data-volume) to limit ingestion and costs. The daily cap in Application Insights may not limit ingestion in all cases to the selected level. (If your Application Insights resource is ingesting a lot of data, the Application Insights daily cap might need to be raised.)
To change the retention, from your Application Insights resource, go to the **Us
![Screenshot that shows where to change the data retention period.](./media/pricing/pricing-005.png)
-When the retention is lowered, there is a several day grace period before the oldest data is removed.
+When the retention is lowered, there's a several-day grace period before the oldest data is removed.
The retention can also be [set programmatically using PowerShell](powershell.md#set-the-data-retention) with the `retentionInDays` parameter. If you set the data retention to 30 days, you can trigger an immediate purge of older data using the `immediatePurgeDataOn30Days` parameter, which may be useful for compliance-related scenarios. This purge functionality is only exposed via Azure Resource Manager and should be used with extreme care. The daily reset time for the data volume cap can be configured using Azure Resource Manager to set the `dailyQuotaResetTime` parameter.
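A hedged sketch using the generic `Set-AzResource` approach (resource names are placeholders, and the sketch assumes the component already exposes a `RetentionInDays` property):

```powershell
# Fetch the component with its properties, change retention, and push it back.
$resource = Get-AzResource -ResourceGroupName "rg-monitoring" -Name "appinsights-prod" `
  -ResourceType "microsoft.insights/components" -ExpandProperties
$resource.Properties.RetentionInDays = 90
$resource | Set-AzResource -Force
```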
To disable the daily volume cap e-mails, under the **Configure** section of your
## Legacy Enterprise (Per Node) pricing tier
-For early adopters of Azure Application Insights, there are still two possible pricing tiers: Basic and Enterprise. The Basic pricing tier is the same as described above and is the default tier. It includes all Enterprise tier features, at no additional cost. The Basic tier bills primarily on the volume of data that's ingested.
+For early adopters of Azure Application Insights, there are still two possible pricing tiers: Basic and Enterprise. The Basic pricing tier is the same as described above and is the default tier. It includes all Enterprise tier features, at no extra cost. The Basic tier bills primarily on the volume of data that's ingested.
These legacy pricing tiers have been renamed. The Enterprise pricing tier is now called **Per Node** and the Basic pricing tier is now called **Per GB**. These new names are used below and in the Azure portal.
-The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you are charged for data ingested above the included allowance. If you are using Operations Management Suite, you should choose the Per Node tier. In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](..//usage-estimated-costs.md).
+The Per Node (formerly Enterprise) tier has a per-node charge, and each node receives a daily data allowance. In the Per Node pricing tier, you're charged for data ingested above the included allowance. If you're using Operations Management Suite, you should choose the Per Node tier. In April 2018, we [introduced](https://azure.microsoft.com/blog/introducing-a-new-way-to-purchase-azure-monitoring-services/) a new pricing model for Azure monitoring. This model adopts a simple "pay-as-you-go" model across the complete portfolio of monitoring services. Learn more about the [new pricing model](../usage-estimated-costs.md).
For current prices in your currency and region, see [Application Insights pricing](https://azure.microsoft.com/pricing/details/application-insights/).

### Understanding billed usage on the legacy Enterprise (Per Node) tier
-As described below in more detail, the legacy Enterprise (Per Node) tier combines usage from across all Application Insights resources in a subscription to calculate the number of nodes and the data overage. Due to this combination process, **usage for all Application Insights resources in a subscription are reported against just one of the resources**. This makes reconciling your [billed usage](#viewing-application-insights-usage-on-your-azure-bill) with the usage you observe for each Application Insights resources very complicated.
+As described below in more detail, the legacy Enterprise (Per Node) tier combines usage from across all Application Insights resources in a subscription to calculate the number of nodes and the data overage. Due to this combination process, **usage for all Application Insights resources in a subscription are reported against just one of the resources**. This makes reconciling your [billed usage](#viewing-application-insights-usage-on-your-azure-bill) with the usage you observe for each Application Insights resource complicated.
> [!WARNING]
> Because of the complexity of tracking and understanding usage of Application Insights resources in the legacy Enterprise (Per Node) tier, we strongly recommend using the current Pay-As-You-Go pricing tier.

### Per Node tier and Operations Management Suite subscription entitlements
-Customers who purchase Operations Management Suite E1 and E2 can get Application Insights Per Node as an additional component at no additional cost as [previously announced](/archive/blogs/msoms/azure-application-insights-enterprise-as-part-of-operations-management-suite-subscription). Specifically, each unit of Operations Management Suite E1 and E2 includes an entitlement to one node of the Application Insights Per Node tier. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no additional cost. The tier is described in more detailed later in the article.
+Customers who purchase Operations Management Suite E1 and E2 can get Application Insights Per Node as a supplemental component at no extra cost, as [previously announced](/archive/blogs/msoms/azure-application-insights-enterprise-as-part-of-operations-management-suite-subscription). Specifically, each unit of Operations Management Suite E1 and E2 includes an entitlement to one node of the Application Insights Per Node tier. Each Application Insights node includes up to 200 MB of data ingested per day (separate from Log Analytics data ingestion), with 90-day data retention at no extra cost. The tier is described in more detail later in the article.
Because this tier is applicable only to customers with an Operations Management Suite subscription, customers who don't have an Operations Management Suite subscription don't see an option to select this tier.
Because this tier is applicable only to customers with an Operations Management
* You pay for each node that sends telemetry for any apps in the Per Node tier.
 * A *node* is a physical or virtual server machine or a platform-as-a-service role instance that hosts your app.
- * Development machines, client browsers, and mobile devices do not count as nodes.
+ * Development machines, client browsers, and mobile devices don't count as nodes.
 * If your app has several components that send telemetry, such as a web service and a back-end worker, the components are counted separately.
* [Live Metrics Stream](./live-stream.md) data isn't counted for pricing purposes. In a subscription, your charges are per node, not per app. If you have five nodes that send telemetry for 12 apps, the charge is for five nodes.
* Although charges are quoted per month, you're charged only for any hour in which a node sends telemetry from an app. The hourly charge is the quoted monthly charge divided by 744 (the number of hours in a 31-day month).
You can write a script to set the pricing tier by using Azure Resource Managemen
## Next steps
-* [sampling](./sampling.md)
+[Sampling](./sampling.md) in Application Insights is the recommended way to reduce telemetry traffic, data costs, and storage costs.
[api]: app-insights-api-custom-events-metrics.md
[apiproperties]: app-insights-api-custom-events-metrics.md#properties
[start]: ./app-insights-overview.md
[pricing]: https://azure.microsoft.com/pricing/details/application-insights/
+
+## Troubleshooting
+
+### Unexpected usage or estimated cost
+
+Lower your bill with updated versions of the ASP.NET Core SDK and Worker Service SDK, which [don't collect counters by default](eventcounters.md#default-counters-collected).
+
+### Microsoft Q&A question page
+
+If you have questions about how pricing works for Application Insights, you can post a question in our [Microsoft Q&A question page](/answers/topics/azure-monitor.html).
azure-monitor Resources Roles Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/resources-roles-access-control.md
Title: Resources, roles and access control in Azure Application Insights | Micro
description: Owners, contributors and readers of your organization's insights. Last updated 02/14/2019 -+
First, some definitions:
* [**Resource group**][group] - Every resource belongs to one group. A group is a convenient way to manage related resources, particularly for access control. For example, into one resource group you could put a Web App, an Application Insights resource to monitor the app, and a Storage resource to keep exported data.
-* [**Subscription**](https://portal.azure.com) - To use Application Insights or other Azure resources, you sign in to an Azure subscription. Every resource group belongs to one Azure subscription, where you choose your price package and, if it's an organization subscription, choose the members and their access permissions.
+* [**Subscription**](https://portal.azure.com) - To use Application Insights or other Azure resources, you sign in to an Azure subscription. Every resource group belongs to one Azure subscription, where you choose your price package. If it's an organization subscription, the owner may choose the members and their access permissions.
* [**Microsoft account**][account] - The username and password that you use to sign in to Microsoft Azure subscriptions, XBox Live, Outlook.com, and other Microsoft services. ## <a name="access"></a> Control access in the resource group
The user must have a [Microsoft Account][account], or access to their [organizat
#### Navigate to resource group or directly to the resource itself
-Choose **Access control (IAM)** from the left-hand menu.
+1. Assign the **Contributor** role to the user by using Azure role-based access control (Azure RBAC).
-![Screenshot of Access control button in Azure portal](./media/resources-roles-access-control/0001-access-control.png)
-
-Select **Add role assignment**
-
-![Screenshot of Access control menu with add button highlighted in red](./media/resources-roles-access-control/0002-add.png)
-
-The **Add permissions** view below is primarily specific to Application Insights resources, if you were viewing the access control permissions from a higher level like resource groups, you would see additional non-Application Insights-centric roles.
-
-To view information on all Azure role-based access control built-in roles use the [official reference content](../../role-based-access-control/built-in-roles.md).
-
-![Screenshot of Access control user role list](./media/resources-roles-access-control/0003-user-roles.png)
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
#### Select a role
Where applicable, we link to the associated official reference documentation.
| [Contributor](../../role-based-access-control/built-in-roles.md#contributor) |Can edit anything, including all resources. |
| [Application Insights Component contributor](../../role-based-access-control/built-in-roles.md#application-insights-component-contributor) |Can edit Application Insights resources. |
| [Reader](../../role-based-access-control/built-in-roles.md#reader) |Can view but not change anything. |
-| [Application Insights Snapshot Debugger](../../role-based-access-control/built-in-roles.md#application-insights-snapshot-debugger) | Gives the user permission to use Application Insights Snapshot Debugger features. Note that this role is included in neither the Owner nor Contributor roles. |
+| [Application Insights Snapshot Debugger](../../role-based-access-control/built-in-roles.md#application-insights-snapshot-debugger) | Gives the user permission to use Application Insights Snapshot Debugger features. This role is included in neither the Owner nor Contributor roles. |
| Azure Service Deploy Release Management Contributor | Contributor role for services deploying through Azure Service Deploy. |
| [Data Purger](../../role-based-access-control/built-in-roles.md#data-purger) | Special role for purging personal data. See our [guidance for personal data](../logs/personal-data-mgmt.md) for more information. |
| ExpressRoute Administrator | Can create, delete, and manage express routes.|
azure-monitor Sdk Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md
See also: [Regions that require endpoint modification](./custom-endpoints.md#reg
## Connection string examples
-### Minimal valid connection string
-
-`InstrumentationKey=00000000-0000-0000-0000-000000000000;`
-
-In this example, only the Instrumentation Key has been set.
--- Authorization scheme defaults to "ikey" -- Instrumentation Key: 00000000-0000-0000-0000-000000000000-- The regional service URIs are based on the [SDK defaults](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/BASE/src/Microsoft.ApplicationInsights/Extensibility/Implementation/Endpoints/Constants.cs) and will connect to the public global Azure:
- - Ingestion: `https://dc.services.visualstudio.com/`
- - Live metrics: `https://rt.services.visualstudio.com/`
- - Profiler: `https://profiler.monitor.azure.com/`
- - Debugger: `https://snapshot.monitor.azure.com/`
--- ### Connection string with endpoint suffix `InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=ai.contoso.com;`
You can set the connection string in the `applicationinsights.json` configuratio
}
```
-See [connection string configuration](./java-standalone-config.md#connection-string) for more details.
+For more information, see [connection string configuration](./java-standalone-config.md#connection-string).
For Application Insights Java 2.x, you can set the connection string in the `ApplicationInsights.xml` configuration file:
azure-monitor Snapshot Debugger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/snapshot-debugger.md
Access to snapshots is protected by Azure role-based access control (Azure RBAC)
Subscription owners should assign the `Application Insights Snapshot Debugger` role to users who will inspect snapshots. This role can be assigned to individual users or groups by subscription owners for the target Application Insights resource or its resource group or subscription.
-1. Navigate to the Application Insights resource in the Azure portal.
-1. Click **Access control (IAM)**.
-1. Click the **+Add role assignment** button.
-1. Select **Application Insights Snapshot Debugger** from the **Roles** drop-down list.
-1. Search for and enter a name for the user to add.
-1. Click the **Save** button to add the user to the role.
+1. Assign the **Application Insights Snapshot Debugger** role to the user or group.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
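+
+As a hedged sketch (the user and scope are placeholders), the same assignment can be made with Azure PowerShell:
+
+```powershell
+# Grant the Snapshot Debugger role on a single Application Insights resource.
+New-AzRoleAssignment -SignInName "jane@contoso.com" `
+  -RoleDefinitionName "Application Insights Snapshot Debugger" `
+  -Scope "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-monitoring/providers/microsoft.insights/components/appinsights-prod"
+```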
> [!IMPORTANT]
azure-monitor Container Insights Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-troubleshoot.md
During the onboarding or update process, granting the **Monitoring Metrics Publi
You can also manually grant this role from the Azure portal by performing the following steps:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. In the Azure portal, click **All services** found in the upper left-hand corner. In the list of resources, type **Kubernetes**. As you begin typing, the list filters based on your input. Select **Azure Kubernetes**.
-3. In the list of Kubernetes clusters, select one from the list.
-2. From the left-hand menu, click **Access control (IAM)**.
-3. Select **+ Add** to add a role assignment and select the **Monitoring Metrics Publisher** role and under the **Select** box type **AKS** to filter the results on just the clusters service principals defined in the subscription. Select the one from the list that is specific to that cluster.
-4. Select **Save** to finish assigning the role.
+1. Assign the **Monitoring Metrics Publisher** role to the cluster's service principal.
+
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
## Container insights is enabled but not reporting any information
azure-monitor Collect Sccm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/collect-sccm.md
description: This article shows the steps to connect Configuration Manager to wo
Previously updated : 08/02/2021 Last updated : 08/02/2021 # Connect Configuration Manager to Azure Monitor
In the following procedure, you grant the *Contributor* role in your Log Analyti
> You must specify permissions in the Log Analytics workspace for Configuration Manager. Otherwise, you receive an error message when you use the configuration wizard in Configuration Manager. >
-1. In the Azure portal, click **All services** found in the upper left-hand corner. In the list of resources, type **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics**.
+1. Assign the *Contributor* role on the Log Analytics workspace to the Configuration Manager application.
-2. In your list of Log Analytics workspaces, select the workspace to modify.
-
-3. From the left pane, select **Access control (IAM)**.
-
-4. In the Access control (IAM) page, click **Add role assignment** and the **Add role assignment** pane appears.
-
-5. In the **Add role assignment** pane, under the **Role** drop-down list select the **Contributor** role.
-
-6. Under the **Assign access to** drop-down list, select the Configuration Manager application created in AD earlier, and then click **OK**.
+ For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
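The same assignment could also be scripted. A sketch with Azure PowerShell, assuming the display name of the Configuration Manager application created in AD earlier:

```powershell
# Sketch: grant the Configuration Manager app Contributor rights on the workspace
$sp = Get-AzADServicePrincipal -DisplayName "<configmgr-app-name>"
$workspace = Get-AzOperationalInsightsWorkspace -ResourceGroupName "<resource-group>" -Name "<workspace-name>"
New-AzRoleAssignment -ObjectId $sp.Id -RoleDefinitionName "Contributor" -Scope $workspace.ResourceId
```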
## Download and install the agent
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 02/13/2022 Last updated : 02/17/2022
Billing for the commitment tiers is done on a daily basis. [Learn more](https://
<a name="data-size"></a> <a name="free-data-types"></a>
-In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage) and [Operation](/azure/azure-monitor/reference/tables/operation) types. Some solutions have more solution-specific policies about free data ingestion, for instance [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180-days of a Server Assessment. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (10^9 bytes).
+In all pricing tiers, an event's data size is calculated from a string representation of the properties that are stored in Log Analytics for this event, regardless of whether the data is sent from an agent or added during the ingestion process. This includes any [custom fields](custom-fields.md) that are added as data is collected and then stored in Log Analytics. Several properties common to all data types, including some [Log Analytics Standard Properties](./log-standard-columns.md), are excluded in the calculation of the event size. This includes `_ResourceId`, `_SubscriptionId`, `_ItemId`, `_IsBillable`, `_BilledSize` and `Type`. All other properties stored in Log Analytics are included in the calculation of the event size. Some data types are free from data ingestion charges altogether, for example the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity), [Heartbeat](/azure/azure-monitor/reference/tables/heartbeat), [Usage](/azure/azure-monitor/reference/tables/usage) and [Operation](/azure/azure-monitor/reference/tables/operation) types. Some solutions have more solution-specific policies about free data ingestion, for instance [Azure Migrate](https://azure.microsoft.com/pricing/details/azure-migrate/) makes dependency visualization data free for the first 180 days of a Server Assessment. To determine whether an event was excluded from billing for data ingestion, you can use the [_IsBillable](log-standard-columns.md#_isbillable) property as shown [below](#data-volume-for-specific-events). Usage is reported in GB (10^9 bytes).
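For illustration, the billable volume of a table can be checked from PowerShell with a Log Analytics query. A sketch, with the workspace ID as a placeholder; `_BilledSize` and `_IsBillable` are the standard columns described above:

```powershell
# Sketch: summarize billed size (GB) for the Heartbeat table over the last day
$query = @"
Heartbeat
| where TimeGenerated > ago(1d)
| summarize BilledGB = sum(_BilledSize) / 1e9 by _IsBillable
"@
(Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query).Results
```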
Also, some solutions, such as [Microsoft Defender for Cloud](https://azure.microsoft.com/pricing/details/azure-defender/), [Microsoft Sentinel](https://azure.microsoft.com/pricing/details/azure-sentinel/), and [Configuration management](https://azure.microsoft.com/pricing/details/automation/) have their own pricing models.
The easiest way to view your billed usage for a particular Log Analytics workspa
Alternatively, you can start in the [Azure Cost Management + Billing](../../cost-management-billing/costs/quick-acm-cost-analysis.md?toc=%2fazure%2fbilling%2fTOC.json) hub. Here you can use the "Cost analysis" functionality to view your Azure resource expenses. To track your Log Analytics expenses, add a filter by "Resource type" (microsoft.operationalinsights/workspace for Log Analytics and microsoft.operationalinsights/cluster for Log Analytics clusters). For **Group by**, select **Meter category** or **Meter**. Other services, like Microsoft Defender for Cloud and Microsoft Sentinel, also bill their usage against Log Analytics workspace resources. To see the mapping to the service name, select the Table view instead of a chart.
+<a name="export-usage"></a>
+<a name="download-usage"></a>
+ To gain more understanding of your usage, you can [download your usage from the Azure portal](../../cost-management-billing/understand/download-azure-daily-usage.md). For step-by-step instructions, review this [tutorial](../../cost-management-billing/costs/tutorial-export-acm-data.md). In the downloaded spreadsheet, you can see usage per Azure resource (for example, Log Analytics workspace) per day. In this Excel spreadsheet, usage from your Log Analytics workspaces can be found by first filtering on the "Meter Category" column to show "Log Analytics", "Insight and Analytics" (used by some of the legacy pricing tiers), and "Azure Monitor" (used by commitment tier pricing tiers), and then adding a filter on the "Instance ID" column that is "contains workspace" or "contains cluster" (the latter to include Log Analytics Cluster usage). The usage is shown in the "Consumed Quantity" column, and the unit for each entry is shown in the "Unit of Measure" column. For more information, see [Review your individual Azure subscription bill](../../cost-management-billing/understand/review-individual-bill.md).
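If you prefer to script the spreadsheet filtering described above, here is a sketch; the CSV column names are assumed to match the exported headers and can vary by export schema:

```powershell
# Sketch: keep only Log Analytics-related rows from an exported usage CSV
Import-Csv .\usage.csv |
    Where-Object {
        $_.MeterCategory -in 'Log Analytics', 'Insight and Analytics', 'Azure Monitor' -and
        ($_.InstanceId -like '*workspace*' -or $_.InstanceId -like '*cluster*')
    } |
    Select-Object Date, InstanceId, MeterName, ConsumedQuantity, UnitOfMeasure
```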
None of the legacy pricing tiers have regional-based pricing.
## Log Analytics and Microsoft Defender for Cloud <a name="ASC"></a>
-[Microsoft Defender for servers (Defender for Cloud)](../../security-center/index.yml) billing is closely tied to Log Analytics billing. Microsoft Defender for Cloud [bills by the number of monitored services](https://azure.microsoft.com/pricing/details/azure-defender/) and provides 500 MB/server/day data allocation that is applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security) (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus) and the Update and UpdateSummary data types when the Update Management solution isn't running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)). The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
+[Microsoft Defender for Servers (part of Defender for Cloud)](../../security-center/index.yml) billing is closely tied to Log Analytics billing. Microsoft Defender for Servers [bills by the number of monitored services](https://azure.microsoft.com/pricing/details/azure-defender/) and provides 500 MB/server/day data allocation that is applied to the following subset of [security data types](/azure/azure-monitor/reference/tables/tables-category#security) (WindowsEvent, SecurityAlert, SecurityBaseline, SecurityBaselineSummary, SecurityDetection, SecurityEvent, WindowsFirewall, MaliciousIPCommunication, LinuxAuditLog, SysmonEvent, ProtectionStatus) and the Update and UpdateSummary data types when the Update Management solution isn't running on the workspace or solution targeting is enabled ([learn more](../../security-center/security-center-pricing.md#what-data-types-are-included-in-the-500-mb-data-daily-allowance)). The count of monitored servers is calculated on an hourly granularity. The daily data allocation contributions from each monitored server are aggregated at the workspace level. If the workspace is in the legacy Per Node pricing tier, the Microsoft Defender for Cloud and Log Analytics allocations are combined and applied jointly to all billable ingested data.
+
+To view the daily Defender for Servers data allocations for a workspace, [export your usage details](#viewing-log-analytics-usage-on-your-azure-bill), open the usage spreadsheet, and filter the meter category to "Insight and Analytics". You'll then see usage with the meter name "Data Included per Node", which has a zero price per GB. The consumed quantity column shows the number of GBs of Defender for Cloud data allocation for the day. (If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter also includes the data allocations from that Log Analytics pricing tier.)
## Change the data retention period
This query isn't an exact replication of how usage is calculated, but it provide
> [!NOTE] > To use the entitlements that come from purchasing OMS E1 Suite, OMS E2 Suite, or OMS Add-On for System Center, choose the Log Analytics *Per Node* pricing tier.
+<a name="allocations"></a>
+
+## Viewing data allocation benefits
+
+To view data allocation benefits from sources such as [Microsoft Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/), [Microsoft Sentinel benefit for Microsoft 365 E5, A5, F5 and G5 customers](https://azure.microsoft.com/offers/sentinel-microsoft-365-offer/), or the [Sentinel Free Trial](https://azure.microsoft.com/pricing/details/microsoft-sentinel/), you need to [export your usage details](#viewing-log-analytics-usage-on-your-azure-bill). Open the exported usage spreadsheet and filter the "Instance ID" column to your workspace. (To select all of your workspaces in the spreadsheet, filter the Instance ID column to "contains /workspaces/".) Next, filter the ResourceRate column to show only rows where this is equal to zero. Now you will see the data allocations from these various sources.
+
+> [!NOTE]
+> The Defender for Servers 500 MB/server/day data allocation appears in rows with the meter name "Data Included per Node" and the meter category "Insight and Analytics" (the name of a legacy offer still used with this meter). If the workspace is in the legacy Per Node Log Analytics pricing tier, this meter also includes the data allocations from this Log Analytics pricing tier.
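Building on the export described above, the zero-rated allocation rows could be isolated like this. A sketch, again assuming the exported column names:

```powershell
# Sketch: show only data-allocation rows (zero resource rate) for workspaces
Import-Csv .\usage.csv |
    Where-Object { $_.InstanceId -like '*/workspaces/*' -and [double]$_.ResourceRate -eq 0 } |
    Select-Object Date, MeterName, ConsumedQuantity
```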
+ ## Late-arriving data Situations can arise where data is ingested with old timestamps. For example, if an agent can't communicate to Log Analytics because of a connectivity issue or when a host has an incorrect time date/time. This can manifest itself by an apparent discrepancy between the ingested data reported by the **Usage** data type and a query summing **_BilledSize** over the raw data for a particular day specified by **TimeGenerated**, the timestamp when the event was generated.
azure-portal Azure Portal Add Remove Sort Favorites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-add-remove-sort-favorites.md
Title: Add, remove, and arrange favorites in Azure portal description: Learn how to add or remove items from the favorites list and rearrange the order of items keywords: favorites,portal Previously updated : 03/16/2021 Last updated : 02/17/2022 # Add, remove, and rearrange favorites
-Add or remove items from your **Favorites** list so that you can quickly go to the services you use most often. We already added some common services to your **Favorites** list, but youΓÇÖll likely want to customize it. You're the only one who sees the changes you make to **Favorites**.
+Add or remove items from your **Favorites** list in the Azure portal so that you can quickly go to the services you use most often. We've already added some common services to your **Favorites** list, but you'll likely want to customize it. You're the only one who sees the changes you make to **Favorites**.
## Add a favorite Items that are listed under **Favorites** are selected from **All services**. Hover over a service name to display information and resources related to the service. A filled star icon ![Filled star icon](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-graystar.png) next to the service name indicates that the item appears on the **Favorites** list. Select the star icon to add a service to the **Favorites** list.
-### Add Cost Management + Billing to Favorites
+In this example, we'll add Cost Management + Billing to the **Favorites** list.
1. Select **All services** from the Azure portal menu.
- ![Screenshot showing All services selected](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-new-all-services.png)
+ :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-new-all-services.png" alt-text="Screenshot showing All services in the Azure portal menu.":::
1. Enter the word "cost" in the search field. Services that have "cost" in the title or that have "cost" as a keyword are shown.
- ![Screenshot showing search in All services](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-find-service.png)
+ :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-find-service.png" alt-text="Screenshot showing a search in All services in the Azure portal.":::
1. Hover over the service name to display the **Cost Management + Billing** information card. Select the star icon.
- ![Screenshot showing star next to cost management + billing selected](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-add.png)
+ :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-add.png" alt-text="Screenshot showing the star icon to add a service to Favorites in the Azure portal.":::
1. **Cost Management + Billing** is now added as the last item in your **Favorites** list.
You can now remove an item directly from the **Favorites** list.
1. In the **Favorites** section of the portal menu, hover over the name of the service you want to remove.
- ![Screenshot showing hover behavior in Favorites](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-remove.png)
+ :::image type="content" source="media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-remove.png" alt-text="Screenshot showing how to remove a service from Favorites in the Azure portal.":::
2. On the information card, select the star so that it changes from filled to unfilled. The service is removed from the **Favorites** list. ## Rearrange favorites
-You can change the order that your favorite services are listed. Just drag and drop the menu item to another location under **Favorites**.
-
-### Move Cost Management + Billing to the top of Favorites
-
-1. Select and hold the **Cost Management + Billing** entry on the **Favorites** list.
-
- ![Screenshot showing cost management + billing selected](./media/azure-portal-add-remove-sort-favorites/azure-portal-favorites-sort.png)
-
-1. While continuing to hold, drag the item to the top of **Favorites** and then release.
+You can change the order in which your favorite services are listed. Just select an item, then drag and drop it to another location under **Favorites**.
## Next steps
-* To create a project-focused workspace, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md)
-* Discover more how-to's in the [Azure portal how-to video series](https://www.youtube.com/playlist?list=PLLasX02E8BPBKgXP4oflOL29TtqTzwhxR)
+- To create a project-focused workspace, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
+- Explore the [Azure portal how-to video series](https://www.youtube.com/playlist?list=PLLasX02E8BPBKgXP4oflOL29TtqTzwhxR).
azure-resource-manager Deploy To Azure Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-to-azure-button.md
Title: Deploy to Azure button
-description: Use button to deploy Azure Resource Manager templates from a GitHub repository.
+description: Use button to deploy remote Azure Resource Manager templates.
Previously updated : 12/03/2021 Last updated : 02/15/2022
-# Use a deployment button to deploy templates from GitHub repository
+# Use a deployment button to deploy remote templates
-This article describes how to use the **Deploy to Azure** button to deploy ARM JSON templates from a GitHub repository. You can add the button directly to the _README.md_ file in your GitHub repository. Or, you can add the button to a web page that references the repository. This method doesn't support [Bicep files](../bicep/overview.md).
+This article describes how to use the **Deploy to Azure** button to deploy remote ARM JSON templates from a GitHub repository or an Azure storage account. You can add the button directly to the _README.md_ file in your GitHub repository. Or, you can add the button to a web page that references the repository. This method doesn't support deploying remote [Bicep files](../bicep/overview.md).
The deployment scope is determined by the template schema. For more information, see:
The image appears as:
## Create URL for deploying template
-To create the URL for your template, start with the raw URL to the template in your repo. To see the raw URL, select **Raw**.
+This section shows how to get the URL for a template stored in GitHub or in an Azure storage account, and how to format it.
+
+### Template stored in GitHub
+
+To create the URL for your template, start with the raw URL to the template in your GitHub repo. To see the raw URL, select **Raw**.
:::image type="content" source="./media/deploy-to-azure-button/select-raw.png" alt-text="select Raw":::
The format of the URL is:
https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json ```
-Then, convert the URL to a URL-encoded value. You can use an online encoder or run a command. The following PowerShell example shows how to URL encode a value.
+
+If you're using [Git with Azure Repos](/azure/devops/repos/git/) instead of a GitHub repo, you can still use the **Deploy to Azure** button. Make sure your repo is public. Use the [Items operation](/rest/api/azure/devops/git/items/get) to get the template. Your request should be in the following format:
+
+```http
+https://dev.azure.com/{organization-name}/{project-name}/_apis/git/repositories/{repository-name}/items?scopePath={url-encoded-path}&api-version=6.0
+```
+
+### Template stored in an Azure storage account
+
+The format of the URL for a template stored in a public container is:
+
+```html
+https://{storage-account-name}.blob.core.windows.net/{container-name}/{template-file-name}
+```
+
+For example:
+
+```html
+https://demostorage0215.blob.core.windows.net/democontainer/azuredeploy.json
+```
+
+You can secure the template with a SAS token. For more information, see [How to deploy private ARM template with SAS token](./secure-template-with-sas-token.md). The following URL is an example with a SAS token:
+
+```html
+https://demostorage0215.blob.core.windows.net/privatecontainer/azuredeploy.json?sv=2019-07-07&sr=b&sig=rnI8%2FvKoCHmvmP7XvfspfyzdHjtN4GPsSqB8qMI9FAo%3D&se=2022-02-16T17%3A47%3A46Z&sp=r
+```
+
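A SAS token like the one above can be generated with Azure PowerShell. A sketch, reusing the example account and container names; the account key is a placeholder:

```powershell
# Sketch: create a read-only SAS token for the template blob, valid for 8 hours
$ctx = New-AzStorageContext -StorageAccountName "demostorage0215" -StorageAccountKey "<account-key>"
New-AzStorageBlobSASToken -Container "privatecontainer" -Blob "azuredeploy.json" `
    -Permission r -ExpiryTime (Get-Date).AddHours(8) -Context $ctx
```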
+### Format the URL
+
+Once you have the URL, you need to convert it to a URL-encoded value. You can use an online encoder or run a command. The following PowerShell example shows how to URL encode a value.
```powershell $url = "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json"
https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.github
You have your full URL for the link.
-If you're using [Git with Azure Repos](/azure/devops/repos/git/) instead of a GitHub repo, you can still use the **Deploy to Azure** button. Make sure your repo is public. Use the [Items operation](/rest/api/azure/devops/git/items/get) to get the template. Your request should be in the following format:
-
-```http
-https://dev.azure.com/{organization-name}/{project-name}/_apis/git/repositories/{repository-name}/items?scopePath={url-encoded-path}&api-version=6.0
-```
-
-Encode this request URL.
- ## Create Deploy to Azure button Finally, put the link and image together.
azure-signalr Howto Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/howto-custom-domain.md
+
+ Title: Configure a custom domain for Azure SignalR Service
+
+description: How to configure a custom domain for Azure SignalR Service
++++ Last updated : 02/15/2022+++
+# Configure a custom domain for Azure SignalR Service
+
+In addition to the default domain provided by Azure SignalR Service, you can also add custom domains.
+
+## Prerequisites
+
+* The resource must be in the Premium tier
+* A custom certificate matching the custom domain, stored in Azure Key Vault
+
+## Add a custom certificate
+
+Before you can add a custom domain, you need to add a matching custom certificate first. A custom certificate is a sub resource of your Azure SignalR Service. It references a certificate in your Azure Key Vault. For security and compliance reasons, Azure SignalR Service doesn't permanently store your certificate. Instead, it fetches it from your Key Vault on the fly and keeps it in memory.
+
+### Step 1: Grant your Azure SignalR Service resource access to Key Vault
+
+Azure SignalR Service uses a managed identity to access your Key Vault. Before it can do so, the identity must be granted permissions.
+
+1. In the Azure portal, go to your Azure SignalR Service resource.
+1. In the menu pane, select **Identity**.
+1. Turn on either **System assigned** or **User assigned** identity. Click **Save**.
+
+ :::image type="content" alt-text="Screenshot of enabling managed identity." source="media\howto-custom-domain\portal-identity.png" :::
+
+1. Go to your Key Vault resource.
+1. In the menu pane, select **Access configuration**. Click **Go to access policies**.
+1. Click **Create**. Select **Secret Get** permission and **Certificate Get** permission. Click **Next**.
+
+ :::image type="content" alt-text="Screenshot of permissions selection in Key Vault." source="media\howto-custom-domain\portal-key-vault-permissions.png" :::
+
+1. Search for the Azure SignalR Service resource name or the user assigned identity name. Click **Next**.
+
+ :::image type="content" alt-text="Screenshot of principal selection in Key Vault." source="media\howto-custom-domain\portal-key-vault-principal.png" :::
+
+1. Skip **Application (optional)**. Click **Next**.
+1. In **Review + create**, click **Create**.
+
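If you'd rather script this step, a sketch with Azure PowerShell; the vault name and the managed identity's principal ID (shown on the **Identity** page) are placeholders:

```powershell
# Sketch: allow the SignalR managed identity to read secrets and certificates
Set-AzKeyVaultAccessPolicy -VaultName "<key-vault-name>" `
    -ObjectId "<signalr-managed-identity-principal-id>" `
    -PermissionsToSecrets get -PermissionsToCertificates get
```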
+### Step 2: Create a custom certificate
+
+1. In the Azure portal, go to your Azure SignalR Service resource.
+1. In the menu pane, select **Custom domain**.
+1. Under **Custom certificate**, click **Add**.
+
+ :::image type="content" alt-text="Screenshot of custom certificate management." source="media\howto-custom-domain\portal-custom-certificate-management.png" :::
+
+1. Fill in a name for the custom certificate.
+1. Click **Select from your Key Vault** to choose a Key Vault certificate. After you make a selection, the **Key Vault Base URI** and **Key Vault Secret Name** fields are filled in automatically. Alternatively, you can also fill in these fields manually.
+1. Optionally, you can specify a **Key Vault Secret Version** if you want to pin the certificate to a specific version.
+1. Click **Add**.
+
+ :::image type="content" alt-text="Screenshot of adding a custom certificate." source="media\howto-custom-domain\portal-custom-certificate-add.png" :::
+
+Azure SignalR Service will then fetch the certificate and validate its content. If everything is good, the **Provisioning State** will be **Succeeded**.
+
+ :::image type="content" alt-text="Screenshot of an added custom certificate." source="media\howto-custom-domain\portal-custom-certificate-added.png" :::
+
+## Create a custom domain CNAME
+
+To validate the ownership of your custom domain, you need to create a CNAME record for the custom domain and point it to the default domain of Azure SignalR Service.
+
+For example, if your default domain is `contoso.service.signalr.net`, and your custom domain is `contoso.example.com`, you need to create a CNAME record on `example.com` like:
+
+```
+contoso.example.com. 0 IN CNAME contoso.service.signalr.net.
+```
+
+If you're using Azure DNS Zone, see [manage DNS records](../dns/dns-operations-recordsets-portal.md) for how to add a CNAME record.
+
+ :::image type="content" alt-text="Screenshot of adding a CNAME record in Azure DNS Zone." source="media\howto-custom-domain\portal-dns-cname.png" :::
+
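With Azure DNS, the record above could also be created from PowerShell. A sketch using the example names:

```powershell
# Sketch: add a CNAME record pointing the custom domain at the default SignalR domain
New-AzDnsRecordSet -ResourceGroupName "<resource-group>" -ZoneName "example.com" `
    -Name "contoso" -RecordType CNAME -Ttl 3600 `
    -DnsRecords (New-AzDnsRecordConfig -Cname "contoso.service.signalr.net")
```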
+If you're using another DNS provider, follow the provider's guide to create a CNAME record.
+
+## Add a custom domain
+
+A custom domain is another sub resource of your Azure SignalR Service. It contains all configurations for a custom domain.
+
+1. In the Azure portal, go to your Azure SignalR Service resource.
+1. In the menu pane, select **Custom domain**.
+1. Under **Custom domain**, click **Add**.
+
+ :::image type="content" alt-text="Screenshot of custom domain management." source="media\howto-custom-domain\portal-custom-domain-management.png" :::
+
+1. Fill in a name for the custom domain. It's the sub resource name.
+1. Fill in the domain name. It's the full domain name of your custom domain, for example, `contoso.example.com`.
+1. Select a custom certificate that applies to this custom domain.
+1. Click **Add**.
+
+ :::image type="content" alt-text="Screenshot of adding a custom domain." source="media\howto-custom-domain\portal-custom-domain-add.png" :::
+
+## Verify a custom domain
+
+You can now access your Azure SignalR Service endpoint via the custom domain. To verify it, you can access the health API.
+
+Here's an example using cURL:
+
+#### [PowerShell](#tab/azure-powershell)
+
+```powershell
+PS C:\> curl.exe -v https://contoso.example.com/api/health
+...
+> GET /api/health HTTP/1.1
+> Host: contoso.example.com
+
+< HTTP/1.1 200 OK
+...
+PS C:\>
+```
+
+#### [Bash](#tab/azure-bash)
+
+```bash
+$ curl -vvv https://contoso.example.com/api/health
+...
+* SSL certificate verify ok.
+...
+> GET /api/health HTTP/2
+> Host: contoso.example.com
+...
+< HTTP/2 200
+...
+```
+
+--
+
+It should return a `200` status code without any certificate error.
+
+## Next steps
+* [How to enable managed identity for Azure SignalR Service](howto-use-managed-identity.md)
+* [Get started with Key Vault certificates](../key-vault/certificates/certificate-scenarios.md)
+* [What is Azure DNS](../dns/dns-overview.md)
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
Last updated 11/13/2019
+ms.devlang: csharp
# Azure SignalR Service authentication
azure-sql Always Encrypted Azure Key Vault Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/always-encrypted-azure-key-vault-configure.md
azure-sql Application Authentication Get Client Id Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/application-authentication-get-client-id-keys.md
azure-sql Auto Failover Group Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-configure.md
azure-sql Connect Query Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/connect-query-java.md
+ms.devlang: java
Last updated 06/26/2020
azure-sql Database Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-copy.md
azure-sql Database Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/database-import.md
sqlpackage.exe /a:Import /sf:testExport.bacpac /tdn:NewDacFX /tsn:apptestserver.
> [A SQL Managed Instance](../managed-instance/sql-managed-instance-paas-overview.md) does not currently support migrating a database into an instance database from a BACPAC file using Azure PowerShell. To import into a SQL Managed Instance, use SQL Server Management Studio or SQLPackage. > [!NOTE]
-> The machines processing import/export requests submitted through portal or Powershell need to store the bacpac file as well as temporary files generated by Data-Tier Application Framework (DacFX). The disk space required varies significantly among DBs with same size and can take up to 3 times of the database size. Machines running the import/export request only have 450GB local disk space. As result, some requests may fail with "There is not enough space on the disk" error. In this case, the workaround is to run sqlpackage.exe on a machine with enough local disk space. When importing/exporting databases larger than 150GB, use SqlPackage to avoid this issue.
+> The machines processing import/export requests submitted through the portal or PowerShell need to store the bacpac file as well as temporary files generated by the Data-Tier Application Framework (DacFX). The disk space required varies significantly among databases of the same size and can be up to three times the database size. Machines running the import/export request only have 450 GB of local disk space. As a result, some requests may fail with the error "There is not enough space on the disk". In this case, the workaround is to run sqlpackage.exe on a machine with enough local disk space. When importing/exporting databases larger than 150 GB, use SqlPackage to avoid this issue.
# [PowerShell](#tab/azure-powershell)
az sql db import --resource-group "<resourceGroup>" --server "<server>" --name "
## Cancel the import request Use the [Database Operations - Cancel API](/rest/api/sql/databaseoperations/cancel)
-or the Powershell [Stop-AzSqlDatabaseActivity command](/powershell/module/az.sql/Stop-AzSqlDatabaseActivity), here an example of powershell command.
+or the PowerShell [Stop-AzSqlDatabaseActivity command](/powershell/module/az.sql/Stop-AzSqlDatabaseActivity). Here's an example of the PowerShell command:
```cmd Stop-AzSqlDatabaseActivity -ResourceGroupName $ResourceGroupName -ServerName $ServerName -DatabaseName $DatabaseName -OperationId $Operation.OperationId
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
azure-sql Ledger Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/ledger-overview.md
Ledger provides a solution for these networks. Participants can verify the integ
### Trusted off-chain storage for blockchain
-When a blockchain network is necessary for a multiple-party business process, the ability query the data on the blockchain without sacrificing performance is a challenge.
+When a blockchain network is necessary for a multiple-party business process, the ability to query the data on the blockchain without sacrificing performance is a challenge.
Typical patterns for solving this problem involve replicating data from the blockchain to an off-chain store, such as a database. But after the data is replicated to the database from the blockchain, the data integrity guarantees that a blockchain offer is lost. Ledger provides data integrity for off-chain storage of blockchain networks, which helps ensure complete data trust through the entire system.
azure-sql Logical Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/logical-servers.md
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/long-term-backup-retention-configure.md
azure-sql Single Database Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/single-database-manage.md
azure-sql Threat Detection Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/threat-detection-configure.md
Previously updated : 12/01/2020 Last updated : 02/16/2022 # Configure Advanced Threat Protection for Azure SQL Database [!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
You can receive notifications about the detected threats via [email notification
## Set up Advanced Threat Protection in the Azure portal 1. Sign into the [Azure portal](https://portal.azure.com).
-2. Navigate to the configuration page of the server you want to protect. In the security settings, select **Defender for Cloud**.
-3. On the **Microsoft Defender for SQL** configuration page:
+2. Navigate to the configuration page of the [server](logical-servers.md) you want to protect. In the security settings, select **Microsoft Defender for Cloud**.
+3. On the **Microsoft Defender for Cloud** configuration page:
- - Enable **Microsoft Defender for SQL** on the server.
- - In **Advanced Threat Protection Settings**, provide the list of emails to receive security alerts upon detection of anomalous database activities in the **Send alerts to** text box.
+ 1. If Microsoft Defender for SQL hasn't yet been enabled, select **Enable Microsoft Defender for SQL**.
- :::image type="content" source="media/azure-defender-for-sql/set-up-advanced-threat-protection.png" alt-text="set up advanced threat protection":::
-
+ 1. Select **Configure**.
+
+ :::image type="content" source="media/azure-defender-for-sql/enable-microsoft-defender-sql.png" alt-text="Enable Microsoft Defender for SQL." lightbox="media/azure-defender-for-sql/enable-microsoft-defender-sql.png":::
+
+ 1. Under **ADVANCED THREAT PROTECTION SETTINGS**, select **Add your contact details to the subscription's email settings in Defender for Cloud**.
+
+ :::image type="content" source="media/azure-defender-for-sql/advanced-threat-protection-add-contact-details.png" alt-text="Select link to proceed to advanced threat protection settings." lightbox="media/azure-defender-for-sql/advanced-threat-protection-add-contact-details.png":::
+
+ 1. Provide the list of emails to receive notifications upon detection of anomalous database activities in the **Additional email addresses (separated by commas)** text box.
+ 1. Optionally customize the severity of alerts that will trigger notifications to be sent under **Notification types**.
+ 1. Select **Save**.
+
+ :::image type="content" source="media/azure-defender-for-sql/advanced-threat-protection-configure-emails.png" alt-text="Enter emails for Advanced Threat Protection notifications." lightbox="media/azure-defender-for-sql/advanced-threat-protection-configure-emails.png":::
+
## Set up Advanced Threat Protection using PowerShell For a script example, see [Configure auditing and Advanced Threat Protection using PowerShell](scripts/auditing-threat-detection-powershell-configure.md). ## Next steps -- Learn more about [Advanced Threat Protection](threat-detection-overview.md).-- Learn more about [Advanced Threat Protection in SQL Managed Instance](../managed-instance/threat-detection-configure.md). -- Learn more about [Microsoft Defender for SQL](azure-defender-for-sql.md).-- Learn more about [auditing](../../azure-sql/database/auditing-overview.md)-- Learn more about [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)
+Learn more about Advanced Threat Protection and Microsoft Defender for SQL in the following articles:
+
+- [Advanced Threat Protection](threat-detection-overview.md)
+- [Advanced Threat Protection in SQL Managed Instance](../managed-instance/threat-detection-configure.md)
+- [Microsoft Defender for SQL](azure-defender-for-sql.md)
+- [Auditing for Azure SQL Database and Azure Synapse Analytics](auditing-overview.md)
+- [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)
- For more information on pricing, see the [SQL Database pricing page](https://azure.microsoft.com/pricing/details/sql-database/)
azure-sql Transparent Data Encryption Byok Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/transparent-data-encryption-byok-configure.md
azure-sql Api References Create Manage Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/api-references-create-manage-instance.md
description: Learn about creating and configuring managed instances of Azure SQL
azure-sql Failover Group Add Instance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/failover-group-add-instance-tutorial.md
Add managed instances of Azure SQL Managed Instance to a failover group. In this
> - Create a secondary managed instance as part of a [failover group](../database/auto-failover-group-overview.md). > - Test failover.
+ There are multiple ways to establish connectivity between managed instances in different virtual networks, including:
+ * [Azure ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md)
+ * [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md)
+ * VPN gateways
+
+This tutorial provides steps for creating and connecting VPN gateways. If you prefer to use ExpressRoute or VNet peering, replace the gateway steps accordingly, or
+skip ahead to [Step 7](#create-a-failover-group) if you already have ExpressRoute or global VNet peering configured.
++ > [!NOTE] > - When going through this tutorial, ensure you are configuring your resources with the [prerequisites for setting up failover groups for SQL Managed Instance](../database/auto-failover-group-overview.md#enabling-geo-replication-between-managed-instances-and-their-vnets).
- > - Creating a managed instance can take a significant amount of time. As a result, this tutorial could take several hours to complete. For more information on provisioning times, see [SQL Managed Instance management operations](sql-managed-instance-paas-overview.md#management-operations).
- > - Managed instances participating in a failover group require [Azure ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md), global VNet peering, or two connected VPN gateways. This tutorial provides steps for creating and connecting the VPN gateways. Skip these steps if you already have ExpressRoute configured.
-
+ > - Creating a managed instance can take a significant amount of time. As a result, this tutorial may take several hours to complete. For more information on provisioning times, see [SQL Managed Instance management operations](sql-managed-instance-paas-overview.md#management-operations).
## Prerequisites
This portion of the tutorial uses the following PowerShell cmdlets:
## Create a primary gateway
-For two managed instances to participate in a failover group, there must be either ExpressRoute or a gateway configured between the virtual networks of the two managed instances to allow network communication. If you choose to configure [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md) instead of connecting two VPN gateways, skip ahead to [Step 7](#create-a-failover-group).
-
-This article provides steps to create the two VPN gateways and connect them, but you can skip ahead to creating the failover group if you have configured ExpressRoute instead.
- > [!NOTE] > The SKU of the gateway affects throughput performance. This tutorial deploys a gateway with the most basic SKU (`HwGw1`). Deploy a higher SKU (example: `VpnGw3`) to achieve higher throughput. For all available options, see [Gateway SKUs](../../vpn-gateway/vpn-gateway-about-vpngateways.md#benchmark)
azure-sql Long Term Backup Retention Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/long-term-backup-retention-configure.md
backup Backup Azure Afs Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-afs-automation.md
This article explains how to:
[!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] > [!NOTE]
-> Azure Powershell currently doesn't support backup policies with hourly schedule. Please use Azure Portal to leverage this feature. [Learn more](manage-afs-backup.md#create-a-new-policy)
+> Azure PowerShell currently doesn't support backup policies with an hourly schedule. Use the Azure portal instead. [Learn more](manage-afs-backup.md#create-a-new-policy)
Set up PowerShell as follows:
backup Backup Azure Arm Restore Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-arm-restore-vms.md
If you don't have permissions, you can [restore a disk](#restore-disks), and the
As one of the [restore options](#restore-options), you can create a VM quickly with basic settings from a restore point.
-1. In **Restore Virtual Machine** > **Create new** > **Restore Type**, select **Create a virtual machine**.
+1. In **Restore Virtual Machine** > **Create new** > **Restore Type**, select **Create new virtual machine**.
1. In **Virtual machine name**, specify a VM that doesn't exist in the subscription. 1. In **Resource group**, select an existing resource group for the new VM, or create a new one with a globally unique name. If you assign a name that already exists, Azure assigns the group the same name as the VM. 1. In **Virtual network**, select the VNet in which the VM will be placed. All VNets associated with the subscription in the same location as the vault, which is active and not attached with any affinity group, are displayed. Select the subnet.
As one of the [restore options](#restore-options), you can create a disk from a
- [Attach restored disks](../virtual-machines/windows/attach-managed-disk-portal.md) to an existing VM. - [Create a new VM](./backup-azure-vms-automation.md#create-a-vm-from-restored-disks) from the restored disks using PowerShell.
-1. In **Restore configuration** > **Create new** > **Restore Type**, select **Restore disks**.
+1. In **Restore configuration** > **Create new** > **Restore Type**, select **Restore disks**.
1. In **Resource group**, select an existing resource group for the restored disks, or create a new one with a globally unique name. 1. In **Staging location**, specify the storage account to which to copy the VHDs. [Learn more](#storage-accounts).
When your virtual machine uses managed disks and you select the **Create virtual
While you restore disks for a Managed VM from a Vault-Standard recovery point, it restores the Managed disk and Azure Resource Manager (ARM) templates, along with the VHD files of the disks in staging location. If you restore disks from an Instant recovery point, it restores the Managed disks and ARM templates only. >[!Note]
->For restoring disk from a Vault-Standard recovery point that is/was greater than 4 TB, Azure Backup doesn't restore the VHD files.
+>- For restoring disk from a Vault-Standard recovery point that is/was greater than 4 TB, Azure Backup doesn't restore the VHD files.
+>- For information on managed/premium disk performance after restored via Azure Backup, see the [Latency](../virtual-machines/premium-storage-performance.md#latency) section.
### Use templates to customize a restored VM
There are a few things to note after restoring a VM:
## Next steps - If you experience difficulties during the restore process, [review](backup-azure-vms-troubleshoot.md#restore) common issues and errors.-- After the VM is restored, learn about [managing virtual machines](backup-azure-manage-vms.md)
+- After the VM is restored, learn about [managing virtual machines](backup-azure-manage-vms.md)
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
To restore files or folders from the recovery point, go to the virtual machine a
## Step 2: Ensure the machine meets the requirements before executing the script
-After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you are planning to execute the script, should not have any of the following unsupported configurations. **If it does, then choose an alternate machine preferably from the same region that meets the requirements**.
+After the script is successfully downloaded, make sure you have the right machine to execute this script. The VM where you're planning to execute the script should not have any of the following unsupported configurations. **If it does, choose an alternate machine that meets the requirements**.
### Dynamic disks
In Linux, the OS of the computer used to restore files must support the file sys
| SLES | 12 and above | | openSUSE | 42.2 and above |
-> [!NOTE]
-> We've found some issues in running the file recovery script on machines with SLES 12 SP4 OS and we're investigating with the SLES team.
-> Currently, running the file recovery script is working on machines with SLES 12 SP2 and SP3 OS versions.
->
- The script also requires Python and bash components to execute and connect securely to the recovery point. |Component | Version |
If the file recovery process hangs after you run the file-restore script (for ex
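As a convenience, the updated timeout values shown in the registry snippet below could be applied with PowerShell. A sketch; run it from an elevated session, and note that the registry paths are the ones listed below:

```powershell
# Sketch: raise the iSCSI timeout values used during file recovery
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\Disk" -Name TimeOutValue -Value 2400
$params = "HKLM:\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters"
Set-ItemProperty -Path $params -Name SrbTimeoutDelta -Value 2400
Set-ItemProperty -Path $params -Name EnableNOPOut -Value 1
Set-ItemProperty -Path $params -Name MaxRequestHoldTime -Value 2400
```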
![Registry key changes](media/backup-azure-restore-files-from-vm/iscsi-reg-key-changes.png)

```registry
-- HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue - change this from 60 to 1200 secs.
-- HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters\SrbTimeoutDelta - change this from 15 to 1200 secs.
+- HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk\TimeOutValue - change this from 60 to 2400 secs.
+- HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters\SrbTimeoutDelta - change this from 15 to 2400 secs.
- HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters\EnableNOPOut - change this from 0 to 1
-- HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters\MaxRequestHoldTime - change this from 60 to 1200 secs.
+- HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters\MaxRequestHoldTime - change this from 60 to 2400 secs.
```

### For Linux
backup Backup Azure Sql Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-automation.md
Azure Backup can restore SQL Server databases that are running on Azure VMs as f
Check the prerequisites mentioned [here](restore-sql-database-azure-vm.md#restore-prerequisites) before restoring SQL DBs. > [!WARNING]
-> Due to a security issue related to RBAC, we had to introduce a breaking change in the restore commands for SQL DB via Powershell. Please upgrade to Az 6.0.0 version or above for the proper restore commands to be submitted via Powershell. The latest PS commands are provided below.
+> Due to a security issue related to RBAC, we had to introduce a breaking change in the restore commands for SQL DB via PowerShell. Please upgrade to Az 6.0.0 version or above for the proper restore commands to be submitted via PowerShell. The latest PS commands are provided below.
First fetch the relevant backed up SQL DB using the [Get-AzRecoveryServicesBackupItem](/powershell/module/az.recoveryservices/get-azrecoveryservicesbackupitem) PowerShell cmdlet.
PointInTime : 1/1/0001 12:00:00 AM
#### Alternate workload restore to a vault in secondary region > [!IMPORTANT]
-> Support for secondary region restores for SQL from Powershell is available from Az 6.0.0
+> Support for secondary region restores for SQL from PowerShell is available from Az 6.0.0
If you have enabled cross region restore, then the recovery points will be replicated to the secondary, paired region as well. Then, you can fetch those recovery points and trigger a restore to a machine, present in that paired region. As with the normal restore, the target machine should be registered to the target vault in the secondary region. The following sequence of steps should clarify the end-to-end process.
backup Restore Blobs Storage Account Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-blobs-storage-account-ps.md
Title: Restore Azure blobs via Azure Powershell
-description: Learn how to restore Azure blobs to any point-in-time using Azure Powershell.
+ Title: Restore Azure blobs via Azure PowerShell
+description: Learn how to restore Azure blobs to any point-in-time using Azure PowerShell.
Last updated 05/05/2021
-# Restore Azure blobs to point-in-time using Azure Powershell
+# Restore Azure blobs to point-in-time using Azure PowerShell
This article describes how to restore [blobs](blob-backup-overview.md) to any point-in-time using Azure Backup.
batch Batch Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-customer-managed-key.md
Title: Configure customer-managed keys for your Azure Batch account with Azure K
description: Learn how to encrypt Batch data using customer-managed keys. Last updated 02/11/2021
+ms.devlang: csharp
batch Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/disk-encryption.md
Title: Create a pool with disk encryption enabled
description: Learn how to use disk encryption configuration to encrypt nodes with a platform-managed key. Last updated 04/16/2021
+ms.devlang: csharp
cloud-services-extended-support Deploy Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/deploy-template.md
This tutorial explains how to create a Cloud Service (extended support) deployme
## Deploy a Cloud Service (extended support) > [!NOTE]
-> An easier and faster way of generating your ARM template and parameter file is via the [Azure portal](https://portal.azure.com). You can [download the generated ARM template](generate-template-portal.md) via the portal to create your Cloud Service via Powershell
+> An easier and faster way of generating your ARM template and parameter file is via the [Azure portal](https://portal.azure.com). You can [download the generated ARM template](generate-template-portal.md) via the portal to create your Cloud Service via PowerShell
1. Create virtual network. The name of the virtual network must match the references in the Service Configuration (.cscfg) file. If using an existing virtual network, omit this section from the ARM template.
cloud-services-extended-support Post Migration Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/post-migration-changes.md
Customers need to update their tooling and automation to start using the new API
## Changes to Certificate Management Post Migration
-As a standard practice to manage your certificates, all the valid .pfx certificate files should be added to certificate store in Key Vault and update would work perfectly fine via any client - Portal, Powershell or Rest API.
+As a standard practice for managing your certificates, add all the valid .pfx certificate files to the certificate store in Key Vault. Updates will then work via any client: the Azure portal, PowerShell, or the REST API.
Currently, the Azure portal validates whether all the required certificates are uploaded to the certificate store in Key Vault, and warns if a certificate is not found. However, if you're planning to use certificates as secrets, these certificates can't be validated by their thumbprint, and any update operation that involves adding secrets will fail via the portal. Customers are recommended to use PowerShell or the REST API for updates involving secrets.
cloudfoundry Create Cloud Foundry On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/create-cloud-foundry-on-azure.md
editor: ruyakubu
ms.assetid: Last updated 09/13/2018 multiple
cognitive-services Luis Reference Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-reference-regions.md
# Authoring and publishing regions and the associated keys
-LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app to more than one region, you need at least one key per region.
-
+LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app to more than one region, you need at least one prediction key per region.
<a name="luis-website"></a>
LUIS authoring regions are supported by the LUIS portal. To publish a LUIS app t
Authoring regions are the regions where the application gets created and the training take place.
-LUIS has the following authoring regions available:
+LUIS has the following authoring regions available with [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md):
* Australia east * West Europe * West US * Switzerland north -
-LUIS has one portal you can use regardless of region, [www.luis.ai](https://www.luis.ai). You must still author and publish in the same region. Authoring regions have [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md).
+LUIS has one portal you can use regardless of region, [www.luis.ai](https://www.luis.ai).
<a name="regions-and-azure-resources"></a> ## Publishing regions and Azure resources
-Publishing regions are the regions where the application will be used in runtime. To use the application in a publishing region, you must create a resource in this region and publish your application to it.
+Publishing regions are the regions where the application will be used in runtime. To use the application in a publishing region, you must create a resource in this region and assign your application to it. For example, if you create an app with the *westus* authoring region and publish it to the *eastus* and *brazilsouth* regions, the app will run in those two regions.
-The app is published to all regions associated with the LUIS resources added in the LUIS portal. For example, for an app created on [www.luis.ai][www.luis.ai], if you create a LUIS or Cognitive Service resource in **westus** and [add it to the app as a resource](luis-how-to-azure-subscription.md), the app is published in that region.
- ## Public apps
-A public app is published in all regions so that a user with a region-based LUIS resource key can access the app in whichever region is associated with their resource key.
+A public app is published in all regions so that a user with a supported prediction resource can access the app in all regions.
<a name="publishing-regions"></a> ## Publishing regions are tied to authoring regions
-When you first create our LUIS application, you are required to choose an [authoring region](#luis-authoring-regions). To use the application in runtime, you are required to create a publishing region.
-
-The authoring region app can only be published to a corresponding publish region. If your app is currently in the wrong authoring region, export the app, and import it into the correct authoring region for your publishing region.
+When you first create your LUIS application, you are required to choose an [authoring region](#luis-authoring-regions). To use the application at runtime, you are required to create a resource in a publishing region.
-> [!NOTE]
-> LUIS apps created on https://www.luis.ai can now be published to all endpoints including the [European](#publishing-to-europe) and [Australian](#publishing-to-australia) regions.
+Your app can only be published to a publishing region that corresponds to its authoring region; the pairings are listed in the tables below. If your app is currently in the wrong authoring region, export the app, and import it into the correct authoring region to match the required publishing region.
## Single data residency
-Regions that fall under single data residency are the regions where data do not leave the boundaries of the region.
-
-The following publishing regions do not have a failover region:
--
-* Brazil South
-* Southeast Asia
+Single data residency means that the data does not leave the boundaries of the region.
> [!Note]
-> * Make sure to set `log=false` for [V3 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a91e54c9db63d589f433) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the publishing region.
-> * If `log=true`, data is returned to the authoring region for active learning even if the publishing region is one of the single data residnecy regions.
-
+> * Make sure to set `log=false` for [V3 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a91e54c9db63d589f433) to disable active learning. By default this value is `false`, to ensure that data does not leave the boundaries of the runtime region.
+> * If `log=true`, data is returned to the authoring region for active learning.
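For example, a prediction request with logging disabled might look like the following sketch; the region, app ID, and key are placeholders, and the `production` slot is an assumption:

```Console
curl -X GET "https://<your-region>.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/<your-app-id>/slots/production/predict?query=book%20a%20flight&log=false" \
  -H "Ocp-Apim-Subscription-Key: <your-prediction-key>"
```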
## Publishing to Europe
Learn more about the [authoring and prediction endpoints](developer-reference-re
## Failover regions
-Each region has a secondary region to fail over to. Europe fails over inside Europe and Australia fails over inside Australia.
+Each region has a secondary region to fail over to. Failover will only happen within the same geography.
-Publishing regions that fall under [single data residency](#single-data-residency) do not have a failover region.
+Authoring regions have [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md).
+The following publishing regions do not have a failover region:
-Authoring regions have [paired fail-over regions](../../availability-zones/cross-region-replication-azure.md).
+* Brazil South
+* Southeast Asia
## Next steps
cognitive-services Network Isolation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/QnAMaker/How-To/network-isolation.md
Add-AzWebAppAccessRestrictionRule -ResourceGroupName "<resource group name>" -We
> [!div class="mx-imgBorder"] > [ ![Screenshot of access restriction rule with the addition of public IP address]( ../media/network-isolation/public-address.png) ]( ../media/network-isolation/public-address.png#lightbox)
+### Outbound access from App Service
+
+The QnA Maker App Service requires outbound access to the following endpoints. Make sure they're added to the allow list if there are any restrictions on outbound traffic (a quick connectivity check is sketched after this list).
+- https://qnamakerstore.blob.core.windows.net
+- https://qnamaker-data.trafficmanager.net
+- https://qnamakerconfigprovider.trafficmanager.net
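A quick, hypothetical way to verify outbound connectivity (for example, from the App Service debug console) is to probe each endpoint; any HTTP response, even an error code, means the endpoint is reachable, while a timeout suggests the traffic is blocked:

```Console
curl -I https://qnamakerstore.blob.core.windows.net
curl -I https://qnamaker-data.trafficmanager.net
curl -I https://qnamakerconfigprovider.trafficmanager.net
```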
+
### Configure App Service Environment to host QnA Maker App Service
The App Service Environment (ASE) can be used to host the QnA Maker App Service instance. Follow the steps below:
cognitive-services Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md
The following articles help you start using this feature:
* To get started with Custom Neural Voice and create a project, see [Get started with Custom Neural Voice](how-to-custom-voice.md). * To prepare and upload your audio data, see [Prepare training data](how-to-custom-voice-prepare-data.md).
-* To train and deploy your models, see [Create and use your voice model](how-to-custom-voice-create-voice.md).
+* To train and deploy your models, see [Train your voice model](how-to-custom-voice-create-voice.md) and [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md).
## Terms and definitions
cognitive-services How To Custom Voice Create Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-create-voice.md
Last updated 01/23/2022 +
-# Create and use your voice model
+# Train your voice model
In [Prepare training data](how-to-custom-voice-prepare-data.md), you learned about the different data types you can use to train a custom neural voice, and the different format requirements. After you've prepared your data and the voice talent verbal statement, you can start to upload them to [Speech Studio](https://aka.ms/custom-voice-portal). In this article, you learn how to train a custom neural voice through the Speech Studio portal.
To train a neural voice, you must create a voice talent profile with an audio fi
Upload this audio file to the Speech Studio as shown in the following screenshot. You create a voice talent profile, which is used to verify against your training data when you create a voice model. For more information, see [voice talent verification](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext).
- :::image type="content" source="media/custom-voice/upload-verbal-statement.png" alt-text="Screenshot that shows the upload voice talent statement.":::
> [!NOTE]
> Custom Neural Voice is available with limited access. Make sure you understand the [responsible AI requirements](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=%2fazure%2fcognitive-services%2fspeech-service%2fcontext%2fcontext), and then [apply for access](https://aka.ms/customneural).
For more information, [learn more about the capabilities and limits of this feat
> [!NOTE]
> Custom Neural Voice training is only available in three regions: East US, Southeast Asia, and UK South. But you can easily copy a neural voice model from those regions to other regions. For more information, see the [regions for Custom Neural Voice](regions.md#text-to-speech).
-## Create and use a Custom Neural Voice endpoint
-
-After you've successfully created and tested your voice model, you deploy it in a custom text-to-speech endpoint. Use this endpoint instead of the usual endpoint when you're making text-to-speech requests through the REST API. The subscription that you've used to deploy the model is the only one that can call your custom endpoint.
-
-To create a Custom Neural Voice endpoint:
-
-1. On the **Deploy model** tab, select **Deploy model**.
-1. Enter a **Name** and **Description** for your custom endpoint.
-1. Select a voice model that you want to associate with this endpoint.
-1. Select **Deploy** to create your endpoint.
-
-In the endpoint table, you now see an entry for your new endpoint. It might take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
-
-You can suspend and resume your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL is retained, so you don't need to change your code in your apps.
-
-You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update.
-
-> [!NOTE]
->- Standard subscription (S0) users can create up to 50 endpoints, each with its own custom neural voice.
->- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same subscription to pass through the authentication of the text-to-speech service.
-
-After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code.
-
-The custom endpoint is functionally identical to the standard endpoint that's used for text-to-speech requests. For more information, see the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md).
-
-[Audio Content Creation](https://speech.microsoft.com/audiocontentcreation) is a tool that allows you to fine-tune audio output by using a friendly UI.
- ## Next steps
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
- [How to record voice samples](record-custom-voice-samples.md)
+- [Text-to-Speech API reference](rest-text-to-speech.md)
- [Long Audio API](long-audio-api.md)+
cognitive-services How To Custom Voice Prepare Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice-prepare-data.md
All audio files should be grouped into a zip file. Once your dataset is successf
## Next steps
-- [Create and use your voice model](how-to-custom-voice-create-voice.md)
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
- [How to record voice samples](record-custom-voice-samples.md)
cognitive-services How To Custom Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-voice.md
If you're using the old version of Custom Voice (which is scheduled to be retire
## Next steps
-- [Prepare data for Custom Neural Voice](how-to-custom-voice-prepare-data.md)
-- [Train and deploy a custom neural voice](how-to-custom-voice-create-voice.md)
+- [Prepare data for custom neural voice](how-to-custom-voice-prepare-data.md)
+- [Train your voice model](how-to-custom-voice-create-voice.md)
+- [Deploy and use your voice model](how-to-deploy-and-use-endpoint.md)
- [How to record voice samples](record-custom-voice-samples.md)
cognitive-services How To Deploy And Use Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-deploy-and-use-endpoint.md
+
+ Title: How to deploy and use voice model - Speech service
+
+description: Learn about how to deploy and use a custom neural voice model.
+ Last updated : 02/09/2022
+# Deploy and use your voice model
+
+After you've successfully created and trained your voice model, you deploy it to a custom neural voice endpoint. Use the custom neural voice endpoint instead of the usual text-to-speech endpoint for requests with the REST API. Use Speech Studio to create a custom neural voice endpoint. Use the REST API to suspend or resume a custom neural voice endpoint.
+
+## Create a custom neural voice endpoint
+
+To create a custom neural voice endpoint:
+
+1. On the **Deploy model** tab, select **Deploy model**.
+1. Enter a **Name** and **Description** for your custom endpoint.
+1. Select a voice model that you want to associate with this endpoint.
+1. Select **Deploy** to create your endpoint.
+
+In the endpoint table, you now see an entry for your new endpoint. It might take a few minutes to instantiate a new endpoint. When the status of the deployment is **Succeeded**, the endpoint is ready for use.
+
+You can suspend and resume your endpoint if you don't use it all the time. When an endpoint is reactivated after suspension, the endpoint URL is retained, so you don't need to change your code in your apps.
+
+You can also update the endpoint to a new model. To change the model, make sure the new model is named the same as the one you want to update.
+
+> [!NOTE]
+>- Standard subscription (S0) users can create up to 50 endpoints, each with its own custom neural voice.
+>- To use your custom neural voice, you must specify the voice model name, use the custom URI directly in an HTTP request, and use the same subscription to pass through the authentication of the text-to-speech service.
+
+After your endpoint is deployed, the endpoint name appears as a link. Select the link to display information specific to your endpoint, such as the endpoint key, endpoint URL, and sample code.
+
+The custom endpoint is functionally identical to the standard endpoint that's used for text-to-speech requests. For more information, see the [Speech SDK](./get-started-text-to-speech.md) or [REST API](rest-text-to-speech.md).
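As a hedged sketch, a synthesis request against a custom endpoint could look like the following; the voice name, output format, and sample text are assumptions to adapt to your deployment:

```Console
curl -X POST "https://<YourServiceRegion>.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId=<YourEndpointId>" \
  -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey>" \
  -H "Content-Type: application/ssml+xml" \
  -H "X-Microsoft-OutputFormat: riff-24khz-16bit-mono-pcm" \
  -d "<speak version='1.0' xml:lang='en-US'><voice name='<YourVoiceName>'>Hello from my custom voice.</voice></speak>" \
  -o output.wav
```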
+
+[Audio Content Creation](https://speech.microsoft.com/audiocontentcreation) is a tool that allows you to fine-tune audio output by using a friendly UI.
+
+## Copy your voice model to another project
+
+You can copy your voice model to another project in the same region or in another region. For example, you can copy a neural voice model that was trained in one region to a project in another region.
+
+> [!NOTE]
+> Custom neural voice training is only available in these regions: East US, Southeast Asia, and UK South. But you can copy a neural voice model from those regions to other regions. For more information, see the [regions for custom neural voice](regions.md#text-to-speech).
+
+To copy your custom neural voice model to another project:
+
+1. On the **Train model** tab, select a voice model that you want to copy, and then select **Copy to project**.
+
+ :::image type="content" source="media/custom-voice/cnv-model-copy.png" alt-text="Copy to project":::
+
+1. Select the **Region**, **Speech resource**, and **Project** where you want to copy the model. You must have a speech resource and project in the target region, otherwise you need to create them first.
+
+ :::image type="content" source="media/custom-voice/cnv-model-copy-dialog.png" alt-text="Copy voice model":::
+
+1. Select **Submit** to copy the model.
+1. Select **View model** under the notification message for copy success.
+1. On the **Train model** tab, select the newly copied model and then select **Deploy model**.
+
+## Suspend and resume an endpoint
+
+You can suspend or resume an endpoint, to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can use the same endpoint URL in your application to synthesize speech.
+
+You can suspend and resume an endpoint in Speech Studio or via the REST API.
+
+> [!NOTE]
+> The suspend operation will complete almost immediately. The resume operation completes in about the same amount of time as a new deployment.
+
+### Suspend and resume an endpoint in Speech Studio
+
+This section describes how to suspend or resume a custom neural voice endpoint in the Speech Studio portal.
+
+#### Suspend endpoint
+
+1. To suspend and deactivate your endpoint, select **Suspend** from the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
+
+ :::image type="content" source="media/custom-voice/cnv-endpoint-suspend.png" alt-text="Screenshot of the select suspend endpoint option":::
+
+1. In the dialog box that appears, select **Submit**. After the endpoint is suspended, Speech Studio will show the **Successfully suspended endpoint** notification.
+
+#### Resume endpoint
+
+1. To resume and activate your endpoint, select **Resume** from the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
+
+ :::image type="content" source="media/custom-voice/cnv-endpoint-resume.png" alt-text="Screenshot of the select resume endpoint option":::
+
+1. In the dialog box that appears, select **Submit**. After you successfully reactivate the endpoint, the status will change from **Suspended** to **Succeeded**.
+
+### Suspend and resume endpoint via REST API
+
+This section will show you how to [get](#get-endpoint), [suspend](#suspend-endpoint), or [resume](#resume-endpoint) a custom neural voice endpoint via REST API.
+
+#### Application settings
+
+The application settings that you use as REST API [request parameters](#request-parameters) are available on the **Deploy model** tab in [Speech Studio](https://aka.ms/custom-voice-portal).
++
+* The **Endpoint key** shows the subscription key the endpoint is associated with. Use the endpoint key as the value of your `Ocp-Apim-Subscription-Key` request header.
+* The **Endpoint URL** shows your service region. Use the value that precedes `voice.speech.microsoft.com` as your service region request parameter. For example, use `eastus` if the endpoint URL is `https://eastus.voice.speech.microsoft.com/cognitiveservices/v1`.
+* The **Endpoint URL** shows your endpoint ID. Use the value appended to the `?deploymentId=` query parameter as the value of your endpoint ID request parameter.
+* The Azure region the endpoint is associated with.
+
+#### Get endpoint
+
+Get the endpoint by endpoint ID. The operation returns details about an endpoint such as model ID, project ID, and status.
+
+For example, you might want to track the status progression for [suspend](#suspend-endpoint) or [resume](#resume-endpoint) operations. Use the `status` property in the response payload to determine the status of the endpoint.
+
+The possible `status` property values are:
+
+| Status | Description |
+| - | |
+| `NotStarted` | The endpoint hasn't yet been deployed, and it's not available for speech synthesis. |
+| `Running` | The endpoint is in the process of being deployed or resumed, and it's not available for speech synthesis. |
+| `Succeeded` | The endpoint is active and available for speech synthesis. The endpoint has been deployed or the resume operation succeeded. |
+| `Failed` | The endpoint deploy or suspend operation failed. The endpoint can only be viewed or deleted in [Speech Studio](https://aka.ms/custom-voice-portal).|
+| `Disabling` | The endpoint is in the process of being suspended, and it's not available for speech synthesis. |
+| `Disabled` | The endpoint is inactive, and it's not available for speech synthesis. The suspend operation succeeded or the resume operation failed. |
+
+> [!Tip]
+> If the status is `Failed` or `Disabled`, check `properties.error` for a detailed error message. However, there won't be error details if the status is `Disabled` due to a successful suspend operation.
+
+##### Get endpoint example
+
+For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters).
+
+HTTP example:
+
+```HTTP
+GET api/texttospeech/v3.0/endpoints/<YourEndpointId> HTTP/1.1
+Ocp-Apim-Subscription-Key: <YourSubscriptionKey>
+Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com
+```
+
+cURL example:
+
+```Console
+curl -v -X GET "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey>"
+```
+
+Response header example:
+
+```
+Status code: 200 OK
+```
+
+Response body example:
+
+```json
+{
+ "model": {
+ "id": "a92aa4b5-30f5-40db-820c-d2d57353de44"
+ },
+ "project": {
+ "id": "ffc87aba-9f5f-4bfa-9923-b98186591a79"
+ },
+ "properties": {},
+ "status": "Succeeded",
+ "lastActionDateTime": "2019-01-07T11:36:07Z",
+ "id": "e7ffdf12-17c7-4421-9428-a7235931a653",
+ "createdDateTime": "2019-01-07T11:34:12Z",
+ "locale": "en-US",
+ "name": "Voice endpoint",
+ "description": "Example for voice endpoint"
+}
+```
+
+#### Suspend endpoint
+
+You can suspend an endpoint to limit spend and conserve resources that aren't in use. You won't be charged while the endpoint is suspended. When you resume an endpoint, you can use the same endpoint URL in your application to synthesize speech.
+
+You suspend an endpoint with its unique deployment ID. The endpoint status must be `Succeeded` before you can suspend it.
+
+Use the [get endpoint](#get-endpoint) operation to poll and track the status progression from `Succeeded`, to `Disabling`, and finally to `Disabled`.
+
+##### Suspend endpoint example
+
+For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters).
+
+HTTP example:
+
+```HTTP
+POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend HTTP/1.1
+Ocp-Apim-Subscription-Key: <YourSubscriptionKey>
+Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com
+Content-Type: application/json
+Content-Length: 0
+```
+
+cURL example:
+
+```Console
+curl -v -X POST "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/suspend" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey>" -H "content-type: application/json" -H "content-length: 0"
+```
+
+Response header example:
+
+```
+Status code: 202 Accepted
+```
+
+For more information, see [response headers](#response-headers).
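As a convenience, here's a minimal polling sketch built on the [get endpoint](#get-endpoint) operation, assuming `bash` and `jq` and the same placeholders as above; the same loop works for a resume operation if you wait for `Succeeded` instead:

```Console
# Poll until the suspend operation reaches a terminal status.
while true; do
  status=$(curl -s "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>" \
    -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey>" | jq -r '.status')
  echo "Endpoint status: $status"
  if [ "$status" = "Disabled" ] || [ "$status" = "Failed" ]; then break; fi
  sleep 30   # the Retry-After response header suggests a polling interval
done
```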
+
+#### Resume endpoint
+
+When you resume an endpoint, you can use the same endpoint URL that you used before it was suspended.
+
+You resume an endpoint with its unique deployment ID. The endpoint status must be `Disabled` before you can resume it.
+
+Use the [get endpoint](#get-endpoint) operation to poll and track the status progression from `Disabled`, to `Running`, and finally to `Succeeded`. If the resume operation failed, the endpoint status will be `Disabled`.
+
+##### Resume endpoint example
+
+For information about endpoint ID, region, and subscription key parameters, see [request parameters](#request-parameters).
+
+HTTP example:
+
+```HTTP
+POST api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume HTTP/1.1
+Ocp-Apim-Subscription-Key: <YourSubscriptionKey>
+Host: <YourServiceRegion>.customvoice.api.speech.microsoft.com
+Content-Type: application/json
+Content-Length: 0
+```
+
+cURL example:
+
+```Console
+curl -v -X POST "https://<YourServiceRegion>.customvoice.api.speech.microsoft.com/api/texttospeech/v3.0/endpoints/<YourEndpointId>/resume" -H "Ocp-Apim-Subscription-Key: <YourSubscriptionKey>" -H "content-type: application/json" -H "content-length: 0"
+```
+
+Response header example:
+```
+Status code: 202 Accepted
+```
+
+For more information, see [response headers](#response-headers).
+
+#### Parameters and response codes
+
+##### Request parameters
+
+You use these request parameters with calls to the REST API. See [application settings](#application-settings) for information about where to get your region, endpoint ID, and subscription key in Speech Studio.
+
+| Name | Location | Required | Type | Description |
+| | | -- | | |
+| `YourServiceRegion` | Path | `True` | string | The Azure region the endpoint is associated with. |
+| `YourEndpointId` | Path | `True` | string | The identifier of the endpoint. |
+| `Ocp-Apim-Subscription-Key` | Header | `True` | string | The subscription key the endpoint is associated with. |
+
+##### Response headers
+
+Status code: 202 Accepted
+
+| Name | Type | Description |
+| - | | -- |
+| `Location` | string | The location of the endpoint, which can be used as the full URL for the get endpoint request. |
+| `Retry-After` | string | The recommended interval, in seconds, before retrying the get endpoint status request. |
+
+##### HTTP status codes
+
+The HTTP status code for each response indicates success or common errors.
+
+| HTTP status code | Description | Possible reason |
+| - | -- | |
+| 200 | OK | The request was successful. |
+| 202 | Accepted | The request has been accepted and is being processed. |
+| 400 | Bad Request | The value of a parameter is invalid, or a required parameter is missing, empty, or null. One common issue is a header that is too long. |
+| 401 | Unauthorized | The request isn't authorized. Check to make sure your subscription key or [token](rest-speech-to-text.md#authentication) is valid and in the correct region. |
+| 429 | Too Many Requests | You've exceeded the quota or rate of requests allowed for your subscription. |
+| 502 | Bad Gateway | Network or server-side issue. May also indicate invalid headers. |
+
+## Next steps
+
+- [How to record voice samples](record-custom-voice-samples.md)
+- [Text-to-Speech API reference](rest-text-to-speech.md)
+- [Long Audio API](long-audio-api.md)
cognitive-services Record Custom Voice Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/record-custom-voice-samples.md
Listen to each file carefully. At this stage, you can edit out small unwanted so
Convert each file to 16 bits and a sample rate of 24 kHz before saving, and if you recorded the studio chatter, remove the second channel. Save each file in WAV format, naming the files with the utterance number from your script.
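For example, a hypothetical batch conversion with `ffmpeg` (an assumption; any editor that outputs 16-bit, 24 kHz WAV works) could look like this:

```Console
# Convert raw recordings to 16-bit PCM WAV at 24 kHz, keeping only the first
# (voice) channel; the folder names are illustrative.
for f in raw/*.wav; do
  ffmpeg -i "$f" -ar 24000 -c:a pcm_s16le -af "pan=mono|c0=c0" "converted/$(basename "$f")"
done
```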
-Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Create and use your voice model](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
+Finally, create the *transcript* that associates each WAV file with a text version of the corresponding utterance. [Train your voice model](./how-to-custom-voice-create-voice.md) includes details of the required format. You can copy the text directly from your script. Then create a Zip file of the WAV files and the text transcript.
Archive the original recordings in a safe place in case you need them later. Preserve your script and notes, too.
Archive the original recordings in a safe place in case you need them later. Pre
You're ready to upload your recordings and create your custom neural voice. > [!div class="nextstepaction"]
-> [Create and use your voice model](./how-to-custom-voice-create-voice.md)
+> [Train your voice model](./how-to-custom-voice-create-voice.md)
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
The following table has descriptions of each supported style.
|`style="disgruntled"`|Expresses a disdainful and complaining tone. Speech of this emotion displays displeasure and contempt.| |`style="embarrassed"`|Expresses an uncertain and hesitant tone when the speaker is feeling uncomfortable.| |`style="empathetic"`|Expresses a sense of caring and understanding.|
+|`style="envious"`|Express a tone of admiration when you desire something that someone else has.|
|`style="fearful"`|Expresses a scared and nervous tone, with higher pitch, higher vocal energy, and faster rate. The speaker is in a state of tension and unease.| |`style="gentle"`|Expresses a mild, polite, and pleasant tone, with lower pitch and vocal energy.| |`style="lyrical"`|Expresses emotions in a melodic and sentimental way.| |`style="narration-professional"`|Expresses a professional, objective tone for content reading.|
+|`style="narration-relaxed"`|Express a soothing and melodious tone for content reading.|
|`style="newscast"`|Expresses a formal and professional tone for narrating news.| |`style="newscast-casual"`|Expresses a versatile and casual tone for general news delivery.| |`style="newscast-formal"`|Expresses a formal, confident, and authoritative tone for news delivery.|
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/fail-over.md
Use the following JSON in your request. Use the name of the model you want to dep
```json
{
-    "trainedModelLabel": "{MODEL-NAME}"
+    "trainedModelLabel": "{MODEL-NAME}",
+    "deploymentName": "{DEPLOYMENT-NAME}"
}
```
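A hypothetical sketch of submitting this body follows; `{REQUEST-URL}` stands for the deployment request URL constructed in the earlier steps of the article (truncated here), and `{API-KEY}` for your resource key:

```Console
curl -X PUT "{REQUEST-URL}" \
  -H "Ocp-Apim-Subscription-Key: {API-KEY}" \
  -H "Content-Type: application/json" \
  -d '{"trainedModelLabel": "{MODEL-NAME}", "deploymentName": "{DEPLOYMENT-NAME}"}'
```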
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/fail-over.md
Use the following JSON in your request. Use the name of the model you want to dep
```json
{
-    "trainedModelLabel": "{MODEL-NAME}"
+    "trainedModelLabel": "{MODEL-NAME}",
+    "deploymentName": "{DEPLOYMENT-NAME}"
}
```
cognitive-services Fail Over https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/fail-over.md
Use the following JSON in your request. Use the name of the model you want to dep
```json
{
-    "trainedModelLabel": "{MODEL-NAME}"
+    "trainedModelLabel": "{MODEL-NAME}",
+    "deploymentName": "{DEPLOYMENT-NAME}"
}
```
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
Previously updated : 11/19/2021 Last updated : 02/16/2022
Text Analytics for health is one of the features offered by [Azure Cognitive Service for Language](../overview.md), a collection of machine learning and AI algorithms in the cloud for developing intelligent applications that involve written language.
-Text Analytics for health extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
This documentation contains the following types of articles:
* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service.
* [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific or customized ways.
* The [**conceptual articles**](concepts/health-entity-categories.md) provide in-depth explanations of the service's functionality and features.
-> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
+## Text Analytics for health features
-## Features
+Text Analytics for health extracts and labels relevant medical information from unstructured texts such as doctor's notes, discharge summaries, clinical documents, and electronic health records.
[!INCLUDE [Text Analytics for health](includes/features.md)]
+> [!VIDEO https://docs.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player]
+
+## Get started with Text analytics for health
+
+To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are three ways to use Text Analytics for health:
-## Deploy on premises using Docker containers
-Use the available Docker container to [deploy this feature on-premises](how-to/use-containers.md). These docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons.
+|Development option |Description | Links |
+||||
+| Language Studio | A web-based platform that enables you to try Text Analytics for health without needing to write code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/healthAnalysis) <br> • [Quickstart: Use the Language studio](../language-studio.md) |
+| REST API or Client library (Azure SDK) | Integrate Text Analytics for health into your applications using the REST API, or the client library available in a variety of languages. | • [Quickstart: Use Text Analytics for health](quickstart.md) |
+| Docker container | Use the available Docker container to deploy this feature on-premises, letting you bring the service closer to your data for compliance, security, or other operational reasons. | • [How to deploy on-premises](how-to/use-containers.md) |
+
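For the REST API option above, a hypothetical request sketch follows; the resource name and key are placeholders, and the API version may differ from the one used in the quickstart:

```Console
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/text/analytics/v3.1/entities/health/jobs" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"documents":[{"id":"1","language":"en","text":"Patient was prescribed 100mg of ibuprofen."}]}'
```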
+## Input requirements and service limits
+
+* Text Analytics for health takes raw unstructured text for analysis. See the [data and service limits](how-to/call-api.md#data-limits) in the how-to guide for more information.
+* Text Analytics for health works with a variety of written languages. See [language support](language-support.md) for more information.
+
+## Reference documentation and code samples
+
+As you use Text Analytics for health in your applications, see the following reference documentation and samples for Azure Cognitive Services for Language:
+
+|Development option / language |Reference documentation |Samples |
+||||
+|REST API | [REST API documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/TextAnalytics-v3-2-Preview-2/operations/Analyze) | |
+|C# | [C# documentation](/dotnet/api/azure.ai.textanalytics?view=azure-dotnet-preview&preserve-view=true) | [C# samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/textanalytics/Azure.AI.TextAnalytics/samples) |
+| Java | [Java documentation](/java/api/overview/azure/ai-textanalytics-readme?view=azure-java-preview&preserve-view=true) | [Java Samples](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/textanalytics/azure-ai-textanalytics/src/samples) |
+|JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) |
+|Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) |
## Responsible AI
An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
[!INCLUDE [Responsible AI links](../includes/overview-responsible-ai-links.md)]
-## Next steps
-
-There are two ways to get started using the entity linking feature:
-* [Language Studio](../language-studio.md), which is a web-based platform that enables you to try several Azure Cognitive Service for Language features without needing to write code.
-* The [quickstart article](quickstart.md) for instructions on making requests to the service using the REST API and client library SDK.
container-apps Azure Resource Manager Api Spec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/azure-resource-manager-api-spec.md
Entries in the `resources` array of the ARM template have the following properti
| `tags` | Collection of Azure tags associated with the container app. | array |
| `type` | Always `Microsoft.Web/containerApps`. The ARM endpoint determines which API to forward to. | string |
+> [!NOTE]
+> Azure Container Apps resources are in the process of migrating from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
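A minimal sketch of such a resource entry follows; every value is an illustrative placeholder, and the `apiVersion` token is an assumption rather than a pinned version:

```json
{
  "name": "<CONTAINER_APP_NAME>",
  "type": "Microsoft.Web/containerApps",
  "apiVersion": "<API_VERSION>",
  "location": "<LOCATION>",
  "tags": {
    "<TAG_NAME>": "<TAG_VALUE>"
  },
  "properties": {}
}
```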
+ In this example, you put your values in place of the placeholder tokens surrounded by `<>` brackets.

## properties
container-apps Microservices Dapr Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices-dapr-azure-resource-manager.md
az extension add `
Now that the extension is installed, register the `Microsoft.Web` namespace.
+> [!NOTE]
+> Azure Container Apps resources are in the process of migrating from the `Microsoft.Web` namespace to the `Microsoft.App` namespace. Refer to [Namespace migration from Microsoft.Web to Microsoft.App in March 2022](https://github.com/microsoft/azure-container-apps/issues/109) for more details.
+
# [Bash](#tab/bash)

```azurecli
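# A hypothetical completion of this truncated snippet: the standard Azure CLI
# command to register a resource provider namespace.
az provider register --namespace Microsoft.Web
```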
container-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/quotas.md
The following quotas exist per subscription for Azure Container Apps Preview.
| Feature | Quantity |
|||
-| Environments | 2 |
+| Environments per region | 2 |
| Container apps per environment | 20 |
| Replicas per container app | 25 |
| Cores per replica | 2 |
cosmos-db How To Provision Throughput Cassandra https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/cassandra/how-to-provision-throughput-cassandra.md
Last updated 10/15/2020
+ms.devlang: csharp
cosmos-db Continuous Backup Restore Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-permissions.md
description: Learn how to isolate and restrict the restore permissions for conti
Previously updated : 07/29/2021 Last updated : 02/16/2022 -+ # Manage permissions to restore an Azure Cosmos DB account [!INCLUDE[appliesto-sql-mongodb-api](includes/appliesto-sql-mongodb-api.md)]
-Azure Cosmos DB allows you to isolate and restrict the restore permissions for continuous backup account to a specific role or a principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. These permissions can be applied at the subscription scope or more granularly at the source account scope as shown in the following image:
+Azure Cosmos DB allows you to isolate and restrict the restore permissions for a continuous backup account to a specific role or a principal. The owner of the account can trigger a restore and assign a role to other principals to perform the restore operation. These permissions can be applied at the subscription scope as shown in the following image:
Scope is a set of resources that access applies to. To learn more about scopes, see the [Azure RBAC](../role-based-access-control/scope-overview.md) documentation. In Azure Cosmos DB, applicable scopes are the source subscription and database account for most of the use cases. The principal performing the restore actions should have write permissions to the destination resource group.
To perform a restore, a user or a principal needs the permission to restore (that
||| |Subscription | /subscriptions/00000000-0000-0000-0000-000000000000 | |Resource group | /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/Example-cosmosdb-rg |
-|CosmosDB restorable account resource | /subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.DocumentDB/locations/West US/restorableDatabaseAccounts/23e99a35-cd36-4df4-9614-f767a03b9995|
-The restorable account resource can be extracted from the output of the `az cosmosdb restorable-database-account list --account-name <accountname>` command in CLI or `Get-AzCosmosDBRestorableDatabaseAccount -DatabaseAccountName <accountname>` cmdlet in PowerShell. The name attribute in the output represents the `instanceID` of the restorable account.
-
-## Permissions
+## Permissions on the source account
The following permissions are required to perform the different activities pertaining to restore for continuous backup mode accounts:
> [!NOTE]
-> Permission can be assigned to restorable database account at account scope or subscription scope. Assigning permissions at resource group scope is not supported.
-
-|Permission |Impact |Minimum scope |Maximum scope |
-|||||
-|`Microsoft.Resources/deployments/validate/action`, `Microsoft.Resources/deployments/write` | These permissions are required for the ARM template deployment to create the restored account. See the sample permission [RestorableAction](#custom-restorable-action) below for how to set this role. | Not applicable | Not applicable |
-|`Microsoft.DocumentDB/databaseAccounts/write` | This permission is required to restore an account into a resource group | Resource group under which the restored account is created. | Subscription under which the restored account is created |
-|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action` </br> You can't choose resource group as the permission scope. |This permission is required on the source restorable database account scope to allow restore actions to be performed on it. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/\<guid-instanceid\>* | The subscription containing the restorable database account. |
-|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read` </br> You can't choose resource group as the permission scope. |This permission is required on the source restorable database account scope to list the database accounts that can be restored. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/\<guid-instanceid\>*| The subscription containing the restorable database account. |
-|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` </br> You can't choose resource group as the permission scope. | This permission is required on the source restorable account scope to allow reading of restorable resources such as list of databases and containers for a restorable account. | The *RestorableDatabaseAccount* resource belonging to the source account being restored. This value is also given by the `ID` property of the restorable database account resource. An example of restorable account is */subscriptions/subscriptionId/providers/Microsoft.DocumentDB/locations/regionName/restorableDatabaseAccounts/\<guid-instanceid\>*| The subscription containing the restorable database account. |
+> Assigning permissions at resource group scope is not supported.
+
+|Permission |Impact |
+|||
+|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/restore/action` </br> You can't choose resource group as the permission scope. |This permission is required on the source restorable database account scope to allow restore actions to be performed on it. |
+|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/read` </br> You can't choose resource group as the permission scope. |This permission is required on the source restorable database account scope to list the database accounts that can be restored. |
+|`Microsoft.DocumentDB/locations/restorableDatabaseAccounts/*/read` </br> You can't choose resource group as the permission scope. | This permission is required on the source restorable account scope to allow reading of restorable resources such as list of databases and containers for a restorable account. |
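For example, a minimal sketch that grants a principal the built-in `CosmosRestoreOperator` role at subscription scope (the IDs are placeholders; fuller scenarios follow in the next sections):

```azurecli
az role assignment create \
  --role "CosmosRestoreOperator" \
  --assignee "<principal-object-id>" \
  --scope "/subscriptions/<subscription-id>"
```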
+## Permissions on the destination account
+
+The following permissions are required to perform the different activities pertaining to restore for continuous backup mode accounts:
+
+
+|Permission |Impact |
+|||
+|`Microsoft.Resources/deployments/validate/action`, `Microsoft.Resources/deployments/write` | These permissions are required for the ARM template deployment to create the restored account. See the sample permission [RestorableAction](#custom-restorable-action) below for how to set this role. |
+|`Microsoft.DocumentDB/databaseAccounts/write` | This permission is required to restore an account into a resource group |
+
## Azure CLI role assignment scenarios to restore at different scopes
cosmos-db How To Provision Throughput Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/how-to-provision-throughput-gremlin.md
Last updated 10/15/2020
+ms.devlang: csharp
cosmos-db Tutorial Query Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/tutorial-query-graph.md
Previously updated : 11/08/2021 Last updated : 02/16/2022 ms.devlang: csharp
This article covers the following tasks:
## Prerequisites
-For these queries to work, you must have an Azure Cosmos DB account and have graph data in the container. Don't have any of those? Complete the [5-minute quickstart](create-graph-dotnet.md) or the [developer tutorial](tutorial-query-graph.md) to create an account and populate your database. You can run the following queries using the [Gremlin console](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console), or your favorite Gremlin driver.
+For these queries to work, you must have an Azure Cosmos DB account and have graph data in the container. Don't have any of those? Complete the [5-minute quickstart](create-graph-dotnet.md) to create an account and populate your database. You can run the following queries using the [Gremlin console](https://tinkerpop.apache.org/docs/current/reference/#gremlin-console), or your favorite Gremlin driver.
## Count vertices in the graph
cosmos-db High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/high-availability.md
Title: High availability in Azure Cosmos DB description: This article describes how to build a highly available solution using Cosmos DB-+ Previously updated : 11/11/2021- Last updated : 02/17/2022+
# Achieve high availability with Cosmos DB
[!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
-To build a highly-available solution, you have to evaluate the reliability characteristics of all its components. Cosmos DB is designed to provide multiple features and configuration options to achieve high availability for all solutions' availability needs.
+To build a solution with high availability, you have to evaluate the reliability characteristics of all its components. Cosmos DB is designed to provide multiple features and configuration options to achieve high availability for all solutions' availability needs.
-We will use the terms **RTO** (Recovery Time Objective), to indicate the time between the beginning of an outage impacting Cosmos DB and the recovery to full availability, and **RPO** (Recovery Point Objective), to indicate the time between the last write correctly restored and the time of the beginning of the outage affecting Cosmos DB.
+We'll use the terms **RTO** (Recovery Time Objective), for the time between the beginning of an outage impacting Cosmos DB and the recovery to full availability, and **RPO** (Recovery Point Objective), for the time between the last write correctly restored and the beginning of the outage affecting Cosmos DB.
> [!NOTE] > Expected and maximum RPOs and RTOs depend on the kind of outage that Cosmos DB is experiencing. For instance, an outage of a single node will have different expected RTO and RPO than a whole region outage.
We will use the terms **RTO** (Recovery Time Objective), to indicate the time be
This article details the events that can affect Cosmos DB availability and the corresponding Cosmos DB configuration options to achieve the availability characteristics required by your solution.
## Replica maintenance
-Cosmos DB is a fully-managed multi-tenant service that manages all details of individual compute nodes transparently. Users do not have to worry about any kind of patching and planned maintenance. Using redundancy and with no user involvement, Cosmos DB guarantees SLAs for availability and P99 latency through all automatic maintenance operations performed by the system.
+Cosmos DB is a managed multi-tenant service that transparently handles all details of individual compute nodes. Users don't have to worry about any kind of patching and planned maintenance. Cosmos DB guarantees SLAs for availability and P99 latency through all automatic maintenance operations performed by the system.
Refer to the [SLAs section](#slas) for the guaranteed availability SLAs.
## Replica outages
Replica outages refer to outages of individual nodes in a Cosmos DB cluster deployed in an Azure region.
-Cosmos DB automatically mitigates replica outages by guaranteeing at least two replicas of your data at all times in each Azure region where your account is deployed.
-This results in RTO = 0 and and RPO = 0, for individual node outages, with no application changes or configurations required.
+Cosmos DB automatically mitigates replica outages by guaranteeing at least three replicas of your data in each Azure region for your account within a four-replica quorum.
+This results in RTO = 0 and RPO = 0, for individual node outages, with no application changes or configurations required.
-In many Azure regions, it is possible to distribute your Cosmos DB cluster across **availability zones**, which results increased SLAs, as availability zones are physically separate and provide distinct power source, network, and cooling. See [Availability Zones](/azure/architecture/reliability/architect).
-When using this option, Cosmos DB provides RTO = 0 and and RPO = 0 even in case of outages of a whole availability zone.
+In many Azure regions, it's possible to distribute your Cosmos DB cluster across **availability zones**, which results in increased SLAs, as availability zones are physically separate and provide distinct power source, network, and cooling. See [Availability Zones](/azure/architecture/reliability/architect).
+When a Cosmos DB account is deployed using availability zones, Cosmos DB provides RTO = 0 and RPO = 0 even in a zone outage.
-When deploying in a single Azure region, with no extra user input, Cosmos DB is resilient to node outages. Enabling redundancy across availability zones makes Cosmos DB resilient to entire availability zone outages at the cost of increased charges. Both SLAs and price are reported in the [SLAs section](#slas).
+When users deploy in a single Azure region, with no extra user input, Cosmos DB is resilient to node outages. Enabling redundancy across availability zones makes Cosmos DB resilient to zone outages at the cost of increased charges. Both SLAs and price are reported in the [SLAs section](#slas).
-Zone redundancy can only be configured when adding a new region to an Azure Cosmos account. For existing regions, zone redundancy can be enabled by removing the region then adding it back with the zone redundancy enabled. For a single region account, this requires adding one additional region to temporarily failover to, then removing and adding the desired region with zone redundancy enabled.
+Zone redundancy can only be configured when adding a new region to an Azure Cosmos account. For existing regions, zone redundancy can be enabled by removing the region then adding it back with the zone redundancy enabled. For a single region account, this requires adding a region to temporarily fail over to, then removing and adding the desired region with zone redundancy enabled.
-By default, a Cosmos DB account does not use multiple availability zones. You can enable deployment across multiple availability zones in the following ways:
+By default, a Cosmos DB account doesn't use multiple availability zones. You can enable deployment across multiple availability zones in the following ways:
* [Azure portal](how-to-manage-database-account.md#addremove-regions-from-your-database-account)
Region outages refer to outages that affect all Cosmos DB nodes in an Azure regi
In the rare cases of region outages, Cosmos DB can be configured to support various outcomes of durability and availability.
### Durability
-In case of Cosmos DB accounts that use a single region, most of the times no data loss occurs and data access is restored after Cosmos DB services recovers in the affected region. Data loss may occur only in case of unrecoverable disasters in the Cosmos DB region.
+When a Cosmos DB account is deployed in a single region, generally no data loss occurs, and data access is restored after the Cosmos DB service recovers in the affected region. Data loss may occur only with an unrecoverable disaster in the Cosmos DB region.
To protect against complete data loss that may result from catastrophic disasters in a region, Azure Cosmos DB provides two different backup modes:
- [Continuous backups](./continuous-backup-restore-introduction.md) ensure the backup is taken in each region every 100 seconds and provide the ability to restore your data to any desired point in time with second granularity. In each region, the backup is dependent on the data committed in that region.
- [Periodic backups](./configure-periodic-backup-restore.md) take full backups of all partitions from all containers under your account, with no synchronization across partitions. The minimum backup interval is 1 hour.
-In case of Cosmos DB accounts in multiple regions, data durability depends on the consistency level configured on the account. The following table details, for all consistency levels, the RPO of Cosmos DB account deployed in at least 2 regions.
+When a Cosmos DB account is deployed in multiple regions, data durability depends on the consistency level configured on the account. The following table details, for all consistency levels, the RPO of a Cosmos DB account deployed in at least two regions.
|**Consistency level**|**RPO in case of region outage**|
|||
For multi-region accounts, the minimum value of *K* and *T* is 100,000 write ope
Refer to [Consistency levels](./consistency-levels.md) for more information on the differences between consistency levels.
### Availability
-If your solution requires continuous availability in case of region outages, Cosmos DB can be configured to replicate your data across multiple regions and to transparently failover to available regions when required.
+If your solution requires continuous availability during region outages, Cosmos DB can be configured to replicate your data across multiple regions and to transparently fail over to available regions when required.
Single-region accounts may lose availability following a regional outage. To ensure high availability at all times it's recommended to set up your Azure Cosmos DB account with **a single write region and at least a second (read) region** and enable **Service-Managed failover**.
-Service-managed failover allows Cosmos DB to failover the write region of multi-region account, in order to preserve availability at the cost of data loss as per [durability section](#durability). Regional failovers are detected and handled in the Azure Cosmos DB client. They don't require any changes from the application.
+Service-managed failover allows Cosmos DB to fail over the write region of multi-region account, in order to preserve availability at the cost of data loss as per [durability section](#durability). Regional failovers are detected and handled in the Azure Cosmos DB client. They don't require any changes from the application.
Refer to [How to manage an Azure Cosmos DB account](./how-to-manage-database-account.md) for the instructions on how to enable multiple read regions and service-managed failover.
> [!IMPORTANT]
> It is strongly recommended that you configure the Azure Cosmos accounts used for production workloads to **enable automatic failover**. This enables Cosmos DB to fail over the account databases to available regions automatically. In the absence of this configuration, the account will experience loss of write availability for the whole duration of the write region outage, as manual failover will not succeed due to lack of region connectivity.
### Multiple write regions
-Azure Cosmos DB can be configured to accept writes in multiple regions. This is useful to reduce write latency in geographically distributed applications. When using multiple write regions, strong consistency is not supported and write conflicts may arise. Refer to [Conflict types and resolution policies when using multiple write regions](./conflict-resolution-policies.md) for more information on how resolve conflicts in multiple write region configurations.
+Azure Cosmos DB can be configured to accept writes in multiple regions. This is useful to reduce write latency in geographically distributed applications. When a Cosmos DB account is configured for multiple write regions, strong consistency isn't supported and write conflicts may arise. Refer to [Conflict types and resolution policies when using multiple write regions](./conflict-resolution-policies.md) for more information on how to resolve conflicts in multiple write region configurations.
-Given the internal Azure Cosmos DB architecture, using multiple write regions does not guarantee write availability during a region outage. The best configuration to achieve high availability in case of region outage is single write region with service-managed failover.
+Given the internal Azure Cosmos DB architecture, using multiple write regions doesn't guarantee write availability during a region outage. The best configuration to achieve high availability during a region outage is single write region with service-managed failover.
#### Conflict-resolution region
-When a Cosmos DB account is configured with multi-region writes, one of the region acts as an arbiter in case of write conflicts. When such conflicts happen, they're routed to this region for consistent resolution.
+When a Cosmos DB account is configured with multi-region writes, one of the regions will act as an arbiter in case of write conflicts. When such conflicts happen, they're routed to this region for consistent resolution.
### What to expect during a region outage

Clients of single-region accounts will experience loss of read and write availability until service is restored.
Multi-region accounts will experience different behaviors depending on the follo
| Configuration | Outage | Availability impact | Durability impact| What to do |
| -- | -- | -- | -- | -- |
| Single write region | Read region outage | All clients will automatically redirect reads to other regions. No read or write availability loss for all configurations, except two-region configurations with strong consistency, which lose write availability until the service is restored or, if **service-managed failover** is enabled, the region is marked as failed and a failover occurs. | No data loss. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
-| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-manages failover**, clients will experience write availability loss, until write availability is restored automatically when the outage ends. <p/> **With service-managed failover** clients will experience write availability loss until the services manages a failover to a new write region selected according to your preferences. | If strong consistency level is not selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Single write region | Write region outage | Clients will redirect reads to other regions. <p/> **Without service-managed failover**, clients will experience write availability loss until write availability is restored automatically when the outage ends. <p/> **With service-managed failover**, clients will experience write availability loss until the service manages a failover to a new write region selected according to your preferences. | If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
| Multiple write regions | Any regional outage | Possibility of temporary write availability loss, analogously to single write region with service-managed failover. The failover of the [conflict-resolution region](#conflict-resolution-region) may also cause a loss of write availability if a high number of conflicting writes happen at the time of the outage. | Recently updated data in the failed region may be unavailable in the remaining active regions, depending on the selected [consistency level](consistency-levels.md). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for SQL API accounts, and Last Write Wins for accounts using other APIs. |

### Additional information on read region outages
Multi-region accounts will experience different behaviors depending on the follo
* Subsequent reads are redirected to the recovered region without requiring any changes to your application code. During both failover and rejoining of a previously failed region, read consistency guarantees continue to be honored by Azure Cosmos DB.
-* Even in a rare and unfortunate event when the Azure region is permanently irrecoverable, there is no data loss if your multi-region Azure Cosmos account is configured with *Strong* consistency. In the rare event of a permanently irrecoverable write region, a multi-region Azure Cosmos account has the durability characteristics specified in the [Durability](#durability) section.
+* Even in the rare and unfortunate event that an Azure region is permanently irrecoverable, there's no data loss if your multi-region Azure Cosmos account is configured with *Strong* consistency. In the rare event of a permanently irrecoverable write region, a multi-region Azure Cosmos account has the durability characteristics specified in the [Durability](#durability) section.
### Additional information on write region outages

* During a write region outage, the Azure Cosmos account will automatically promote a secondary region to be the new primary write region when **automatic (service-managed) failover** is configured on the Azure Cosmos account. The failover will occur to another region in the order of region priority you've specified.
-* Note that manual failover should not be triggered and will not succeed in presence of an outage of the source or destination region. This is because of a consistency check required by the failover procedure which requires connectivity between the regions.
+* Note that manual failover shouldn't be triggered and won't succeed in the presence of an outage of the source or destination region. This is because the failover procedure includes a consistency check that requires connectivity between the regions.
-* When the previously impacted region is back online, any write data that was not replicated when the region failed, is made available through the [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). Applications can read the conflicts feed, resolve the conflicts based on the application-specific logic, and write the updated data back to the Azure Cosmos container as appropriate.
+* When the previously impacted region is back online, any write data that wasn't replicated when the region failed is made available through the [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). Applications can read the conflicts feed, resolve the conflicts based on application-specific logic, and write the updated data back to the Azure Cosmos container as appropriate.
* Once the previously impacted write region recovers, it automatically becomes available as a read region. You can switch back to the recovered region as the write region by using [PowerShell, Azure CLI or Azure portal](how-to-manage-database-account.md#manual-failover). There is **no data or availability loss** before, during, or after you switch the write region, and your application continues to be highly available.
The following table summarizes the high availability capability of various accou
* Review the expected [behavior of the Azure Cosmos SDKs](troubleshoot-sdk-availability.md) during these events, and which configurations affect it.
-* To ensure high write and read availability, configure your Azure Cosmos account to span at least two regions and three, if using strong consistency. Remember that the best configuration to achieve high availability in case of region outage is single write region with service-managed failover. To learn more, see how to [configure your Azure Cosmos account with multiple write-regions](tutorial-global-distribution-sql-api.md).
+* To ensure high write and read availability, configure your Azure Cosmos account to span at least two regions (or three, if using strong consistency). Remember that the best configuration to achieve high availability for a region outage is a single write region with service-managed failover. To learn more, see how to [configure your Azure Cosmos account with multiple write-regions](tutorial-global-distribution-sql-api.md).
-* For multi-region Azure Cosmos accounts that are configured with a single-write region, [enable service-managed failover by using Azure CLI or Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable automatic failover, whenever there is a regional disaster, Cosmos DB will failover your account without any user inputs.
+* For multi-region Azure Cosmos accounts that are configured with a single-write region, [enable service-managed failover by using Azure CLI or Azure portal](how-to-manage-database-account.md#automatic-failover). After you enable automatic failover, whenever there's a regional disaster, Cosmos DB will fail over your account without any user input.
* Even if your Azure Cosmos account is highly available, your application may not be correctly designed to remain highly available. To test the end-to-end high availability of your application, as a part of your application testing or disaster recovery (DR) drills, temporarily disable automatic failover for the account, invoke the [manual failover by using PowerShell, Azure CLI or Azure portal](how-to-manage-database-account.md#manual-failover), then monitor your application's failover. Once complete, you can fail back over to the primary region and restore automatic failover for the account.

> [!IMPORTANT]
> Do not invoke manual failover during a Cosmos DB outage on either the source or destination regions, as it requires region connectivity to maintain data consistency and it will not succeed.
-* Within a globally distributed database environment, there is a direct relationship between the consistency level and data durability in the presence of a region-wide outage. As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after a disruptive event. The time required for an application to fully recover is known as recovery time objective (RTO). You also need to understand the maximum period of recent data updates the application can tolerate losing when recovering after a disruptive event. The time period of updates that you might afford to lose is known as recovery point objective (RPO). To see the RPO and RTO for Azure Cosmos DB, see [Consistency levels and data durability](./consistency-levels.md#rto)
+* Within a globally distributed database environment, there's a direct relationship between the consistency level and data durability in the presence of a region-wide outage. As you develop your business continuity plan, you need to understand the maximum acceptable time before the application fully recovers after a disruptive event. The time required for an application to fully recover is known as recovery time objective (RTO). You also need to understand the maximum period of recent data updates the application can tolerate losing when recovering after a disruptive event. The time period of updates that you might afford to lose is known as recovery point objective (RPO). To see the RPO and RTO for Azure Cosmos DB, see [Consistency levels and data durability](./consistency-levels.md#rto).
## What to expect during a Cosmos DB region outage
Multi-region accounts will experience different behaviors depending on the follo
| Write regions | Automatic failover | What to expect | What to do |
| -- | -- | -- | -- |
-| Single write region | Not enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level is not selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
-| Single write region | Enabled | In case of outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, read region outage can impact write availability if fewer than two read regions remaining.<p/> In case of an outage in the write region, clients will experience write availability loss until Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level is not selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
-| Multiple write regions | Not applicable | No read or write availability loss. <p/> Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15mins. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for SQL API accounts, and Last Write Wins for accounts using other APIs. |
+| Single write region | Not enabled | In case of an outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, a read region outage can impact write availability if fewer than two read regions remain.<p/> In case of an outage in the write region, clients will experience write availability loss. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. <p/> Cosmos DB will restore write availability automatically when the outage ends. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, re-adjust provisioned RUs as appropriate. |
+| Single write region | Enabled | In case of an outage in a read region when not using strong consistency, all clients will redirect to other regions. No read or write availability loss. No data loss. When using strong consistency, a read region outage can impact write availability if fewer than two read regions remain.<p/> In case of an outage in the write region, clients will experience write availability loss until Cosmos DB automatically elects a new region as the new write region according to your preferences. If strong consistency level isn't selected, some data may not have been replicated to the remaining active regions. This depends on the consistency level selected as described in [this section](consistency-levels.md#rto). If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support read traffic. <p/> Do *not* trigger a manual failover during the outage, as it will not succeed. <p/> When the outage is over, you may move the write region back to the original region, and re-adjust provisioned RUs as appropriate. Accounts using SQL APIs may also recover the non-replicated data in the failed region from your [conflicts feed](how-to-manage-conflicts.md#read-from-conflict-feed). |
+| Multiple write regions | Not applicable | Recently updated data in the failed region may be unavailable in the remaining active regions. Eventual, consistent prefix, and session consistency levels guarantee a staleness of <15mins. Bounded staleness guarantees less than K updates or T seconds, depending on the configuration. If the affected region suffers permanent data loss, unreplicated data may be lost. | During the outage, ensure that there are enough provisioned RUs in the remaining regions to support additional traffic. <p/> When the outage is over, you may re-adjust provisioned RUs as appropriate. If possible, Cosmos DB will automatically recover non-replicated data in the failed region using the configured conflict resolution method for SQL API accounts, and Last Write Wins for accounts using other APIs. |
## Next steps
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
description: Learn how to configure role-based access control with Azure Active
Previously updated : 07/21/2021 Last updated : 02/16/2022 + # Configure role-based access control with Azure Active Directory for your Azure Cosmos DB account
The Azure Cosmos DB data plane RBAC is built on concepts that are commonly found
- An Azure Cosmos DB database,
- An Azure Cosmos DB container.
- :::image type="content" source="./media/how-to-setup-rbac/concepts.png" alt-text="RBAC concepts":::
+ :::image type="content" source="./media/how-to-setup-rbac/concepts.svg" alt-text="RBAC concepts":::
## <a id="permission-model"></a> Permission model
The table below lists all the actions exposed by the permission model.
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/create` | Create a new item. |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read` | Read an individual item by its ID and partition key (point-read). |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/replace` | Replace an existing item. |
-| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/upsert` | "Upsert" an item, which means create it if it doesn't exist, or replace it if it exists. |
+| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/upsert` | "Upsert" an item, which means to create or insert an item if it doesn't already exist, or to update or replace an item if it exists. |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/delete` | Delete an item. |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/executeQuery` | Execute a [SQL query](sql-query-getting-started.md). |
| `Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/readChangeFeed` | Read from the container's [change feed](read-change-feed.md). |
The actual metadata requests allowed by the `Microsoft.DocumentDB/databaseAccoun
## Built-in role definitions
-Azure Cosmos DB exposes 2 built-in role definitions:
+Azure Cosmos DB exposes two built-in role definitions:
| ID | Name | Included actions |
| -- | -- | -- |
See [this page](/rest/api/cosmos-db-resource-provider/2021-04-01-preview/sql-res
## Initialize the SDK with Azure AD
-To use the Azure Cosmos DB RBAC in your application, you have to update the way you initialize the Azure Cosmos DB SDK. Instead of passing your account's primary key, you have to pass an instance of a `TokenCredential` class. This instance provides the Azure Cosmos DB SDK with the context required to fetch an AAD token on behalf of the identity you wish to use.
+To use the Azure Cosmos DB RBAC in your application, you have to update the way you initialize the Azure Cosmos DB SDK. Instead of passing your account's primary key, you have to pass an instance of a `TokenCredential` class. This instance provides the Azure Cosmos DB SDK with the context required to fetch an Azure AD (AAD) token on behalf of the identity you wish to use.
-The way you create a `TokenCredential` instance is beyond the scope of this article. There are many ways to create such an instance depending on the type of AAD identity you want to use (user principal, service principal, group etc.). Most importantly, your `TokenCredential` instance must resolve to the identity (principal ID) that you've assigned your roles to. You can find examples of creating a `TokenCredential` class:
+The way you create a `TokenCredential` instance is beyond the scope of this article. There are many ways to create such an instance depending on the type of Azure AD identity you want to use (user principal, service principal, group, etc.). Most importantly, your `TokenCredential` instance must resolve to the identity (principal ID) that you've assigned your roles to. You can find examples of creating a `TokenCredential` class:
- [In .NET](/dotnet/api/overview/azure/identity-readme#credential-classes)
- [In Java](/java/api/overview/azure/identity-readme#credential-classes)
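For illustration, here's a minimal C# sketch of initializing the .NET SDK with a `TokenCredential` instead of a primary key. The endpoint value is a placeholder, and `DefaultAzureCredential` is just one of the credential classes linked above:

```csharp
using Azure.Core;
using Azure.Identity;
using Microsoft.Azure.Cosmos;

// DefaultAzureCredential resolves to an ambient Azure AD identity
// (environment variables, managed identity, developer tool sign-in, and so on).
TokenCredential credential = new DefaultAzureCredential();

// Pass the credential in place of the account's primary key.
// The endpoint below is a placeholder for your own account.
CosmosClient client = new CosmosClient(
    "https://<my-account>.documents.azure.com:443/",
    credential);
```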
When you access the [Azure Cosmos DB Explorer](https://cosmos.azure.com/?feature
## Audit data requests
-When using the Azure Cosmos DB RBAC, [diagnostic logs](cosmosdb-monitor-resource-logs.md) get augmented with identity and authorization information for each data operation. This lets you perform detailed auditing and retrieve the AAD identity used for every data request sent to your Azure Cosmos DB account.
+When using the Azure Cosmos DB RBAC, [diagnostic logs](cosmosdb-monitor-resource-logs.md) get augmented with identity and authorization information for each data operation. This lets you perform detailed auditing and retrieve the Azure AD identity used for every data request sent to your Azure Cosmos DB account.
This additional information flows in the **DataPlaneRequests** log category and consists of two extra columns:

-- `aadPrincipalId_g` shows the principal ID of the AAD identity that was used to authenticate the request.
+- `aadPrincipalId_g` shows the principal ID of the Azure AD identity that was used to authenticate the request.
- `aadAppliedRoleAssignmentId_g` shows the [role assignment](#role-assignments) that was honored when authorizing the request.

## <a id="disable-local-auth"></a> Enforcing RBAC as the only authentication method

In situations where you want to force clients to connect to Azure Cosmos DB through RBAC exclusively, you have the option to disable the account's primary/secondary keys. When doing so, any incoming request using either a primary/secondary key or a resource token will be actively rejected.
-### Using Azure Resource Manager templates
+### Use Azure Resource Manager templates
When creating or updating your Azure Cosmos DB account using Azure Resource Manager templates, set the `disableLocalAuth` property to `true`:
cosmos-db Create Mongodb Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/create-mongodb-nodejs.md
+ms.devlang: javascript
Last updated 08/26/2021
cosmos-db Mongodb Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-time-to-live.md
ms.devlang: csharp, java, javascript Previously updated : 12/26/2018 Last updated : 02/16/2022 # Expire data with Azure Cosmos DB's API for MongoDB
Time-to-live (TTL) functionality allows the database to automatically expire dat
## TTL indexes

To enable TTL universally on a collection, a ["TTL index" (time-to-live index)](mongodb-indexing.md) needs to be created. The TTL index is an index on the `_ts` field with an "expireAfterSeconds" value.
-JavaScript example:
+MongoShell example:
-```js
+```
globaldb:PRIMARY> db.coll.createIndex({"_ts":1}, {expireAfterSeconds: 10})
{
    "_t" : "CreateIndexesResponse",
cosmos-db Tutorial Develop Nodejs Part 4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/tutorial-develop-nodejs-part-4.md
description: Part 4 of the tutorial series on creating a MongoDB app with Angula
+ms.devlang: javascript
Last updated 08/26/2021
cosmos-db Monitor Normalized Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-normalized-request-units.md
Previously updated : 09/16/2021 Last updated : 02/17/2022
Last updated 09/16/2021
Azure Monitor for Azure Cosmos DB provides a metrics view to monitor your account and create dashboards. The Azure Cosmos DB metrics are collected by default; this feature doesn't require you to enable or configure anything explicitly. The **Normalized RU Consumption** metric is used to see how well saturated the partition key ranges are with respect to the traffic. Azure Cosmos DB distributes the throughput equally across all the partition key ranges. This metric provides a per-second view of the maximum throughput utilization for a partition key range. Use this metric to calculate the RU/s usage across partition key ranges for a given container. If you see a high percentage of request unit utilization across all partition key ranges in Azure Monitor, you should increase the throughput to meet the needs of your workload.
-Example - Normalized utilization is defined as the max of the RU/s utilization across all partition key ranges. For example, suppose your max throughput is 20,000 RU/s and you have two partition key ranges, P_1 and P_2, each capable of scaling to 10,000 RU/s. In a given second, if P_1 has used 6000 RUs, and P_2 8000 RUs, the normalized utilization is MAX(6000 RU / 10,000 RU, 8000 RU / 10,000 RU) = 0.8.
+Example - Normalized utilization is defined as the max of the RU/s utilization across all partition key ranges. For example, suppose your max throughput is 24,000 RU/s and you have three partition key ranges, P_1, P_2, and P_3, each capable of scaling to 8,000 RU/s. In a given second, if P_1 has used 6000 RUs, P_2 7000 RUs, and P_3 5000 RUs, the normalized utilization is MAX(6000 RU / 8000 RU, 7000 RU / 8000 RU, 5000 RU / 8000 RU) = 0.875.
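To make the calculation concrete, here's a hypothetical C# helper (not part of any SDK) that reproduces the arithmetic above:

```csharp
using System;
using System.Linq;

class NormalizedUtilizationExample
{
    // Normalized utilization is the maximum, across partition key ranges,
    // of (RUs consumed in a second / RUs available to that range).
    static double NormalizedUtilization(double[] consumedRus, double perRangeLimit) =>
        consumedRus.Max(used => used / perRangeLimit);

    static void Main()
    {
        // The example from the text: three ranges, each able to scale to 8,000 RU/s.
        double result = NormalizedUtilization(new[] { 6000.0, 7000.0, 5000.0 }, 8000.0);
        Console.WriteLine(result); // 0.875
    }
}
```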
## What to expect and do when normalized RU/s is higher
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Transactions (in stored procedures or triggers) are allowed only against items i
## Replica sets
-Each physical partition consists of a set of replicas, also referred to as a [*replica set*](global-dist-under-the-hood.md). Each replica set hosts an instance of the database engine. A replica set makes the data stored within the physical partition durable, highly available, and consistent. Each replica that makes up the physical partition inherits the partition's storage quota. All replicas of a physical partition collectively support the throughput that's allocated to the physical partition. Azure Cosmos DB automatically manages replica sets.
+Each physical partition consists of a set of replicas, also referred to as a [*replica set*](global-dist-under-the-hood.md). Each replica hosts an instance of the database engine. A replica set makes the data stored within the physical partition durable, highly available, and consistent. Each replica that makes up the physical partition inherits the partition's storage quota. All replicas of a physical partition collectively support the throughput that's allocated to the physical partition. Azure Cosmos DB automatically manages replica sets.
Typically, smaller containers only require a single physical partition, but they will still have at least four replicas.
cosmos-db Create Sql Api Dotnet V4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-dotnet-v4.md
+ms.devlang: csharp
Last updated 08/26/2021
cosmos-db Create Sql Api Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/create-sql-api-dotnet.md
+ms.devlang: csharp
Last updated 08/26/2021
cosmos-db How To Manage Consistency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-manage-consistency.md
Previously updated : 07/02/2021 Last updated : 02/16/2022 ms.devlang: csharp, java, javascript
client = cosmos_client.CosmosClient(self.account_endpoint, {
## Utilize session tokens
-One of the consistency levels in Azure Cosmos DB is *Session* consistency. This is the default level applied to Cosmos accounts by default. When working with *Session* consistency, the client will use a session token internally with each read/query request to ensure that the set consistency level is maintained.
+One of the consistency levels in Azure Cosmos DB is *Session* consistency, and it's the level applied to Cosmos accounts by default. When working with Session consistency, each new write request to Azure Cosmos DB is assigned a new SessionToken. The CosmosClient will use this token internally with each read/query request to ensure that the set consistency level is maintained.
+
+In some scenarios you need to manage this session yourself. Consider a web application with multiple nodes, where each node has its own instance of CosmosClient. If you want these nodes to participate in the same session (to be able to read your own writes consistently across web tiers), you have to send the SessionToken from the FeedResponse<T> of the write action to the end user using a cookie or some other mechanism, and have that token flow back to the web tier and ultimately the CosmosClient for subsequent reads. If you're using a round-robin load balancer that doesn't maintain session affinity between requests, such as the Azure Load Balancer, the read could potentially land on a different node from the one that served the write request, where the session was created.
+
+If you don't flow the Azure Cosmos DB SessionToken across as described above, you could end up with inconsistent read results for a period of time.
To manage session tokens manually, get the session token from the response and set it per request. If you don't need to manage session tokens manually, you don't need to use these samples. The SDK keeps track of session tokens automatically. If you don't set the session token manually, by default, the SDK uses the most recent session token.
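As a concrete illustration, here's a minimal C# sketch of capturing the session token from a write and replaying it on a later read with the .NET SDK; the item type and property names are hypothetical:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

// Hypothetical item type used only for this illustration.
public record TodoItem(string id, string pk, string text);

public static class SessionTokenFlow
{
    // Write on one web node, capture the session token, and replay it on a
    // read that may be served by a different CosmosClient instance.
    public static async Task<TodoItem> WriteThenReadAsync(Container container, TodoItem item)
    {
        ItemResponse<TodoItem> write =
            await container.CreateItemAsync(item, new PartitionKey(item.pk));

        // Flow this value to the end user (for example, in a cookie) and back.
        string sessionToken = write.Headers.Session;

        ItemResponse<TodoItem> read = await container.ReadItemAsync<TodoItem>(
            item.id,
            new PartitionKey(item.pk),
            new ItemRequestOptions { SessionToken = sessionToken });

        return read.Resource;
    }
}
```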
cosmos-db Troubleshoot Dot Net Sdk Slow Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/troubleshoot-dot-net-sdk-slow-request.md
description: Learn how to diagnose and fix slow requests when using Azure Cosmos
Previously updated : 02/02/2022 Last updated : 02/17/2022
Consider the following when developing your application:
* Use Direct + TCP connectivity mode
* Avoid High CPU. Make sure to look at Max CPU and not average, which is the default for most logging systems. Anything above roughly 40% can increase the latency.
+## Metadata operations
+
+Don't verify that a database or container exists by calling `Create...IfNotExistsAsync` or `Read...Async` in the hot path or before every item operation. Do this validation only on application startup, and only when it's necessary because you expect those resources to be deleted (otherwise it's not needed). These metadata operations generate extra end-to-end latency, have no SLA, and have their own separate [limitations](https://aka.ms/CosmosDB/sql/errors/metadata-429) that don't scale like data operations.
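For example, here's a minimal C# sketch of the recommended pattern, with placeholder endpoint, database, container, and partition key names: resolve the references once at startup and reuse them on the hot path:

```csharp
using Microsoft.Azure.Cosmos;

// Startup only: create (or validate) the database and container once,
// then cache the Container reference for the lifetime of the process.
CosmosClient client = new CosmosClient("<account-endpoint>", "<auth-key-or-token>");
Database database = await client.CreateDatabaseIfNotExistsAsync("appdb");
Container container = await database.CreateContainerIfNotExistsAsync("items", "/pk");

// Hot path: use the cached reference directly; no metadata calls here.
ItemResponse<dynamic> response =
    await container.ReadItemAsync<dynamic>("item-id", new PartitionKey("pk-value"));
```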
## <a name="capture-diagnostics"></a>Capture the diagnostics
cosmos-db Tutorial Springboot Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/tutorial-springboot-azure-kubernetes-service.md
description: This tutorial demonstrates how to deploy a Spring Boot application
+ms.devlang: java
Last updated 10/01/2021
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-dotnet.md
description: This quickstart shows how to access the Azure Cosmos DB Table API f
+ms.devlang: csharp
Last updated 09/26/2021
cost-management-billing Quick Acm Cost Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-acm-cost-analysis.md
Title: Quickstart - Explore Azure costs with cost analysis
description: This quickstart helps you use cost analysis to explore and analyze your Azure organizational costs. Previously updated : 12/03/2021 Last updated : 02/17/2022
Here's a view of Azure service costs for the current month, grouped by Service n
![Grouped daily accumulated view showing example Azure service costs for last month](./media/quick-acm-cost-analysis/grouped-daily-accum-view.png)

The following image shows resource group names. You can group by tag to view total costs per tag or use the **Cost by resource** view to see all tags for a particular resource.

![Full data for current view showing resource group names](./media/quick-acm-cost-analysis/full-data-set.png)
By default, cost analysis shows all usage and purchase costs as they're accrued
![Change between actual and amortized cost to see reservation purchases spread across the term and allocated to the resources that used the reservation](./media/quick-acm-cost-analysis/metric-picker.png)
-Amortized cost breaks down reservation purchases into daily chunks and spreads them over the duration of the reservation term. For example, instead of seeing a $365 purchase on January 1, you'll see a $1.00 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated by using the specific resources that used the reservation. For example, if that $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of `UnusedReservation`. Unused reservation costs can be seen only when viewing amortized cost.
+Amortized cost breaks down reservation purchases into daily chunks and spreads them over the duration of the reservation term. Most reservation terms are one or three years. Let's look at a one-year reservation example. Instead of seeing a $365 purchase on January 1, you'll see a $1.00 purchase every day from January 1 to December 31. In addition to basic amortization, these costs are also reallocated and associated by using the specific resources that used the reservation. For example, if that $1.00 daily charge was split between two virtual machines, you'd see two $0.50 charges for the day. If part of the reservation isn't utilized for the day, you'd see one $0.50 charge associated with the applicable virtual machine and another $0.50 charge with a charge type of `UnusedReservation`. Unused reservation costs can be seen only when viewing amortized cost.
+
+If you buy a one-year reservation on May 26 with an upfront payment, the amortized cost is divided by 365 (assuming it's not a leap year) and spread from May 26 through May 25 of the next year. If you pay monthly, the monthly fee is divided by the number of days in that month and spread evenly across May 26 through June 25, with the next month's fee spread across June 26 through July 25.
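To illustrate the daily split described above, here's a toy C# sketch (not the billing system's actual logic) that amortizes an upfront one-year purchase evenly across the term:

```csharp
using System;

class AmortizationSketch
{
    static void Main()
    {
        // Assumptions from the example above: $365 paid upfront for a
        // one-year term starting May 26, with no leap day in the term.
        DateOnly start = new DateOnly(2022, 5, 26);
        int termDays = 365;
        decimal upfront = 365m;

        decimal dailyAmortized = upfront / termDays;  // $1.00 per day

        DateOnly end = start.AddDays(termDays - 1);   // May 25 of the next year
        Console.WriteLine($"{dailyAmortized:C} per day from {start} through {end}");
    }
}
```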
Because of the change in how costs are represented, it's important to note that actual cost and amortized cost views will show different total numbers. In general, the total cost of months with a reservation purchase will decrease when viewing amortized costs, and months following a reservation purchase will increase. Amortization is available only for reservation purchases and doesn't apply to Azure Marketplace purchases at this time.
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
tags: billing
Previously updated : 01/06/2022 Last updated : 02/17/2022
If you have a Microsoft Customer Agreement, you must be a billing profile Owner,
## Download your Azure invoices (.pdf)
-For most subscriptions, you can download your invoice from the Azure portal. If you have a Microsoft Customer Agreement, see Download invoices for a billing profile.
+For most subscriptions, you can download your invoice from the Azure portal. If you have a Microsoft Customer Agreement, see [Download invoices for a Microsoft Customer Agreement](#download-invoices-for-a-microsoft-customer-agreement).
### Download invoices for an individual subscription
Invoices are generated for each [billing profile](../understand/mca-overview.md#
5. Click on the download button at the end of the row.
6. In the download context menu, select **Invoice**.
-If you don't see an invoice for the last billing period, see **Additional information**. <!-- Fix this -->
+If you don't see an invoice for the last billing period, see the following section.
+
### <a name="noinvoice"></a> Why don't I see an invoice for the last billing period?

There could be several reasons that you don't see an invoice:
If you have a Microsoft Customer Agreement, you can opt in to get your invoice i
You can opt out of getting your invoice by email by following the steps above and clicking **Opt out**. All Owners, Contributors, Readers, and Invoice managers will be opted out of getting the invoice by email, too. If you are a Reader, you cannot change the email invoice preference.
+## Azure Government support for invoices
+
+Azure Government users use the same agreement types as other Azure users.
+
+Azure Government billing owners can opt in to receive invoices by email. However, they can't allow others to get invoices by email.
+
+To download your invoice, follow the steps above at [Download invoices for an individual subscription](#download-invoices-for-an-individual-subscription).
+
## Next steps

To learn more about your invoice and charges, see:
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
tags: billing
Previously updated : 05/17/2021 Last updated : 02/17/2022
You must have an account admin role on a subscription or a support plan to opt i
## Share subscription and support plan invoice
-You may want to share the invoice for your subscription and support plan every month with your accounting team or send them to one of your other email addresses.
+You may want to share the invoice for your subscription and support plan every month with your accounting team or send it to one of your other email addresses.
1. Follow the steps in [Get your subscription's and support plan's invoices in email](#get-mosp-subscription-invoice-in-email) and select **Configure recipients**.

   [![Screenshot that shows a user selecting configure recipients](./media/download-azure-invoice/invoice-article-step03.png)](./media/download-azure-invoice/invoice-article-step03-zoomed.png#lightbox)
You may want to share your invoice every month with your accounting team or send
   [![Screenshot that shows additional recipients for the invoice email](./media/download-azure-invoice/mca-billing-profile-add-invoice-recipients.png)](./media/download-azure-invoice/mca-billing-profile-add-invoice-recipients-zoomed.png#lightbox)

1. Select **Save**.
+## Azure Government support for invoices
+
+Azure Government users use the same agreement types as other Azure users.
+
+Azure Government billing owners can opt in to receive invoices by email. However, they can't allow others to get invoices by email.
+
+To download your invoice, follow the steps above at [Download your MOSP Azure subscription invoice](#download-your-mosp-azure-subscription-invoice).
+ ## Why you might not see an invoice <a name="noinvoice"></a>
data-catalog Data Catalog Adopting Data Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-adopting-data-catalog.md
Title: Approach and process for adopting Azure Data Catalog description: This article presents an approach and process for organizations considering adopting Azure Data Catalog, including defining a vision, identifying key business use cases, and choosing a pilot project.--++ Previously updated : 08/01/2019 Last updated : 02/17/2022 # Approach and process for adopting Azure Data Catalog [!INCLUDE [Azure Purview redirect](../../includes/data-catalog-use-purview.md)]
-This article helps you get started adopting **Azure Data Catalog** in your organization. To successfully adopt **Azure Data Catalog**, you focus on three key items: define your vision, identify key business use cases within your organization, and choose a pilot project.
+This article helps you get started adopting **Azure Data Catalog** in your organization. To successfully adopt **Azure Data Catalog**, focus on three key items: define your vision, identify key business use cases within your organization, and choose a pilot project.
## Introducing the Azure Data Catalog
This article presents an approach to getting started using **Azure Data Catalog*
## Azure Data Catalog adoption plan
-An **Azure Data Catalog** adoption plan describes how the benefits of using the service are communicated to stakeholders and users, and what kind of training you provide to users. One key success driver to adopt Data Catalog is how effectively you communicate the value of the service to users and stakeholders. The primary audiences in an initial adoption plan are the users of the service. No matter how much buy-in you get from stakeholders, if the users, or customers, of your Data Catalog offering do not incorporate it into their usage, the adoption will not be successful. Therefore, this article assumes you have stakeholder buy-in, and focuses on creating a plan for user adoption of Data Catalog.
+An **Azure Data Catalog** adoption plan describes how the benefits of using the service are communicated to stakeholders and users, and what kind of training you provide to users. One key success driver to adopt Data Catalog is how effectively you communicate the value of the service to users and stakeholders. The primary audiences in an initial adoption plan are the users of the service. No matter how much buy-in you get from stakeholders, if the users, or customers, of your Data Catalog offering don't incorporate it into their usage, the adoption won't be successful. Therefore, this article assumes you have stakeholder buy-in, and focuses on creating a plan for user adoption of Data Catalog.
An effective adoption plan successfully engages people in what is possible with Data Catalog and gives them the information and guidance to achieve it. Users need to understand the value that Data Catalog provides to help them succeed in their jobs. When people see how Data Catalog can help them achieve more results with data, the value of adopting Data Catalog becomes clear. Change is hard, so an effective plan needs to take the challenges of change into account. An adoption plan helps you communicate what is critical for people to succeed and achieve their goals. A typical plan explains how Data Catalog is going to make users' lives easier, and includes the following parts:
-* **Vision Statement** - It helps you concisely discuss the adoption plan with users, and stakeholders. It's your elevator pitch.
+* **Vision Statement** - It helps you concisely discuss the adoption plan with users and stakeholders. It's your elevator pitch.
* **Pilot team and Influencers** - Learning from a pilot team and influencers helps you refine how to introduce teams and users to Data Catalog. Influencers can peer coach fellow users. It also helps you identify blockers and drivers to adoption.
* **Plan for Communications and Buzz** - It helps users to understand how Data Catalog can help them, and can foster organic adoption within teams, and ultimately the entire organization.
* **Training Plan** - Comprehensive training generally leads to adoption success and favorable results.
Here are some tips to define an **Azure Data Catalog** adoption plan.
The first step to define an **Azure Data Catalog** adoption plan is to write an aspirational description of what you are trying to accomplish. It's best to keep the vision statement fairly broad, yet concise enough to define specific short-term and long-term goals.
-Here are some tips to help you define you vision:
+Here are some tips to help you define your vision:
* **Identify the key deployment driver** - Think about the specific data source management needs from the business that can be addressed with Data Catalog. It helps you state the top advantages of using Data Catalog. For example, there may be common data sources that all new employees need to learn about and use, or important and complex data sources that only a few key people deeply understand. **Azure Data Catalog** can help make these data sources easy to discover and understand, so that these well-known pain points can be addressed directly and early in the adoption of the service.
* **Be crisp and clear** - A clear understanding of the vision gets everyone on the same page about the value Data Catalog brings to the organization, and how the vision supports organizational goals.
* **Inspire people to want to use Data Catalog** - Your vision and communication plan should inspire folks to recognize that Data Catalog can benefit them to find and connect to data sources to achieve more with data.
* **Specify goals and timeline** - It ensures your adoption plan has specific, achievable deliverables. A timeline keeps everyone focused, and allows for checkpoints to measure success.
-Here is an example vision statement for a Data Catalog adoption plan for the fictitious company called Adventure Works:
+Here's an example vision statement for a Data Catalog adoption plan for the fictitious company called Adventure Works:
**Azure Data Catalog** empowers the Adventure Works Finance team to collaborate on key data sources, so every team member can easily find and use the data they need and can share their knowledge with the team as a whole.
Once you have a crisp vision statement, you should identify a suitable pilot pro
To identify use cases that are relevant to Data Catalog, engage with experts from various business units to identify relevant use cases and business issues to solve. Review existing challenges people have identifying and understanding data assets. For example, do teams learn about data assets only after asking several people in the organization who has relevant data sources?
-It is best to choose use cases that represent low hanging fruit: cases that are important yet have a high likelihood of success if solved with Data Catalog.
+It's best to choose use cases that represent low hanging fruit: cases that are important yet have a high likelihood of success if solved with Data Catalog.
Here are some tips to identify use cases:

* **Define the goals of the team** - How does the team achieve their goals? Don't focus on Data Catalog yet since you want to be objective at this stage. Remember it's about the business results, not about the technology.
* **Define the business problem** - What are the issues faced by the team regarding finding and learning about data assets? For example, information about important data sources may be found in Excel workbooks in a network folder, and the team may spend much time locating the workbooks.
-* **Understand team culture related to change** - Many adoption challenges relate to resistance to change rather than the implementation of a new tool. How a team responds to change is important when identifying use cases since the existing process could be in place because "this is how we've always done it" or "if it ain't broke, why fix it?". Adopting any new tool or process is always easiest when the people affected understand the value to be realized from the change, and appreciate the importance of the problems to be solved.
-* **Keep focus related to data assets** - When discussing the business problems a team faces, you need to "cut through the weeds", and focus on what's relevant to leveraging enterprise data assets more effectively.
+* **Understand team culture related to change** - Many adoption challenges relate to resistance to change rather than the implementation of a new tool. How a team responds to change is important when identifying use cases since the existing process could be in place because "this is how we've always done it" or "if it isn't broken, why fix it?". Adopting any new tool or process is always easiest when the people affected understand the value to be realized from the change, and appreciate the importance of the problems to be solved.
+* **Keep focus related to data assets** - When discussing the business problems a team faces, you need to "cut through the weeds", and focus on what's relevant to using enterprise data assets more effectively.
Here are some example use cases related to Data Catalog:
Once you identify some use cases for Data Catalog, common scenarios should emerg
## Choose a Data Catalog pilot project
-A key success factor is to simplify, and start small. A well-defined pilot with a constrained scope helps keep the project moving forward without getting bogged down with a project that is too complex, or which has too many participants. But it is also important to include a mix of users, from early adopters to skeptics. Users who embrace the solution help you refine your future communication and buzz plan. Skeptics help you identify and address blocking issues. As skeptics become champions, you can use their feedback to identify success drivers.
+A key success factor is to simplify, and start small. A well-defined pilot with a constrained scope helps keep the project moving forward without getting bogged down with a project that is too complex, or which has too many participants. But it's also important to include a mix of users, from early adopters to skeptics. Users who embrace the solution help you refine your future communication and buzz plan. Skeptics help you identify and address blocking issues. As skeptics become champions, you can use their feedback to identify success drivers.
Your pilot plan should phase in business goals that you want to achieve with Data Catalog. As you learn from the initial pilot, you can expand your user base. An initial closed pilot is good to establish measurable success, but the ultimate goal is for organic or viral growth. With organic growth of Data Catalog, users are in control of their own data usage, and can influence and encourage others to adopt and contribute to the catalog.
Your first pilot project should have a few individuals who produce data and cons
**Data Consumers** are people with expertise on the use of the data to solve business problems. For example, Nancy is a business analyst who uses Adventure Works SQL Server data sources to analyze data.
-One of the business problems that **Azure Data Catalog** solves is to connect **Data Producers** to **Data Consumers**. It does so by serving as a central repository for information about enterprise data sources. Using Data Catalog, David registers Adventure Works and SQL Server data sources. Using crowdsourcing any user who discovers this data source can share her opinions on the data, in addition to using the data they have discovered. For example, Nancy discovers the data sources by searching the catalog, and shares her specialized knowledge about the data. Now, others in the organization benefit from shared knowledge by searching the data catalog.
+One of the business problems that **Azure Data Catalog** solves is to connect **Data Producers** to **Data Consumers**. It does so by serving as a central repository for information about enterprise data sources. David registers Adventure Works and SQL Server data sources in Data Catalog. Using crowdsourcing, any user who discovers these data sources can share their opinions on the data, in addition to using the data they've discovered. For example, Nancy discovers the data sources by searching the catalog, and shares her specialized knowledge about the data. Now, others in the organization benefit from shared knowledge by searching the data catalog.
* To learn more about registering data sources, see [Register data sources](data-catalog-get-started.md).
* To learn more about discovering data sources, see [Search data sources](data-catalog-get-started.md).
One of the business problems that **Azure Data Catalog** solves is to connect **
For most enterprise pilot projects, you should seed the catalog with high-value data sources so that business users can quickly see the value of Data Catalog. IT is a good place to start identifying common data sources that would be of interest to your pilot team. For supported data sources, such as SQL Server, we recommend using the **Azure Data Catalog** data source registration tool. With the data source registration tool, you can register a wide range of data sources including SQL Server and Oracle databases, and SQL Server Reporting Services reports. For a complete list of current data sources, see [Azure Data Catalog supported data sources](data-catalog-dsr.md).
-Once you have identified and registered key data sources, it is possible to also import data source descriptions stored in other locations. The Data Catalog API allows developers to load descriptions and annotations from another location, such as the Excel Workbook that David created and maintains.
+Once you have identified and registered key data sources, it's possible to also import data source descriptions stored in other locations. The Data Catalog API allows developers to load descriptions and annotations from another location, such as the Excel Workbook that David created and maintains.
The next section describes an example project from the Adventure Works company.
After the pilot project is in place, it's time to execute your Data Catalog adop
### Execute
-At this point you have identified use cases for Data Catalog, and you have identified your first project. In addition, you have registered the key Adventure Works data sources and have added information from the existing Excel workbook using the tool that IT built. Now it's time to work with the pilot team to start the Data Catalog adoption process.
+At this point you have identified use cases for Data Catalog, and you've identified your first project. In addition, you've registered the key Adventure Works data sources and have added information from the existing Excel workbook using the tool that IT built. Now it's time to work with the pilot team to start the Data Catalog adoption process.
Here are some tips to get you started:

* **Create excitement** - Business users get excited if they believe that **Azure Data Catalog** makes their lives easier. Try to make the conversation around the solution and the benefits it provides, not the technology.
* **Facilitate change** - Start small and communicate the plan to business users. To be successful, it's crucial to involve users from the beginning so that they influence the outcome and develop a sense of ownership about the solution.
* **Groom early adopters** - Early adopters are business users that are passionate about what they do, and excited to evangelize the benefits of **Azure Data Catalog** to their peers.
-* **Target training** - Business users do not need to know everything about Data Catalog, so target training to address specific team goals. Focus on what users do, and how some of their tasks might change, to incorporate **Azure Data Catalog** into their daily routine.
+* **Target training** - Business users don't need to know everything about Data Catalog, so target training to address specific team goals. Focus on what users do, and how some of their tasks might change, to incorporate **Azure Data Catalog** into their daily routine.
* **Be willing to fail** - If the pilot isn't achieving the desired results, reevaluate, and identify areas to change - fix problems in the pilot before moving on to a larger scope.

Before your pilot team jumps into using Data Catalog, schedule a kick-off meeting to discuss expectations for the pilot project, and provide initial training.

### Set expectations
-Setting expectations and goals helps business users focus on specific deliverables. To keep the project on track, assign regular (for example: daily or weekly based on the scope and duration of the pilot) homework assignments. One of the most valuable capabilities of Data Catalog is crowdsourcing data assets so that business users can benefit from knowledge of enterprise data. A great homework assignment is for each pilot team member to register or annotate at least one data source they have used. See [Register a data source](data-catalog-get-started.md) and [How to annotate data sources](data-catalog-get-started.md).
+Setting expectations and goals helps business users focus on specific deliverables. To keep the project on track, assign regular (for example: daily or weekly based on the scope and duration of the pilot) homework assignments. One of the most valuable capabilities of Data Catalog is crowdsourcing data assets so that business users can benefit from knowledge of enterprise data. A great homework assignment is for each pilot team member to register or annotate at least one data source they've used. See [Register a data source](data-catalog-get-started.md) and [How to annotate data sources](data-catalog-get-started.md).
Meet with the team on a regular schedule to review some of the annotations. Good annotations about data sources are at the heart of a successful Data Catalog adoption because they provide meaningful data source insights in a central location. Without good annotations, knowledge about data sources remains scattered throughout the enterprise. See [How to annotate data sources](data-catalog-get-started.md).
-And, the ultimate test of the project is whether users can discover and understand the data sources they need to use. Pilot users should regularly test the catalog to ensure that the data sources they use for their day to day work are relevant. When a required data source is missing or not properly annotated, this should serve as a reminder to register additional data sources or to provide additional annotations. This practice does not only add value to the pilot effort but also builds effective habits that carry over to other teams after the pilot is complete.
+And, the ultimate test of the project is whether users can discover and understand the data sources they need to use. Pilot users should regularly test the catalog to ensure that the data sources they use for their day-to-day work are relevant. When a required data source is missing or not properly annotated, this should serve as a reminder to register more data sources or to provide more annotations. This practice not only adds value to the pilot effort but also builds effective habits that carry over to other teams after the pilot is complete.
### Provide training
Training should be enough to get the users started, and tailored to the specific
## Conclusion
-Once your pilot team is running fairly smoothly and you have achieved your initial goals, you should expand Data Catalog adoption to more teams. Apply and refine what you learned from your pilot project to expand Data Catalog throughout your organization.
+Once your pilot team is running fairly smoothly and you've achieved your initial goals, you should expand Data Catalog adoption to more teams. Apply and refine what you learned from your pilot project to expand Data Catalog throughout your organization.
-The early adopters who participated in the pilot can be helpful to get the word out about the benefits of adopting Data Catalog. They can share with other teams how Data Catalog helped their team solve business problems, discover data sources more easily, and share insights about the data sources they use. For example, early adopters on the Adventure Works pilot team could show others how easy it is to find information about Adventure Works data assets that were once hard to find and understand.
+The early adopters who participated in the pilot can be helpful to communicate about the benefits of adopting Data Catalog. They can share with other teams how Data Catalog helped their team solve business problems, discover data sources more easily, and share insights about the data sources they use. For example, early adopters on the Adventure Works pilot team could show others how easy it is to find information about Adventure Works data assets that were once hard to find and understand.
This article was about getting started with **Azure Data Catalog** in your organization. We hope you were able to start a Data Catalog pilot project, and expand Data Catalog throughout your organization.
data-catalog Data Catalog Developer Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-developer-concepts.md
Title: Azure Data Catalog developer concepts description: Introduction to the key concepts in Azure Data Catalog conceptual model, as exposed through the Catalog REST API.--++ Previously updated : 08/01/2019 Last updated : 02/16/2022 # Azure Data Catalog developer concepts
Last updated 08/01/2019
Microsoft **Azure Data Catalog** is a fully managed cloud service that provides capabilities for data source discovery and for crowdsourcing data source metadata. Developers can use the service via its REST APIs. Understanding the concepts implemented in the service is important for developers to successfully integrate with **Azure Data Catalog**.
-## Key concepts
+## Key concepts
+ The **Azure Data Catalog** conceptual model is based on four key concepts: The **Catalog**, **Users**, **Assets**, and **Annotations**.
-![Azure Data Catalog conceptual model illustration](./media/data-catalog-developer-concepts/concept2.png)
### Catalog
-A **Catalog** is the top-level container for all the metadata that an organization stores. There is one **Catalog** allowed per Azure Account. Catalogs are tied to an Azure subscription, but only one **Catalog** can be created for any given Azure account, even though an account can have multiple subscriptions.
+
+A **Catalog** is the top-level container for all the metadata that an organization stores. There's one **Catalog** allowed per Azure Account. Catalogs are tied to an Azure subscription, but only one **Catalog** can be created for any given Azure account, even though an account can have multiple subscriptions.
A catalog contains **Users** and **Assets**.

### Users
-Users are security principals that have permissions to perform actions (search the catalog, add, edit or remove items, etc.) in the Catalog.
+
+**Users** are security principals that have permissions to perform actions (search the catalog, add, edit or remove items, etc.) in the Catalog.
There are several different roles a user can have. For information on roles, see the section Roles and Authorization.
Individual users and security groups can be added.
Azure Data Catalog uses Azure Active Directory for identity and access management. Each Catalog user must be a member of the Active Directory for the account.

### Assets

A **Catalog** contains data assets. **Assets** are the unit of granularity managed by the catalog. The granularity of an asset varies by data source. For SQL Server or Oracle Database, an asset can be a Table or a View. For SQL Server Analysis Services, an asset can be a Measure, a Dimension, or a Key Performance Indicator (KPI). For SQL Server Reporting Services, an asset is a Report.
-An **Asset** is the thing you add or remove from a Catalog. It is the unit of result you get back from **Search**.
+An **Asset** is the thing you add or remove from a Catalog. It's the unit of result you get back from **Search**.
An **Asset** is made up of its name, location, and type, and annotations that further describe it.

### Annotations
-Annotations are items that represent metadata about Assets.
+
+**Annotations** are items that represent metadata about Assets.
Examples of annotations are description, tags, schema, documentation, etc. See the [Asset Object model section](#asset-object-model) for a full list of the asset types and annotation types.

## Crowdsourcing annotations and user perspective (multiplicity of opinion)
-A key aspect of Azure Data Catalog is how it supports the crowdsourcing of metadata in the system. As opposed to a wiki approach ΓÇô where there is only one opinion and the last writer wins ΓÇô the Azure Data Catalog model allows multiple opinions to live side by side in the system.
+
+A key aspect of Azure Data Catalog is how it supports the crowdsourcing of metadata in the system. As opposed to a wiki approach – where there's only one opinion and the last writer wins – the Azure Data Catalog model allows multiple opinions to live side by side in the system.
This approach reflects the real world of enterprise data where different users can have different perspectives on a given asset:
The UX can then choose how to display the combination. There are three different
* A third pattern is "last writer wins". In this pattern, only the most recent value typed in is shown. friendlyName is an example of this pattern.

## Asset object model

As introduced in the Key Concepts section, the **Azure Data Catalog** object model includes items, which can be assets or annotations. Items have properties, which can be optional or required. Some properties apply to all items. Some properties apply to all assets. Some properties apply only to specific asset types.

### System properties
-<table><tr><td><b>Property Name</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr><tr><td>timestamp</td><td>DateTime</td><td>The last time the item was modified. This field is generated by the server when an item is inserted and every time an item is updated. The value of this property is ignored on input of publish operations.</td></tr><tr><td>ID</td><td>Uri</td><td>Absolute url of the item (read-only). It is the unique addressable URI for the item. The value of this property is ignored on input of publish operations.</td></tr><tr><td>type</td><td>String</td><td>The type of the asset (read-only).</td></tr><tr><td>etag</td><td>String</td><td>A string corresponding to the version of the item that can be used for optimistic concurrency control when performing operations that update items in the catalog. "*" can be used to match any value.</td></tr></table>
+
+|Property name |Data type |Comments|
+|-|--||
+|timestamp |DateTime |The last time the item was modified. This field is generated by the server when an item is inserted and every time an item is updated. The value of this property is ignored on input of publish operations. |
+|ID|Uri |Absolute url of the item (read-only). It's the unique addressable URI for the item. The value of this property is ignored on input of publish operations.|
+|type|String |The type of the asset (read-only).|
+|etag|String |A string corresponding to the version of the item that can be used for optimistic concurrency control when performing operations that update items in the catalog. "*" can be used to match any value.|
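
To make the shape concrete, here's a rough sketch of how these system properties might appear on an item returned by the API; the `id`, `type`, and `etag` values below are placeholders rather than values from a real catalog:

```json
{
  "timestamp": "2022-02-16T17:00:08.000Z",
  "id": "https://api.azuredatacatalog.com/catalogs/default/views/tables/<asset-id>",
  "type": "Table",
  "etag": "5a4e6f2c"
}
```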
### Common properties

These properties apply to all root asset types and all annotation types.
-<table>
-<tr><td><b>Property Name</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr>
-<tr><td>fromSourceSystem</td><td>Boolean</td><td>Indicates whether item's data is derived from a source system (like SQL Server Database, Oracle Database) or authored by a user.</td></tr>
-</table>
+|Property name |Data type |Comments|
+|-|--||
+|fromSourceSystem |Boolean |Indicates whether the item's data is derived from a source system (like SQL Server Database, Oracle Database) or authored by a user. |
### Common root properties
-<p>
+ These properties apply to all root asset types.
-<table><tr><td><b>Property Name</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr><tr><td>name</td><td>String</td><td>A name derived from the data source location information</td></tr><tr><td>dsl</td><td>DataSourceLocation</td><td>Uniquely describes the data source and is one of the identifiers for the asset. (See dual identity section). The structure of the dsl varies by the protocol and source type.</td></tr><tr><td>dataSource</td><td>DataSourceInfo</td><td>More detail on the type of asset.</td></tr><tr><td>lastRegisteredBy</td><td>SecurityPrincipal</td><td>Describes the user who most recently registered this asset. Contains both the unique ID for the user (the upn) and a display name (lastName and firstName).</td></tr><tr><td>containerID</td><td>String</td><td>ID of the container asset for the data source. This property is not supported for the Container type.</td></tr></table>
+|Property name |Data type |Comments|
+|-|--||
+|name |String |A name derived from the data source location information |
+|dsl |DataSourceLocation |Uniquely describes the data source and is one of the identifiers for the asset. (See dual identity section). The structure of the dsl varies by the protocol and source type. |
+|dataSource |DataSourceInfo |More detail on the type of asset. |
+|lastRegisteredBy |SecurityPrincipal |Describes the user who most recently registered this asset. Contains both the unique ID for the user (the upn) and a display name (lastName and firstName). |
+|containerID |String |ID of the container asset for the data source. This property isn't supported for the Container type. |
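
As a hedged illustration of the common root properties, the sketch below shows what they might look like for a SQL Server table registered over the `tds` protocol (whose identity properties `server`, `database`, `schema`, and `object` are described in the Asset identity section later in this article); all values are illustrative:

```json
{
  "name": "Product",
  "dsl": {
    "protocol": "tds",
    "authentication": "windows",
    "address": {
      "server": "myserver.contoso.com",
      "database": "AdventureWorks",
      "schema": "Production",
      "object": "Product"
    }
  },
  "dataSource": {
    "sourceType": "SQL Server",
    "objectType": "Table"
  },
  "lastRegisteredBy": {
    "upn": "david@contoso.com",
    "firstName": "David",
    "lastName": "Contoso"
  }
}
```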
### Common non-singleton annotation properties

These properties apply to all non-singleton annotation types (annotations that can occur multiple times per asset).
-<table>
-<tr><td><b>Property Name</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr>
-<tr><td>key</td><td>String</td><td>A user specified key, which uniquely identifies the annotation in the current collection. The key length cannot exceed 256 characters.</td></tr>
-</table>
+|Property name |Data type |Comments|
+|-|--||
+|key |String |A user-specified key, which uniquely identifies the annotation in the current collection. The key length can't exceed 256 characters. |
### Root asset types
-Root asset types are those types that represent the various types of data assets that can be registered in the catalog. For each root type, there is a view, which describes asset and annotations included in the view. View name should be used in the corresponding {view_name} url segment when publishing an asset using REST API.
-<table><tr><td><b>Asset Type (View name)</b></td><td><b>Additional Properties</b></td><td><b>Data Type</b></td><td><b>Allowed Annotations</b></td><td><b>Comments</b></td></tr><tr><td>Table ("tables")</td><td></td><td></td><td>Description<p>FriendlyName<p>Tag<p>Schema<p>ColumnDescription<p>ColumnTag<p> Expert<p>Preview<p>AccessInstruction<p>TableDataProfile<p>ColumnDataProfile<p>ColumnDataClassification<p>Documentation<p></td><td>A Table represents any tabular data. For example: SQL Table, SQL View, Analysis Services Tabular Table, Analysis Services Multidimensional dimension, Oracle Table, etc. </td></tr><tr><td>Measure ("measures")</td><td></td><td></td><td>Description<p>FriendlyName<p>Tag<p>Expert<p>AccessInstruction<p>Documentation<p></td><td>This type represents an Analysis Services measure.</td></tr><tr><td></td><td>measure</td><td>Column</td><td></td><td>Metadata describing the measure</td></tr><tr><td></td><td>isCalculated </td><td>Boolean</td><td></td><td>Specifies if the measure is calculated or not.</td></tr><tr><td></td><td>measureGroup</td><td>String</td><td></td><td>Physical container for measure</td></tr><td>KPI ("kpis")</td><td></td><td></td><td>Description<p>FriendlyName<p>Tag<p>Expert<p>AccessInstruction<p>Documentation</td><td></td></tr><tr><td></td><td>measureGroup</td><td>String</td><td></td><td>Physical container for measure</td></tr><tr><td></td><td>goalExpression</td><td>String</td><td></td><td>An MDX numeric expression or a calculation that returns the target value of the KPI.</td></tr><tr><td></td><td>valueExpression</td><td>String</td><td></td><td>An MDX numeric expression that returns the actual value of the KPI.</td></tr><tr><td></td><td>statusExpression</td><td>String</td><td></td><td>An MDX expression that represents the state of the KPI at a specified point in time.</td></tr><tr><td></td><td>trendExpression</td><td>String</td><td></td><td>An MDX expression that evaluates the value of the KPI over time. The trend can be any time-based criterion that is useful in a specific business context.</td>
-<tr><td>Report ("reports")</td><td></td><td></td><td>Description<p>FriendlyName<p>Tag<p>Expert<p>AccessInstruction<p>Documentation<p></td><td>This type represents a SQL Server Reporting Services report </td></tr><tr><td></td><td>assetCreatedDate</td><td>String</td><td></td><td></td></tr><tr><td></td><td>assetCreatedBy</td><td>String</td><td></td><td></td></tr><tr><td></td><td>assetModifiedDate</td><td>String</td><td></td><td></td></tr><tr><td></td><td>assetModifiedBy</td><td>String</td><td></td><td></td></tr><tr><td>Container ("containers")</td><td></td><td></td><td>Description<p>FriendlyName<p>Tag<p>Expert<p>AccessInstruction<p>Documentation<p></td><td>This type represents a container of other assets such as a SQL database, an Azure Blobs container, or an Analysis Services model.</td></tr></table>
+Root asset types are those types that represent the various types of data assets that can be registered in the catalog. For each root type, there's a view that describes the asset and the annotations included in the view. The view name should be used in the corresponding {view_name} URL segment when publishing an asset using the REST API.
+
+|Asset type (view name) |Additional properties |Data type|Allowed annotations|Comments|
+|-|--||||
+|Table ("tables") | ||Description|A Table represents any tabular data. For example: SQL Table, SQL View, Analysis Services Tabular Table, Analysis Services Multidimensional dimension, Oracle Table, etc.|
+||||FriendlyName||
+||||Tag||
+||||Schema||
+||||ColumnDescription||
+||||ColumnTag||
+||||Expert||
+||||Preview||
+||||AccessInstruction||
+||||TableDataProfile||
+||||ColumnDataProfile||
+||||ColumnDataClassification||
+||||Documentation||
+|Measure ("measures") |||Description|This type represents an Analysis Services measure.|
+||||FriendlyName||
+||||Tag||
+||||Expert||
+||||AccessInstruction||
+||||Documentation||
+||measure|Column||Metadata describing the measure.|
+||isCalculated |Boolean||Specifies if the measure is calculated or not.|
+||measureGroup |String||Physical container for measure.|
+|KPI ("kpis") |||Description||
+||||FriendlyName||
+||||Tag||
+||||Expert||
+||||AccessInstruction||
+||||Documentation||
+||measureGroup|String||Physical container for measure.|
+||goalExpression|String||An MDX numeric expression or a calculation that returns the target value of the KPI.|
+||valueExpression|String||An MDX numeric expression that returns the actual value of the KPI.|
+||statusExpression|String||An MDX expression that represents the state of the KPI at a specified point in time.|
+||trendExpression|String||An MDX expression that evaluates the value of the KPI over time. The trend can be any time-based criterion that is useful in a specific business context.|
+|Report ("reports") |||Description|This type represents a SQL Server Reporting Services report.|
+||||FriendlyName||
+||||Tag||
+||||Expert||
+||||AccessInstruction||
+||||Documentation||
+||assetCreatedDate|String|||
+||assetCreatedBy|String|||
+||assetModifiedDate|String|||
+||assetModifiedBy|String|||
+|Container ("containers") |||Description|This type represents a container of other assets such as a SQL database, an Azure Blobs container, or an Analysis Services model.|
+||||FriendlyName||
+||||Tag||
+||||Expert||
+||||AccessInstruction||
+||||Documentation||
### Annotation types
-Annotation types represent types of metadata that can be assigned to other types within the catalog.
-
-<table>
-<tr><td><b>Annotation Type (Nested view name)</b></td><td><b>Additional Properties</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr>
-
-<tr><td>Description ("descriptions")</td><td></td><td></td><td>This property contains a description for an asset. Each user of the system can add their own description. Only that user can edit the Description object. (Admins and Asset owners can delete the Description object but not edit it). The system maintains users' descriptions separately. Thus there is an array of descriptions on each asset (one for each user who has contributed their knowledge about the asset, in addition to possibly one that contains information derived from the data source).</td></tr>
-<tr><td></td><td>description</td><td>string</td><td>A short description (2-3 lines) of the asset</td></tr>
-
-<tr><td>Tag ("tags")</td><td></td><td></td><td>This property defines a tag for an asset. Each user of the system can add multiple tags for an asset. Only the user who created Tag objects can edit them. (Admins and Asset owners can delete the Tag object but not edit it). The system maintains users' tags separately. Thus there is an array of Tag objects on each asset.</td></tr>
-<tr><td></td><td>tag</td><td>string</td><td>A tag describing the asset.</td></tr>
-
-<tr><td>FriendlyName ("friendlyName")</td><td></td><td></td><td>This property contains a friendly name for an asset. FriendlyName is a singleton annotation - only one FriendlyName can be added to an asset. Only the user who created FriendlyName object can edit it. (Admins and Asset owners can delete the FriendlyName object but not edit it). The system maintains users' friendly names separately.</td></tr>
-<tr><td></td><td>friendlyName</td><td>string</td><td>A friendly name of the asset.</td></tr>
-
-<tr><td>Schema ("schema")</td><td></td><td></td><td>The Schema describes the structure of the data. It lists the attribute (column, attribute, field, etc.) names, types as well other metadata. This information is all derived from the data source. Schema is a singleton annotation - only one Schema can be added for an asset.</td></tr>
-<tr><td></td><td>columns</td><td>Column[]</td><td>An array of column objects. They describe the column with information derived from the data source.</td></tr>
-
-<tr><td>ColumnDescription ("columnDescriptions")</td><td></td><td></td><td>This property contains a description for a column. Each user of the system can add their own descriptions for multiple columns (at most one per column). Only the user who created ColumnDescription objects can edit them. (Admins and Asset owners can delete the ColumnDescription object but not edit it). The system maintains these user's column descriptions separately. Thus there is an array of ColumnDescription objects on each asset (one per column for each user who has contributed their knowledge about the column in addition to possibly one that contains information derived from the data source). The ColumnDescription is loosely bound to the Schema so it can get out of sync. The ColumnDescription might describe a column that no longer exists in the schema. It is up to the writer to keep description and schema in sync. The data source may also have columns description information and they are additional ColumnDescription objects that would be created when running the tool.</td></tr>
-<tr><td></td><td>columnName</td><td>String</td><td>The name of the column this description refers to.</td></tr>
-<tr><td></td><td>description</td><td>String</td><td>a short description (2-3 lines) of the column.</td></tr>
-
-<tr><td>ColumnTag ("columnTags")</td><td></td><td></td><td>This property contains a tag for a column. Each user of the system can add multiple tags for a given column and can add tags for multiple columns. Only the user who created ColumnTag objects can edit them. (Admins and Asset owners can delete the ColumnTag object but not edit it). The system maintains these users'
-column tags separately. Thus there is an array of ColumnTag objects on each asset. The ColumnTag is loosely bound to the schema so it can get out of sync. The ColumnTag might describe a column that no longer exists in the schema. It is up to the writer to keep column tag and schema in sync.</td></tr>
-<tr><td></td><td>columnName</td><td>String</td><td>The name of the column this tag refers to.</td></tr>
-<tr><td></td><td>tag</td><td>String</td><td>A tag describing the column.</td></tr>
-
-<tr><td>Expert ("experts")</td><td></td><td></td><td>This property contains a user who is considered an expert in the data set. The expertsΓÇÖ opinions(descriptions) bubble to the top of the UX when listing descriptions. Each user can specify their own experts. Only that user can edit the experts' object. (Admins and Asset owners can delete the Expert objects but not edit it).</td></tr>
-<tr><td></td><td>expert</td><td>SecurityPrincipal</td><td></td></tr>
-<tr><td>Preview ("previews")</td><td></td><td></td><td>The preview contains a snapshot of the top 20 rows of data for the asset. Preview only make sense for some types of assets (it makes sense for Table but not for Measure).</td></tr>
-<tr><td></td><td>preview</td><td>object[]</td><td>Array of objects that represent a column. Each object has a property mapping to a column with a value for that column for the row.</td></tr>
-
-<tr><td>AccessInstruction ("accessInstructions")</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>mimeType</td><td>string</td><td>The mime type of the content.</td></tr>
-<tr><td></td><td>content</td><td>string</td><td>The instructions for how to get access to this data asset. The content could be a URL, an email address, or a set of instructions.</td></tr>
-
-<tr><td>TableDataProfile ("tableDataProfiles")</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>numberOfRows</td></td><td>int</td><td>The number of rows in the data set</td></tr>
-<tr><td></td><td>size</td><td>long</td><td>The size in bytes of the data set. </td></tr>
-<tr><td></td><td>schemaModifiedTime</td><td>string</td><td>The last time the schema was modified</td></tr>
-<tr><td></td><td>dataModifiedTime</td><td>string</td><td>The last time the data set was modified (data was added, modified, or delete)</td></tr>
-
-<tr><td>ColumnsDataProfile ("columnsDataProfiles")</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>columns</td></td><td>ColumnDataProfile[]</td><td>An array of column data profiles.</td></tr>
-
-<tr><td>ColumnDataClassification ("columnDataClassifications")</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>columnName</td><td>String</td><td>The name of the column this classification refers to.</td></tr>
-<tr><td></td><td>classification</td><td>String</td><td>The classification of the data in this column.</td></tr>
-
-<tr><td>Documentation ("documentation")</td><td></td><td></td><td>A given asset can have only one documentation associated with it.</td></tr>
-<tr><td></td><td>mimeType</td><td>string</td><td>The mime type of the content.</td></tr>
-<tr><td></td><td>content</td><td>string</td><td>The documentation content.</td></tr>
+Annotation types represent types of metadata that can be assigned to other types within the catalog.
-</table>
+|Annotation type (nested view name) |Additional properties |Data type|Comments|
+|-|--|||
+|Description ("descriptions") |||This property contains a description for an asset. Each user of the system can add their own description. Only that user can edit the Description object. (Admins and Asset owners can delete the Description object but not edit it). The system maintains users' descriptions separately. Thus there's an array of descriptions on each asset (one for each user who has contributed their knowledge about the asset, in addition to possibly one that contains information derived from the data source).|
+||description|string|A short description (2-3 lines) of the asset.|
+|Tag ("tags") |||This property defines a tag for an asset. Each user of the system can add multiple tags for an asset. Only the user who created Tag objects can edit them. (Admins and Asset owners can delete the Tag object but not edit it). The system maintains users' tags separately. Thus there's an array of Tag objects on each asset.|
+||tag|string|A tag describing the asset.|
+|FriendlyName ("friendlyName") |||This property contains a friendly name for an asset. FriendlyName is a singleton annotation - only one FriendlyName can be added to an asset. Only the user who created FriendlyName object can edit it. (Admins and Asset owners can delete the FriendlyName object but not edit it). The system maintains users' friendly names separately.|
+||friendlyName|string|A friendly name of the asset.|
+|Schema ("schema") |||The Schema describes the structure of the data. It lists the attribute (column, attribute, field, etc.) names and types, as well as other metadata. This information is all derived from the data source. Schema is a singleton annotation - only one Schema can be added for an asset.|
+||columns|Column[]|An array of column objects. They describe the column with information derived from the data source.|
+|ColumnDescription ("columnDescriptions") |||This property contains a description for a column. Each user of the system can add their own descriptions for multiple columns (at most one per column). Only the user who created ColumnDescription objects can edit them. (Admins and Asset owners can delete the ColumnDescription object but not edit it). The system maintains these users' column descriptions separately. Thus there's an array of ColumnDescription objects on each asset (one per column for each user who has contributed their knowledge about the column, in addition to possibly one that contains information derived from the data source). The ColumnDescription is loosely bound to the Schema so it can get out of sync. The ColumnDescription might describe a column that no longer exists in the schema. It's up to the writer to keep description and schema in sync. The data source may also have column description information, which appears as additional ColumnDescription objects created when running the tool.|
+||columnName|String|The name of the column this description refers to.|
+||description|String|A short description (2-3 lines) of the column.|
+|ColumnTag ("columnTags") |||This property contains a tag for a column. Each user of the system can add multiple tags for a given column and can add tags for multiple columns. Only the user who created ColumnTag objects can edit them. (Admins and Asset owners can delete the ColumnTag object but not edit it). The system maintains these users' column tags separately. Thus there's an array of ColumnTag objects on each asset. The ColumnTag is loosely bound to the schema so it can get out of sync. The ColumnTag might describe a column that no longer exists in the schema. It's up to the writer to keep column tag and schema in sync.|
+||columnName|String|The name of the column this tag refers to.|
+||tag|String|A tag describing the column.|
+|Expert ("experts") |||This property contains a user who is considered an expert in the data set. The experts' opinions (descriptions) bubble to the top of the UX when listing descriptions. Each user can specify their own experts. Only that user can edit the experts' object. (Admins and Asset owners can delete the Expert objects but not edit them).|
+||expert|SecurityPrincipal||
+|Preview ("previews") |||The preview contains a snapshot of the top 20 rows of data for the asset. Previews only make sense for some types of assets (a preview makes sense for a Table, but not for a Measure).|
+||preview|object[]|Array of objects that represent a column. Each object has a property mapping to a column with a value for that column for the row.|
+|AccessInstruction ("accessInstructions") ||||
+||mimeType|string|The mime type of the content.|
+||content|string|The instructions for how to get access to this data asset. The content could be a URL, an email address, or a set of instructions.|
+|TableDataProfile ("tableDataProfiles") ||||
+||numberOfRows|int|The number of rows in the data set.|
+||size|long|The size in bytes of the data set.|
+||schemaModifiedTime|string|The last time the schema was modified.|
+||dataModifiedTime|string|The last time the data set was modified (data was added, modified, or deleted).|
+|ColumnsDataProfile ("columnsDataProfiles") ||||
+||columns|ColumnDataProfile[]|An array of column data profiles.|
+|ColumnDataClassification ("columnDataClassifications") ||||
+||columnName|String|The name of the column this classification refers to.|
+||classification|String|The classification of the data in this column.|
+|Documentation ("documentation") |||A given asset can have only one documentation associated with it.|
+||mimeType|string|The mime type of the content.|
+||content|string|The documentation content.|
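
Combining the common annotation properties with the type-specific ones, a single Description annotation might carry a payload along these lines; this is a sketch based on the properties in the tables above, not an exact wire format:

```json
{
  "fromSourceSystem": false,
  "description": "Contains one row per product, including pricing and inventory attributes."
}
```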
### Common types
-Common types can be used as the types for properties, but are not Items.
-
-<table>
-<tr><td><b>Common Type</b></td><td><b>Properties</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr>
-<tr><td>DataSourceInfo</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>sourceType</td><td>string</td><td>Describes the type of data source. For example: SQL Server, Oracle Database, etc. </td></tr>
-<tr><td></td><td>objectType</td><td>string</td><td>Describes the type of object in the data source. For example: Table, View for SQL Server.</td></tr>
-
-<tr><td>DataSourceLocation</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>protocol</td><td>string</td><td>Required. Describes a protocol used to communicate with the data source. For example: `tds` for SQL Server, `oracle` for Oracle, etc. Refer to [Data source reference specification - DSL Structure](data-catalog-dsr.md) for the list of currently supported protocols.</td></tr>
-<tr><td></td><td>address</td><td>Dictionary&lt;string, object&gt;</td><td>Required. Address is a set of data specific to the protocol that is used to identify the data source being referenced. The address data scoped to a particular protocol, meaning it is meaningless without knowing the protocol.</td></tr>
-<tr><td></td><td>authentication</td><td>string</td><td>Optional. The authentication scheme used to communicate with the data source. For example: windows, oauth, etc.</td></tr>
-<tr><td></td><td>connectionProperties</td><td>Dictionary&lt;string, object&gt;</td><td>Optional. Additional information on how to connect to a data source.</td></tr>
-
-<tr><td>SecurityPrincipal</td><td></td><td></td><td>The backend does not perform any validation of provided properties against Azure Active Directory during publishing.</td></tr>
-<tr><td></td><td>upn</td><td>string</td><td>Unique email address of user. Must be specified if objectId is not provided or in the context of "lastRegisteredBy" property, otherwise optional.</td></tr>
-<tr><td></td><td>objectId</td><td>Guid</td><td>User or security group Azure Active Directory identity. Optional. Must be specified if upn is not provided, otherwise optional.</td></tr>
-<tr><td></td><td>firstName</td><td>string</td><td>First name of user (for display purposes). Optional. Only valid in the context of "lastRegisteredBy" property. Cannot be specified when providing security principal for "roles", "permissions" and "experts".</td></tr>
-<tr><td></td><td>lastName</td><td>string</td><td>Last name of user (for display purposes). Optional. Only valid in the context of "lastRegisteredBy" property. Cannot be specified when providing security principal for "roles", "permissions" and "experts".</td></tr>
-
-<tr><td>Column</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>name</td><td>string</td><td>Name of the column or attribute.</td></tr>
-<tr><td></td><td>type</td><td>string</td><td>data type of the column or attribute. The Allowable types depend on data sourceType of the asset. Only a subset of types is supported.</td></tr>
-<tr><td></td><td>maxLength</td><td>int</td><td>The maximum length allowed for the column or attribute. Derived from data source. Only applicable to some source types.</td></tr>
-<tr><td></td><td>precision</td><td>byte</td><td>The precision for the column or attribute. Derived from data source. Only applicable to some source types.</td></tr>
-<tr><td></td><td>isNullable</td><td>Boolean</td><td>Whether the column is allowed to have a null value or not. Derived from data source. Only applicable to some source types.</td></tr>
-<tr><td></td><td>expression</td><td>string</td><td>If the value is a calculated column, this field contains the expression that expresses the value. Derived from data source. Only applicable to some source types.</td></tr>
-
-<tr><td>ColumnDataProfile</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>columnName </td><td>string</td><td>The name of the column</td></tr>
-<tr><td></td><td>type </td><td>string</td><td>The type of the column</td></tr>
-<tr><td></td><td>min </td><td>string</td><td>The minimum value in the data set</td></tr>
-<tr><td></td><td>max </td><td>string</td><td>The maximum value in the data set</td></tr>
-<tr><td></td><td>avg </td><td>double</td><td>The average value in the data set</td></tr>
-<tr><td></td><td>stdev </td><td>double</td><td>The standard deviation for the data set</td></tr>
-<tr><td></td><td>nullCount </td><td>int</td><td>The count of null values in the data set</td></tr>
-<tr><td></td><td>distinctCount </td><td>int</td><td>The count of distinct values in the data set</td></tr>
-</table>
+
+Common types can be used as the types for properties, but aren't Items.
+
+|Common type |Properties |Data type|Comments|
+|-|--|||
+|DataSourceInfo|sourceType|string|Describes the type of data source. For example: SQL Server, Oracle Database, etc. |
+||objectType|string|Describes the type of object in the data source. For example: Table, View for SQL Server.|
+|DataSourceLocation|protocol|string|Required. Describes a protocol used to communicate with the data source. For example: `tds` for SQL Server, `oracle` for Oracle, etc. Refer to [Data source reference specification - DSL Structure](data-catalog-dsr.md) for the list of currently supported protocols.|
+||address|Dictionary\<string,object\>|Required. Address is a set of data specific to the protocol that is used to identify the data source being referenced. The address data is scoped to a particular protocol, meaning it's meaningless without knowing the protocol.|
+||authentication|string|Optional. The authentication scheme used to communicate with the data source. For example: windows, oauth, etc.|
+||connectionProperties|Dictionary\<string,object\>|Optional. Additional information on how to connect to a data source.|
+|SecurityPrincipal|||The backend doesn't perform any validation of provided properties against Azure Active Directory during publishing.|
+||upn|string|Unique email address of the user. Must be specified if objectId isn't provided, or in the context of the "lastRegisteredBy" property; otherwise optional.|
+||objectId|Guid|User or security group Azure Active Directory identity. Must be specified if upn isn't provided; otherwise optional.|
+||firstName|string|First name of the user (for display purposes). Optional. Only valid in the context of the "lastRegisteredBy" property. Can't be specified when providing a security principal for "roles", "permissions", and "experts".|
+||lastName|string|Last name of the user (for display purposes). Optional. Only valid in the context of the "lastRegisteredBy" property. Can't be specified when providing a security principal for "roles", "permissions", and "experts".|
+|Column|name|string|Name of the column or attribute.|
+||type|string|Data type of the column or attribute. The allowable types depend on the data sourceType of the asset. Only a subset of types is supported.|
+||maxLength|int|The maximum length allowed for the column or attribute. Derived from data source. Only applicable to some source types.|
+||precision|byte|The precision for the column or attribute. Derived from data source. Only applicable to some source types.|
+||isNullable|Boolean|Whether the column is allowed to have a null value or not. Derived from data source. Only applicable to some source types.|
+||expression|string|If the value is a calculated column, this field contains the expression that expresses the value. Derived from data source. Only applicable to some source types.|
+|ColumnDataProfile|columnName|string|Name of the column.|
+||type|string|The type of the column.|
+||min|string|The minimum value in the data set.|
+||max|string|The maximum value in the data set.|
+||avg|double|The average value in the data set.|
+||stdev|double|The standard deviation for the data set.|
+||nullCount|int|The count of null values in the data set.|
+||distinctCount |int|The count of distinct values in the data set.|
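
For example, a Schema annotation's `columns` property holds an array of the Column common type described above. Here's a minimal, hedged sketch (the column names and types are illustrative):

```json
{
  "fromSourceSystem": true,
  "columns": [
    { "name": "ProductID", "type": "int", "isNullable": false },
    { "name": "Name", "type": "nvarchar", "maxLength": 50, "isNullable": false }
  ]
}
```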
## Asset identity

Azure Data Catalog uses "protocol" and identity properties from the "address" property bag of the DataSourceLocation "dsl" property to generate the identity of the asset, which is used to address the asset inside the Catalog. For example, the Tabular Data Stream (TDS) protocol has identity properties "server", "database", "schema", and "object". The combinations of the protocol and the identity properties are used to generate the identity of the SQL Server Table Asset.

Azure Data Catalog provides several built-in data source protocols, which are listed at [Data source reference specification - DSL Structure](data-catalog-dsr.md). The set of supported protocols can be extended programmatically (refer to the Data Catalog REST API reference). Administrators of the Catalog can register custom data source protocols. The following tables describe the properties needed to register a custom protocol.

### Custom data source protocol specification
-<table>
-<tr><td><b>Type</b></td><td><b>Properties</b></td><td><b>Data Type</b></td><td><b>Comments</b></td></tr>
-<tr><td>DataSourceProtocol</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>namespace</td><td>string</td><td>The namespace of the protocol. Namespace must be from 1 to 255 characters long, contain one or more non-empty parts separated by dot (.). Each part must be from 1 to 255 characters long, start with a letter and contain only letters and numbers.</td></tr>
-<tr><td></td><td>name</td><td>string</td><td>The name of the protocol. Name must be from 1 to 255 characters long, start with a letter and contain only letters, numbers, and the dash (-) character.</td></tr>
-<tr><td></td><td>identityProperties</td><td>DataSourceProtocolIdentityProperty[]</td><td>List of identity properties, must contain at least one, but no more than 20 properties. For example: "server", "database", "schema", "object" are identity properties of the "tds" protocol.</td></tr>
-<tr><td></td><td>identitySets</td><td>DataSourceProtocolIdentitySet[]</td><td>List of identity sets. Defines sets of identity properties, which represent valid asset's identity. Must contain at least one, but no more than 20 sets. For example: {"server", "database", "schema" and "object"} is an identity set for the TDS protocol, which defines identity of SQL Server Table asset.</td></tr>
+There are three different types of data source protocol specifications. The types are listed below, each followed by a table of its properties.
+
+#### DataSourceProtocol
+
+|Properties |Data type|Comments|
+|--|||
+|namespace|string|The namespace of the protocol. Namespace must be from 1 to 255 characters long, contain one or more non-empty parts separated by dot (.). Each part must be from 1 to 255 characters long, start with a letter and contain only letters and numbers. |
+|name|string|The name of the protocol. Name must be from 1 to 255 characters long, start with a letter and contain only letters, numbers, and the dash (-) character.|
+|identityProperties|DataSourceProtocolIdentityProperty[]|List of identity properties, must contain at least one, but no more than 20 properties. For example: "server", "database", "schema", "object" are identity properties of the "tds" protocol.|
+|identitySets|DataSourceProtocolIdentitySet[]|List of identity sets. Defines sets of identity properties, which represent valid asset's identity. Must contain at least one, but no more than 20 sets. For example: {"server", "database", "schema" and "object"} is an identity set for the TDS protocol, which defines identity of SQL Server Table asset.|
-<tr><td>DataSourceProtocolIdentityProperty</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>name</td><td>string</td><td>The name of the property. Name must be from 1 to 100 characters long, start with a letter and can contain only letters and numbers.</td></tr>
-<tr><td></td><td>type</td><td>string</td><td>The type of the property. Supported values: "bool", boolean", "byte", "guid", "int", "integer", "long", "string", "url"</td></tr>
-<tr><td></td><td>ignoreCase</td><td>bool</td><td>Indicates whether case should be ignored when using property's value. Can only be specified for properties with "string" type. Default value is false.</td></tr>
-<tr><td></td><td>urlPathSegmentsIgnoreCase</td><td>bool[]</td><td>Indicates whether case should be ignored for each segment of the url's path. Can only be specified for properties with "url" type. Default value is [false].</td></tr>
+#### DataSourceProtocolIdentityProperty
-<tr><td>DataSourceProtocolIdentitySet</td><td></td><td></td><td></td></tr>
-<tr><td></td><td>name</td><td>string</td><td>The name of the identity set.</td></tr>
-<tr><td></td><td>properties</td><td>string[]</td><td>The list of identity properties included into this identity set. It cannot contain duplicates. Each property referenced by identity set must be defined in the list of "identityProperties" of the protocol.</td></tr>
+|Properties |Data type|Comments|
+|--|||
+|name|string|The name of the property. Name must be from 1 to 100 characters long, start with a letter and can contain only letters and numbers.|
+|type|string|The type of the property. Supported values: "bool", "boolean", "byte", "guid", "int", "integer", "long", "string", "url"|
+|ignoreCase|bool|Indicates whether case should be ignored when using property's value. Can only be specified for properties with "string" type. Default value is false.|
+|urlPathSegmentsIgnoreCase|bool[]|Indicates whether case should be ignored for each segment of the url's path. Can only be specified for properties with "url" type. Default value is [false].|
-</table>
+#### DataSourceProtocolIdentitySet
+
+|Properties |Data type|Comments|
+|--|||
+|name|string|The name of the identity set.|
+|properties|string[]|The list of identity properties included in this identity set. It can't contain duplicates. Each property referenced by the identity set must be defined in the list of "identityProperties" of the protocol.|
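
Putting the three types together, a custom protocol definition might look like the following sketch; the namespace, protocol name, and identity property names are hypothetical, and the exact registration payload is defined by the Data Catalog REST API reference:

```json
{
  "namespace": "contoso.datacatalog",
  "name": "custom-protocol",
  "identityProperties": [
    { "name": "server", "type": "string", "ignoreCase": true },
    { "name": "object", "type": "string", "ignoreCase": true }
  ],
  "identitySets": [
    { "name": "default", "properties": [ "server", "object" ] }
  ]
}
```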
## Roles and authorization

Microsoft Azure Data Catalog provides authorization capabilities for CRUD operations on assets and annotations.

The Azure Data Catalog uses two authorization mechanisms:
The Azure Data Catalog uses two authorization mechanisms:
* Permission-based authorization

### Roles
-There are three roles: **Administrator**, **Owner**, and **Contributor**. Each role has its scope and rights, which are summarized in the following table.
-<table><tr><td><b>Role</b></td><td><b>Scope</b></td><td><b>Rights</b></td></tr><tr><td>Administrator</td><td>Catalog (all assets/annotations in the Catalog)</td><td>Read
-Delete
-ViewRoles
+There are three roles: **Administrator**, **Owner**, and **Contributor**. Each role has its scope and rights, which are summarized in the following table.
-ChangeOwnership
-ChangeVisibility
-ViewPermissions</td></tr><tr><td>Owner</td><td>Each asset (root item)</td><td>Read
-Delete
-ViewRoles
+|Role |Scope |Rights|
+|-|--||
+|Administrator|Catalog (all assets/annotations in the Catalog)|Read, Delete, ViewRoles, ChangeOwnership, ChangeVisibility, ViewPermissions|
+|Owner|Each asset (root item)|Read, Delete, ViewRoles, ChangeOwnership, ChangeVisibility, ViewPermissions|
+|Contributor|Each individual asset and annotation|Read*, Update, Delete, ViewRoles|
-ChangeOwnership
-ChangeVisibility
-ViewPermissions</td></tr><tr><td>Contributor</td><td>Each individual asset and annotation</td><td>Read
-Update
-Delete
-ViewRoles
-Note: all the rights are revoked if the Read right on the item is revoked from the Contributor</td></tr></table>
+> [!NOTE]
+> *All the rights are revoked if the Read right on the item is revoked from the Contributor.
> [!NOTE]
> **Read**, **Update**, **Delete**, **ViewRoles** rights are applicable to any item (asset or annotation) while **TakeOwnership**, **ChangeOwnership**, **ChangeVisibility**, **ViewPermissions** are only applicable to the root asset.
->
> **Delete** right applies to an item and any subitems or single item underneath it. For example, deleting an asset also deletes any annotations for that asset.
->
### Permissions

A permission is a list of access control entries. Each access control entry assigns a set of rights to a security principal. Permissions can only be specified on an asset (that is, a root item) and apply to the asset and any subitems.

During the **Azure Data Catalog** preview, only the **Read** right is supported in the permissions list, to enable the scenario of restricting the visibility of an asset.
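
Conceptually, each access control entry pairs a security principal with a set of rights, along the lines of the following sketch; the exact JSON shape of the **permissions** property is defined by the Data Catalog REST API reference, and the objectId below is a placeholder:

```json
{
  "permissions": [
    {
      "principal": { "objectId": "00000000-1111-2222-3333-444444444444" },
      "rights": [ { "right": "Read" } ]
    }
  ]
}
```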
During the **Azure Data Catalog** preview, only **Read** right is supported in t
By default, any authenticated user has the **Read** right for any item in the catalog unless visibility is restricted to the set of principals in the permissions.

## REST API

**PUT** and **POST** view item requests can be used to control roles and permissions: in addition to the item payload, two system properties can be specified: **roles** and **permissions**.

> [!NOTE]
> **permissions** is only applicable to a root item.
->
> The **Owner** role is only applicable to a root item.
->
> By default, when an item is created in the catalog, its **Contributor** is set to the currently authenticated user. If an item should be updatable by everyone, **Contributor** should be set to the &lt;Everyone&gt; special security principal in the **roles** property when the item is first published (refer to the following example). **Contributor** cannot be changed and stays the same during the lifetime of an item (even an **Administrator** or **Owner** doesn't have the right to change the **Contributor**). The only value supported for the explicit setting of the **Contributor** is &lt;Everyone&gt;: **Contributor** can only be the user who created an item, or &lt;Everyone&gt;.
->
### Examples

**Set Contributor to &lt;Everyone&gt; when publishing an item.** Special security principal &lt;Everyone&gt; has objectId "00000000-0000-0000-0000-000000000201".

**POST** https:\//api.azuredatacatalog.com/catalogs/default/views/tables/?api-version=2016-03-30

> [!NOTE]
> Some HTTP client implementations may automatically reissue requests in response to a 302 from the server, but typically strip Authorization headers from the request. Since the Authorization header is required to make requests to Azure Data Catalog, you must ensure the Authorization header is still provided when reissuing a request to a redirect location specified by Azure Data Catalog. The following sample code demonstrates it using the .NET HttpWebRequest object.
->
-**Body**
+#### Body
+ ```json
+ {
+     "roles": [
+         {
+             "role": "Contributor",
+             "members": [
+                 {
+                     "objectId": "00000000-0000-0000-0000-000000000201"
+                 }
+             ]
+         }
+     ]
+ }
+ ```
Special security principal &lt;Everyone&gt; has objectId "00000000-0000-0000-000
> [!NOTE]
> In PUT it's not required to specify an item payload in the body: PUT can be used to update just roles and/or permissions.
->
## Next steps

[Azure Data Catalog REST API reference](/rest/api/datacatalog/)
data-catalog Data Catalog How To Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-how-to-documentation.md
Title: How to document data sources in Azure Data Catalog description: How-to article highlighting how to document data assets in Azure Data Catalog.--++ Previously updated : 08/01/2019 Last updated : 02/17/2022 # How to document data sources in Azure Data Catalog [!INCLUDE [Azure Purview redirect](../../includes/data-catalog-use-purview.md)]

## Introduction

**Microsoft Azure Data Catalog** is a fully managed cloud service that serves as a system of registration and system of discovery for enterprise data sources. In other words, **Azure Data Catalog** is all about helping people discover, *understand*, and use data sources, and helping organizations to get more value from their existing data. When a data source is registered with **Azure Data Catalog**, its metadata is copied and indexed by the service, but the story doesn't end there. **Azure Data Catalog** also allows users to provide their own complete documentation that can describe the usage and common scenarios for the data source.
In [How to annotate data sources](data-catalog-how-to-annotate.md), you learn th
Tags and descriptions are great for simple annotations. However, to help data consumers better understand the use of a data source, and business scenarios for a data source, an expert can provide complete, detailed documentation. It's easy to document a data source. Select a data asset or container, and choose **Documentation**.
-![Documentation tab in a Data Catalog](media/data-catalog-documentation/data-catalog-documentation.png)
## Documenting data assets

**Azure Data Catalog** documentation allows you to use your Data Catalog as a content repository and create a complete narrative of your data assets. You can explore detailed content that describes containers and tables. If you already have content in another content repository, such as SharePoint or a file share, you can add links to the asset documentation that reference this existing content. This feature makes your existing documents more discoverable.

> [!NOTE]
> Documentation is not included in the search index.
->
-![Documentation tab and hyperlink to web link](media/data-catalog-documentation/data-catalog-documentation2.png)
The level of documentation can range from describing the characteristics and value of a data asset container to a detailed description of table schema within a container. The level of documentation provided should be driven by your business needs. But in general, here are a few pros and cons of documenting data assets:
The level of documentation can range from describing the characteristics and val
* Document containers and tables: The most comprehensive approach, but might introduce more maintenance of the documents.

## Summary

Documenting data sources with **Azure Data Catalog** can create a narrative about your data assets in as much detail as you need. By using links, you can link to content stored in an existing content repository, which brings your existing docs and data assets together. Once your users discover appropriate data assets, they can have a complete set of documentation.
data-catalog Data Catalog Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-samples.md
Title: Azure Data Catalog developer samples description: This article provides an overview of the available developer samples for the Data Catalog REST API. --++ Previously updated : 08/01/2019 Last updated : 02/16/2022 # Azure Data Catalog developer samples
Get started developing Azure Data Catalog apps using the Data Catalog REST API.
* [Get started with Azure Data Catalog](https://github.com/Azure-Samples/data-catalog-dotnet-get-started/) The get started sample shows you how to authenticate with Azure AD to Register, Search, and Delete a data asset using the Data Catalog REST API.
-
+ * [Get started with Azure Data Catalog using Service Principal](https://github.com/Azure-Samples/data-catalog-dotnet-service-principal-get-started/) This sample shows you how to register, search, and delete a data asset using the Data Catalog REST API. This sample uses service principal authentication.
Get started developing Azure Data Catalog apps using the Data Catalog REST API.
* [Publish relationships into Azure Data Catalog](https://github.com/Azure-Samples/data-catalog-dotnet-publish-relationships/) This sample shows you how you can programmatically publish relationship information to a data catalog.
-
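+To give a flavor of the REST API these samples build on, here's a minimal PowerShell sketch that searches a catalog for assets. It's illustrative only: it assumes `$accessToken` already holds a valid Azure AD token for the Data Catalog resource, and the catalog name and search term shown are placeholders.
+
+```powershell
+# A sketch: search Azure Data Catalog for assets that match a term.
+# Assumes $accessToken is a valid Azure AD bearer token for https://api.azuredatacatalog.com.
+$catalogName = "DefaultCatalog"   # the default catalog name; yours may differ
+$searchTerms = "sales"            # hypothetical search term
+$uri = "https://api.azuredatacatalog.com/catalogs/$catalogName/search/search" +
+       "?searchTerms=$searchTerms&count=10&api-version=2016-03-30"
+
+$response = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Authorization = "Bearer $accessToken" }
+"Found $($response.totalResults) matching assets"
+```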
+ ## Next steps+ [Azure Data Catalog REST API reference](/rest/api/datacatalog/)
data-factory Concepts Data Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-overview.md
Mapping data flows are available in the following regions in ADF:
| Germany Non-Regional (Sovereign) | | | Germany North (Public) | | | Germany Northeast (Sovereign) | |
-| Germany West Central (Public) | |
+| Germany West Central (Public) | ✓ |
| Japan East | ✓ |
-| Japan West | |
+| Japan West | ✓ |
| Korea Central | ✓ | | Korea South | | | North Central US | ✓ |
Mapping data flows are available in the following regions in ADF:
| South Africa North | ✓ | | South Africa West | | | South Central US | |
-| South India | |
+| South India | ✓ |
| Southeast Asia | ✓ | | Switzerland North | ✓ | | Switzerland West | |
Mapping data flows are available in the following regions in ADF:
| US Gov Virginia | ✓ | | West Central US | | | West Europe | ✓ |
-| West India | |
+| West India | ✓ |
| West US | ✓ | | West US 2 | ✓ |
+| West US 3 | ✓ |
## Next steps
data-factory Connector Troubleshoot Sharepoint Online List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-sharepoint-online-list.md
Previously updated : 10/01/2021 Last updated : 01/25/2022
This article provides suggestions to troubleshoot common problems with the Share
- **Recommendation**: Check your registered application (service principal ID) and key to see whether they're set correctly.
+## Connection failed after granting permission in SharePoint Online List
+
+### Symptoms
+
+You granted permission to your data factory in SharePoint Online List, but the connection still fails with the following error message:
+
+`Failed to get metadata of odata service, please check if service url and credential is correct and your application has permission to the resource. Expected status code: 200, actual status code: Unauthorized, response is : {"error":"invalid_request","error_description":"Token type is not allowed."}.`
+
+### Cause
+
+The SharePoint Online List uses Azure Access Control Service (ACS) to acquire the access token that grants access to other applications. However, for tenants created after November 7, 2018, ACS is disabled by default.
+
+### Recommendation
+
+You need to enable ACS to acquire the access token. Take the following steps:
+
+1. Download [SharePoint Online Management Shell](https://www.microsoft.com/download/details.aspx?id=35588), and ensure that you have a tenant admin account.
+1. Run the following command in the SharePoint Online Management Shell. Replace `<tenant name>` with your tenant name and add `-admin` after it.
+
+ ```powershell
+ Connect-SPOService -Url https://<tenant name>-admin.sharepoint.com/
+ ```
+1. Enter your tenant admin information in the pop-up authentication window.
+1. Run the following command:
+
+ ```powershell
+ Set-SPOTenant -DisableCustomAppAuthentication $false
+ ```
+ :::image type="content" source="./media/connector-troubleshoot-guide/sharepoint-online-management-shell-command.png" alt-text="Screenshot of the Set-SPOTenant command in the SharePoint Online Management Shell.":::
+
+1. Use ACS to get the access token.
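+Optionally, confirm that the setting took effect. A minimal check, assuming you're still connected through `Connect-SPOService`:
+
+```powershell
+# DisableCustomAppAuthentication should now return False,
+# meaning ACS app-only access tokens are allowed again.
+Get-SPOTenant | Select-Object DisableCustomAppAuthentication
+```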
++ ## Next steps For more troubleshooting help, try these resources:
databox-online Azure Stack Edge Gpu 2202 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2202-release-notes.md
This article applies to the **Azure Stack Edge 2202** release, which maps to sof
## What's new
-The 2202 release introduces clustering for Azure Stack Edge. You can now deploy a two-node device cluster in addition to a single node device. The clustering feature is in preview and is available only for the Azure Stack Edge Pro GPU devices.
+The 2202 release has the following features and enhancements:
-For more information, see [What is clustering on Azure Stack Edge?](azure-stack-edge-gpu-clustering-overview.md).
+- **Clustering support** - This release introduces clustering support for Azure Stack Edge. You can now deploy a two-node device cluster in addition to a single node device. The clustering feature is in preview and is available only for the Azure Stack Edge Pro GPU devices.
+ For more information, see [What is clustering on Azure Stack Edge?](azure-stack-edge-gpu-clustering-overview.md).
-<!--## Issues fixed in 2202 release
+- **Password reset extension** - Starting this release, the password reset extension for both Windows and Linux virtual machines (VMs) is enabled.
+- **VM improvements** - A new VM size F12 was added in this release.
+- **Multi-Access Edge Computing (MEC) and Virtual Network Functions (VNF) improvements**:
+ - In this release, VM create and delete operations within VNF create and delete were parallelized. This change significantly reduced the creation time for VNFs that contain multiple VMs.
+ - The VHD ingestion job resource cleanup was moved out of VNF create and delete. This change reduced the VNF creation and deletion times.
+- **Updates for Azure Arc and Edge container registry** - Azure Arc and Edge container registry versions were updated. For more information, see [About updates](azure-stack-edge-gpu-install-update.md#about-latest-update).
+- **Security fixes** - Starting this release, a pod security policy is set up on the Kubernetes cluster on your Azure Stack Edge device. If you are using root privileges in your containerized solution, you may experience a change in behavior. No action is required on your part.
+++
+## Issues fixed in 2202 release
The following table lists the issues that were release noted in previous releases and fixed in the current release. | No. | Feature | Issue | | | | |
-|**1.**|Multi-Access Edge Compute | In previous releases, the Azure Stack Edge device did not send VNF operation results back to the Azure Network Function Manager, owing to the MEC Operation Manager (a component of MEC agent) being reset. |-->
+|**1.**|Azure Arc | In the previous releases, there was a bug in the proxy implementation that resulted in Azure Arc not functioning properly. In this version, a web proxy bypass list was added to the Azure Arc *no_proxy* list. |
## Known issues in 2202 release
databox-online Azure Stack Edge Gpu Deploy Arc Kubernetes Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-arc-kubernetes-cluster.md
Previously updated : 10/05/2021 Last updated : 02/17/2022
Before you can enable Azure Arc on Kubernetes cluster, make sure that you have c
- You can have any other client with a [Supported operating system](azure-stack-edge-gpu-system-requirements.md#supported-os-for-clients-connected-to-device) as well. This article describes the procedure when using a Windows client.
+
1. You have completed the procedure described in [Access the Kubernetes cluster on Azure Stack Edge Pro device](azure-stack-edge-gpu-create-kubernetes-cluster.md). You have: - Installed `kubectl` on the client.
You can also register resource providers via the `az cli`. For more information,
`az ad sp create-for-rbac --name "<Informative name for service principal>"`
- For information on how to log into the `az cli`, [Start Cloud Shell in Azure portal](../cloud-shell/quickstart-powershell.md#start-cloud-shell)
+ For information on how to log in to the `az cli`, see [Start Cloud Shell in Azure portal](../cloud-shell/quickstart-powershell.md#start-cloud-shell). If you're using `az cli` on a local client to create the service principal, make sure that you're running version 2.25 or later.
Here is an example.
databox-online Azure Stack Edge Gpu Install Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-install-update.md
The current update is Update 2202. This update installs two updates, the device
For information on what's new in this update, go to [Release notes](azure-stack-edge-gpu-2202-release-notes.md).
-**To apply 2202 update, your device must be running 2106.**
+**To apply the 2202 update, your device must be running 2106 or later.**
- If you are not running the minimal supported version, you'll see this error: *Update package cannot be installed as its dependencies are not met*. - You can update to 2106 from an older version and then install 2202.
The procedure to update an Azure Stack Edge is the same whether it is a single-n
- **Single node** - For a single node device, installing an update or hotfix is disruptive and will restart your device. Your device will experience a downtime for the entire duration of the update. -- **Two-node** - For a two-node cluster, this is an optimized update. The two-node cluster may experience short, intermittent disruptions while the update is in progress. We recommend that you shouldn't perform any operations on the other node when update is in progress on the first node of the cluster.
+- **Two-node** - For a two-node cluster, this is an optimized update. The two-node cluster may experience short, intermittent disruptions while the update is in progress. We recommend that you don't perform any operations on the device node while an update is in progress.
The Kubernetes worker VMs will go down when a node goes down. The Kubernetes master VM will fail over to the other node. Workloads will continue to run. For more information, see [Kubernetes failover scenarios for Azure Stack Edge](azure-stack-edge-gpu-kubernetes-failover-scenarios.md).
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 02/14/2022 Last updated : 02/16/2022 # Overview of Microsoft Defender for Containers
When reviewing the outstanding recommendations for your container-related resour
-### Workload protection best-practices using Kubernetes admission control
+### Environment hardening
For a bundle of recommendations to protect the workloads of your Kubernetes containers, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma). By default, auto provisioning is enabled when you enable Defender for Containers.
defender-for-cloud Kubernetes Workload Protections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/kubernetes-workload-protections.md
Title: Workload protections for your Kubernetes workloads description: Learn how to use Microsoft Defender for Cloud's set of Kubernetes workload protection security recommendations Previously updated : 02/15/2022 Last updated : 02/16/2022 # Protect your Kubernetes workloads
If you disabled any of the default protections when you enabled Microsoft Defend
## Deploy the add-on to specified clusters
-You can manually configure the Kubernetes workload add-on, or extension protection through the Recommendations page. This can be accomplished by remediating the `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters` recommendation.
+You can manually configure the Kubernetes workload add-on or extension protection through the Recommendations page. You can do so by remediating either the `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters` recommendation or the `Azure policy extension for Kubernetes should be installed and enabled on your clusters` recommendation.
**To Deploy the add-on to specified clusters**:
-1. From the recommendations page, search for the recommendation `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters`.
+1. From the recommendations page, search for the recommendation `Azure Policy add-on for Kubernetes should be installed and enabled on your clusters`, or `Azure policy extension for Kubernetes should be installed and enabled on your clusters`.
:::image type="content" source="./media/defender-for-kubernetes-usage/recommendation-to-install-policy-add-on-for-kubernetes.png" alt-text="Recommendation **Azure Policy add-on for Kubernetes should be installed and enabled on your clusters**.":::
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
Title: Archive of what's new in Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud from six months ago and earlier. Previously updated : 02/13/2022 Last updated : 02/17/2022 # Archive for what's new in Defender for Cloud?
When the Azure Policy add-on for Kubernetes is installed on your Azure Kubernete
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#workload-protection-best-practices-using-kubernetes-admission-control).
+Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#environment-hardening).
> [!NOTE] > While the recommendations were in preview, they didn't render an AKS cluster resource unhealthy, and they weren't included in the calculations of your secure score. with this GA announcement these will be included in the score calculation. If you haven't remediated them already, this might result in a slight impact on your secure score. Remediate them wherever possible as described in [Remediate recommendations in Azure Security Center](implement-security-recommendations.md).
When you've installed the Azure Policy add-on for Kubernetes on your AKS cluster
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#workload-protection-best-practices-using-kubernetes-admission-control).
+Learn more in [Workload protection best-practices using Kubernetes admission control](defender-for-containers-introduction.md#environment-hardening).
### Vulnerability assessment findings are now available in continuous export
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 01/20/2022 Last updated : 02/17/2022 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## February 2022
+
+Updates in February include:
+
+- [Kubernetes workload protection for Arc enabled K8s clusters](#kubernetes-workload-protection-for-arc-enabled-k8s-clusters)
+
+### Kubernetes workload protection for Arc enabled K8s clusters
+
+Defender for Containers for Kubernetes workloads previously only protected AKS. We have now extended the protective coverage to include Azure Arc enabled Kubernetes clusters.
+
+Learn how to [set up your Kubernetes workload protection](kubernetes-workload-protections.md#set-up-your-workload-protection) for AKS and Azure Arc enabled Kubernetes clusters.
## January 2022
devtest-labs Configure Lab Remote Desktop Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/configure-lab-remote-desktop-gateway.md
In Azure DevTest Labs, you can configure a remote desktop gateway for your lab t
This approach is more secure because the lab user authenticates directly to the gateway machine or can use company credentials on a domain-joined gateway machine to connect to their machines. The lab also supports using token authentication to the gateway machine that allows users to connect to their lab virtual machines without having the RDP port exposed to the internet. This article walks through an example on how to set up a lab that uses token authentication to connect to lab machines.
+If you're looking to connect through Bastion instead, see [Enable browser connection to DevTest Labs VMs with Azure Bastion](enable-browser-connection-lab-virtual-machines.md).
+ ## Architecture of the solution ![Architecture of the solution](./media/configure-lab-remote-desktop-gateway/architecture.png)
Follow these steps to set up a sample solution for the remote desktop gateway fa
Once both the gateway and lab are configured, the connection file created when the lab user selects **Connect** automatically includes the information necessary to connect using token authentication. ## Next steps
-See the following article to learn more about Remote Desktop
+See the following article to learn more about Remote Desktop
event-grid Cloudevents Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/cloudevents-schema.md
Title: Use Azure Event Grid with events in CloudEvents schema
description: Describes how to use the CloudEvents schema for events in Azure Event Grid. The service supports events in the JSON implementation of CloudEvents. Last updated 07/22/2021
+ms.devlang: csharp, javascript
event-grid Resize Images On Storage Blob Upload Event https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/resize-images-on-storage-blob-upload-event.md
Title: 'Tutorial: Use Azure Event Grid to automate resizing uploaded images'
description: 'Tutorial: Azure Event Grid can trigger on blob uploads in Azure Storage. You can use this to send image files uploaded to Azure Storage to other services, such as Azure Functions, for resizing and other improvements.' Last updated 09/28/2021
+ms.devlang: csharp, javascript
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Previously updated : 01/20/2022 Last updated : 02/17/2022 # Customer intent: As an administrator, I want to evaluate Azure Firewall so I can determine if I want to use it.
Azure Firewall Standard has the following known issues:
|Unable to see Network Rule Name in Azure Firewall Logs|Azure Firewall network rule log data does not show the Rule name for network traffic.|A feature is being investigated to support this.| |XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall. This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.| | Firewall logs (Resource specific tables - Preview) | Resource specific log queries are in preview mode and aren't currently supported. | A fix is being investigated.|
-|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
+|Can't upgrade to Premium with Availability Zones in the Southeast Asia region.|You can't currently upgrade to Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy a new Premium firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
### Azure Firewall Premium
Untrusted customer signed certificates|Customer signed certificates are not trus
|Certificate Propagation|After a CA certificate is applied on the firewall, it may take between 5-10 minutes for the certificate to take effect.|A fix is being investigated.| |TLS 1.3 support|TLS 1.3 is partially supported. The TLS tunnel from client to the firewall is based on TLS 1.2, and from the firewall to the external Web server is based on TLS 1.3.|Updates are being investigated.| |KeyVault Private Endpoint|KeyVault supports Private Endpoint access to limit its network exposure. Trusted Azure Services can bypass this limitation if an exception is configured as described in the [KeyVault documentation](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services). Azure Firewall is not currently listed as a trusted service and can't access the Key Vault.|A fix is being investigated.|
-|IDPS Bypass list|IDPS Bypass list doesn't support IP Groups.|A fix is being investigated.|
+|IDPS Bypass list|If you enable IDPS (either 'Alert' or 'Alert and Deny' mode) and actively delete one or more existing rules in the IDPS Bypass list, you may be subject to packet loss correlated to the deleted rules' source/destination IP addresses. |A fix is being investigated.<br><br>You may respond to this issue by taking one of the following actions:<br><br>- Do a start/stop procedure as explained [here](firewall-faq.yml#how-can-i-stop-and-start-azure-firewall).<br>- Open a support ticket and we will re-image your affected firewall virtual machines.|
+|Availability Zones for Firewall Premium in the Southeast Asia region|You can't currently deploy Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy the firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
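+The start/stop workaround referenced above can be scripted with Az PowerShell. The following is a sketch only, assuming a VNet-deployed firewall; the firewall, resource group, virtual network, and public IP names are placeholders:
+
+```powershell
+# Stop (deallocate) the firewall; its configuration is preserved.
+$azfw = Get-AzFirewall -Name "<firewall-name>" -ResourceGroupName "<resource-group>"
+$azfw.Deallocate()
+Set-AzFirewall -AzureFirewall $azfw
+
+# Start (allocate) the firewall again with its original virtual network and public IP.
+$vnet = Get-AzVirtualNetwork -ResourceGroupName "<resource-group>" -Name "<vnet-name>"
+$pip  = Get-AzPublicIpAddress -ResourceGroupName "<resource-group>" -Name "<public-ip-name>"
+$azfw.Allocate($vnet, $pip)
+Set-AzFirewall -AzureFirewall $azfw
+```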
++ ## Next steps
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
related resources to match.
- **Type** (required) - Specifies the type of the related resource to match.
- - If **details.type** is a resource type underneath the **if** condition resource, the policy
+ - If **type** is a resource type underneath the **if** condition resource, the policy
queries for resources of this **type** within the scope of the evaluated resource. Otherwise,
- policy queries within the same resource group as the evaluated resource.
+ policy queries within the same resource group or subscription as the evaluated resource, depending on the **existenceScope**.
- **Name** (optional) - Specifies the exact name of the resource to match and causes the policy to fetch one specific resource instead of all resources of the specified type.
related resources to match and the template deployment to execute.
- **Type** (required) - Specifies the type of the related resource to match.
- - Starts by trying to fetch a resource underneath the **if** condition resource, then queries
- within the same resource group as the **if** condition resource.
+ - If **type** is a resource type underneath the **if** condition resource, the policy
+ queries for resources of this **type** within the scope of the evaluated resource. Otherwise,
+ policy queries within the same resource group or subscription as the evaluated resource, depending on the **existenceScope**.
- **Name** (optional) - Specifies the exact name of the resource to match and causes the policy to fetch one specific resource instead of all resources of the specified type.
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/how-to/get-resource-changes.md
Title: Get resource changes
-description: Understand how to find when a resource was changed, get a list of the properties that changed, and evaluate the diffs.
Previously updated : 08/17/2021
+description: Understand how to find when a resource was changed and query the list of resource configuration changes at scale
Last updated : 01/27/2022 # Get resource changes Resources get changed through the course of daily use, reconfiguration, and even redeployment. Change can come from an individual or by an automated process. Most change is by design, but
-sometimes it isn't. With the last 14 days of change history, Azure Resource Graph enables you to:
+sometimes it isn't. With the last seven days of change history, Resource configuration changes enables you to:
- Find when changes were detected on an Azure Resource Manager property - For each resource change, see property change details-- See a full comparison of the resource before and after the detected change
+- Query changes at scale across your subscriptions, management group, or tenant
Change detection and details are valuable for the following example scenarios:
Change detection and details are valuable for the following example scenarios:
- Keeping a Configuration Management Database, known as a CMDB, up-to-date. Instead of refreshing all resources and their full property sets on a scheduled frequency, only get what changed. - Understanding what other properties may have been changed when a resource changed compliance
- state. Evaluation of these additional properties can provide insights into other properties that
+ state. Evaluation of these extra properties can provide insights into other properties that
may need to be managed via an Azure Policy definition.
-This article shows how to gather this information through Resource Graph's SDK. To see this
-information in the Azure portal, see Azure Policy's
-[Change history](../../policy/how-to/determine-non-compliance.md#change-history) or Azure Activity
+This article shows how to query Resource configuration changes through Resource Graph. To see this
+information in the Azure portal, see [Azure Resource Graph Explorer](../first-query-portal.md), Azure Policy's
+[Change history](../../policy/how-to/determine-non-compliance.md#change-history), or Azure Activity
Log [Change history](../../../azure-monitor/essentials/activity-log.md#view-the-activity-log). For details about changes to your applications from the infrastructure layer all the way to application deployment, see
deployment, see
Monitor. > [!NOTE]
-> Change details in Resource Graph are for Resource Manager properties. For tracking changes inside
+> Resource configuration changes is for Azure Resource Manager properties. For tracking changes inside
> a virtual machine, see Azure Automation's > [Change tracking](../../../automation/change-tracking/overview.md) or Azure Policy's > [Guest Configuration for VMs](../../policy/concepts/guest-configuration.md). > [!IMPORTANT]
-> Change history in Azure Resource Graph is in Public Preview.
+> Resource configuration changes is in Public Preview and only supports changes to resource types from the [Resources table](../reference/supported-tables-resources.md#resources) in Resource Graph. This doesn't yet include changes to resource container resources, such as management groups, subscriptions, and resource groups.
## Find detected change events and view change details
-The first step in seeing what changed on a resource is to find the change events related to that
-resource within a window of time. Each change event also includes details about what changed on the
-resource. This step is done through the **resourceChanges** REST endpoint.
+When a resource is created, updated, or deleted, a new change resource (Microsoft.Resources/changes) is created to extend the modified resource and represent the changed properties.
-The **resourceChanges** endpoint accepts the following parameters in the request body:
--- **resourceId** \[required\]: The Azure resource to look for changes on.-- **interval** \[required\]: A property with _start_ and _end_ dates for when to check for a change
- event using the **Zulu Time Zone (Z)**.
-- **fetchPropertyChanges** (optional): A Boolean property that sets if the response object includes
- property changes.
-
-Example request body:
+Example change resource property bag:
```json {
- "resourceId": "/subscriptions/{subscriptionId}/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount",
- "interval": {
- "start": "2019-09-28T00:00:00.000Z",
- "end": "2019-09-29T00:00:00.000Z"
+ "targetResourceId": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myResourceGroup/providers/microsoft.compute/virtualmachines/myVM",
+ "targetResourceType": "microsoft.compute/virtualmachines",
+ "changeType": "Update",
+ "changeAttributes": {
+ "changesCount": 2,
+ "correlationId": "88420d5d-8d0e-471f-9115-10d34750c617",
+ "timestamp": "2021-12-07T09:25:41.756Z",
+ "previousResourceSnapshotId": "ed90e35a-1661-42cc-a44c-e27f508005be",
+ "newResourceSnapshotId": "6eac9d0f-63b4-4e7f-97a5-740c73757efb"
+ },
+ "changes": {
+ "properties.provisioningState": {
+ "newValue": "Succeeded",
+ "previousValue": "Updating",
+ "changeCategory": "System",
+ "propertyChangeType": "Update"
},
- "fetchPropertyChanges": true
-}
-```
-
-With the above request body, the REST API URI for **resourceChanges** is:
-
-```http
-POST https://management.azure.com/providers/Microsoft.ResourceGraph/resourceChanges?api-version=2018-09-01-preview
-```
-
-The response looks similar to this example:
-
-```json
-{
- "changes": [
- {
- "changeId": "{\"beforeId\":\"3262e382-9f73-4866-a2e9-9d9dbee6a796\",\"beforeTime\":\"2019-09-28T00:45:35.012Z\",\"afterId\":\"6178968e-981e-4dac-ac37-340ee73eb577\",\"afterTime\":\"2019-09-28T00:52:53.371Z\"}",
- "beforeSnapshot": {
- "snapshotId": "3262e382-9f73-4866-a2e9-9d9dbee6a796",
- "timestamp": "2019-09-28T00:45:35.012Z"
- },
- "afterSnapshot": {
- "snapshotId": "6178968e-981e-4dac-ac37-340ee73eb577",
- "timestamp": "2019-09-28T00:52:53.371Z"
- },
- "changeType": "Create"
- },
- {
- "changeId": "{\"beforeId\":\"a00f5dac-86a1-4d86-a1c5-a9f7c8147b7c\",\"beforeTime\":\"2019-09-28T00:43:38.366Z\",\"afterId\":\"3262e382-9f73-4866-a2e9-9d9dbee6a796\",\"afterTime\":\"2019-09-28T00:45:35.012Z\"}",
- "beforeSnapshot": {
- "snapshotId": "a00f5dac-86a1-4d86-a1c5-a9f7c8147b7c",
- "timestamp": "2019-09-28T00:43:38.366Z"
- },
- "afterSnapshot": {
- "snapshotId": "3262e382-9f73-4866-a2e9-9d9dbee6a796",
- "timestamp": "2019-09-28T00:45:35.012Z"
- },
- "changeType": "Delete"
- },
- {
- "changeId": "{\"beforeId\":\"b37a90d1-7ebf-41cd-8766-eb95e7ee4f1c\",\"beforeTime\":\"2019-09-28T00:43:15.518Z\",\"afterId\":\"a00f5dac-86a1-4d86-a1c5-a9f7c8147b7c\",\"afterTime\":\"2019-09-28T00:43:38.366Z\"}",
- "beforeSnapshot": {
- "snapshotId": "b37a90d1-7ebf-41cd-8766-eb95e7ee4f1c",
- "timestamp": "2019-09-28T00:43:15.518Z"
- },
- "afterSnapshot": {
- "snapshotId": "a00f5dac-86a1-4d86-a1c5-a9f7c8147b7c",
- "timestamp": "2019-09-28T00:43:38.366Z"
- },
- "propertyChanges": [
- {
- "propertyName": "tags.org",
- "afterValue": "compute",
- "changeCategory": "User",
- "changeType": "Insert"
- },
- {
- "propertyName": "tags.team",
- "afterValue": "ARG",
- "changeCategory": "User",
- "changeType": "Insert"
- }
- ],
- "changeType": "Update"
- },
- {
- "changeId": "{\"beforeId\":\"19d12ab1-6ac6-4cd7-a2fe-d453a8e5b268\",\"beforeTime\":\"2019-09-28T00:42:46.839Z\",\"afterId\":\"b37a90d1-7ebf-41cd-8766-eb95e7ee4f1c\",\"afterTime\":\"2019-09-28T00:43:15.518Z\"}",
- "beforeSnapshot": {
- "snapshotId": "19d12ab1-6ac6-4cd7-a2fe-d453a8e5b268",
- "timestamp": "2019-09-28T00:42:46.839Z"
- },
- "afterSnapshot": {
- "snapshotId": "b37a90d1-7ebf-41cd-8766-eb95e7ee4f1c",
- "timestamp": "2019-09-28T00:43:15.518Z"
- },
- "propertyChanges": [{
- "propertyName": "tags.cgtest",
- "afterValue": "hello",
- "changeCategory": "User",
- "changeType": "Insert"
- }],
- "changeType": "Update"
- }
- ]
+ "tags.key1": {
+ "newValue": "NewTagValue",
+ "previousValue": "null",
+ "changeCategory": "User",
+ "propertyChangeType": "Insert"
+ }
+ }
} ```
-Each detected change event for the **resourceId** has the following properties:
+Each change resource has the following properties:
-- **changeId** - This value is unique to that resource. While the **changeId** string may sometimes
- contain other properties, it's only guaranteed to be unique.
-- **beforeSnapshot** - Contains the **snapshotId** and **timestamp** of the resource snapshot that
- was taken before a change was detected.
-- **afterSnapshot** - Contains the **snapshotId** and **timestamp** of the resource snapshot that
- was taken after a change was detected.
-- **changeType** - Describes the type of change detected for the entire change record between the
- **beforeSnapshot** and **afterSnapshot**. Values are: _Create_, _Update_, and _Delete_. The
- **propertyChanges** property array is only included when **changeType** is _Update_.
+- **targetResourceId** - The resourceID of the resource on which the change occurred.
+- **targetResourceType** - The resource type of the resource on which the change occurred.
+- **changeType** - Describes the type of change detected for the entire change record. Values are: _Create_, _Update_, and _Delete_. The
+ **changes** property dictionary is only included when **changeType** is _Update_. For the _Delete_ case, the change resource will still be maintained as an extension of the deleted resource for seven days, even if the entire Resource group has been deleted. The change resource will not block deletions or impact any existing delete behavior.
- > [!IMPORTANT]
- > _Create_ is only available on resources that previously existed and were deleted within the last
- > 14 days.
-- **propertyChanges** - This array of properties details all of the resource properties that were
- updated between the **beforeSnapshot** and the **afterSnapshot**:
- - **propertyName** - The name of the resource property that was altered.
- - **changeCategory** - Describes what made the change. Values are: _System_ and _User_.
- - **changeType** - Describes the type of change detected for the individual resource property.
+- **changes** - Dictionary of the resource properties (with property name as the key) that were updated as part of the change:
+ - **propertyChangeType** - Describes the type of change detected for the individual resource property.
Values are: _Insert_, _Update_, _Remove_.
- - **beforeValue** - The value of the resource property in the **beforeSnapshot**. Isn't displayed
- when **changeType** is _Insert_.
- - **afterValue** - The value of the resource property in the **afterSnapshot**. Isn't displayed
- when **changeType** is _Remove_.
-
-## Compare resource changes
-
-With the **changeId** from the **resourceChanges** endpoint, the **resourceChangeDetails** REST
-endpoint is then used to get the before and after snapshots of the resource that was changed.
-
-The **resourceChangeDetails** endpoint requires two parameters in the request body:
--- **resourceId**: The Azure resource to compare changes on.-- **changeId**: The unique change event for the **resourceId** gathered from **resourceChanges**.-
-Example request body:
-
-```json
-{
- "resourceId": "/subscriptions/{subscriptionId}/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount",
- "changeId": "{\"beforeId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\"beforeTime\":'2019-05-09T00:00:00.000Z\",\"afterId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\"afterTime\":'2019-05-10T00:00:00.000Z\"}"
-}
+ - **previousValue** - The value of the resource property in the previous snapshot. Value is _null_ when **propertyChangeType** is _Insert_.
+ - **newValue** - The value of the resource property in the new snapshot. Value is _null_ when **propertyChangeType** is _Remove_.
+ - **changeCategory** - Describes whether the property change was the result of a change in value (_User_) or a difference in referenced API versions (_System_). Values are: _System_ and _User_.
+
+- **changeAttributes** - Array of metadata related to the change:
+ - **changesCount** - The number of properties changed as part of this change record.
+ - **correlationId** - Contains the ID for tracking related events. Each deployment has a correlation ID, and all actions in a single template will share the same correlation ID.
+ - **timestamp** - The datetime of when the change was detected.
+ - **previousResourceSnapshotId** - Contains the ID of the resource snapshot that was used as the previous state of the resource.
+ - **newResourceSnapshotId** - Contains the ID of the resource snapshot that was used as the new state of the resource.
+
+## Resource Graph Query samples
+
+With Resource Graph, you can query the **ResourceChanges** table to filter or sort by any of the change resource properties:
+
+### All changes in the past day
+```kusto
+ResourceChanges
+| extend changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId),
+changeType = tostring(properties.changeType), correlationId = properties.changeAttributes.correlationId,
+changedProperties = properties.changes, changeCount = properties.changeAttributes.changesCount
+| where changeTime > ago(1d)
+| order by changeTime desc
+| project changeTime, targetResourceId, changeType, correlationId, changeCount, changedProperties
```
-With the above request body, the REST API URI for **resourceChangeDetails** is:
-
-```http
-POST https://management.azure.com/providers/Microsoft.ResourceGraph/resourceChangeDetails?api-version=2018-09-01-preview
+### Resources deleted in a specific resource group
+```kusto
+ResourceChanges
+| where resourceGroup == "myResourceGroup"
+| extend changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId),
+changeType = tostring(properties.changeType), correlationId = properties.changeAttributes.correlationId
+| where changeType == "Delete"
+| order by changeTime desc
+| project changeTime, resourceGroup, targetResourceId, changeType, correlationId
```
-The response looks similar to this example:
-
-```json
-{
- "changeId": "{\"beforeId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\"beforeTime\":'2019-05-09T00:00:00.000Z\",\"afterId\":\"xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx\",\"beforeTime\":'2019-05-10T00:00:00.000Z\"}",
- "beforeSnapshot": {
- "timestamp": "2019-03-29T01:32:05.993Z",
- "content": {
- "sku": {
- "name": "Standard_LRS",
- "tier": "Standard"
- },
- "kind": "Storage",
- "id": "/subscriptions/{subscriptionId}/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount",
- "name": "mystorageaccount",
- "type": "Microsoft.Storage/storageAccounts",
- "location": "westus",
- "tags": {},
- "properties": {
- "networkAcls": {
- "bypass": "AzureServices",
- "virtualNetworkRules": [],
- "ipRules": [],
- "defaultAction": "Allow"
- },
- "supportsHttpsTrafficOnly": false,
- "encryption": {
- "services": {
- "file": {
- "enabled": true,
- "lastEnabledTime": "2018-07-27T18:37:21.8333895Z"
- },
- "blob": {
- "enabled": true,
- "lastEnabledTime": "2018-07-27T18:37:21.8333895Z"
- }
- },
- "keySource": "Microsoft.Storage"
- },
- "provisioningState": "Succeeded",
- "creationTime": "2018-07-27T18:37:21.7708872Z",
- "primaryEndpoints": {
- "blob": "https://mystorageaccount.blob.core.windows.net/",
- "queue": "https://mystorageaccount.queue.core.windows.net/",
- "table": "https://mystorageaccount.table.core.windows.net/",
- "file": "https://mystorageaccount.file.core.windows.net/"
- },
- "primaryLocation": "westus",
- "statusOfPrimary": "available"
- }
- }
- },
- "afterSnapshot": {
- "timestamp": "2019-03-29T01:54:24.42Z",
- "content": {
- "sku": {
- "name": "Standard_LRS",
- "tier": "Standard"
- },
- "kind": "Storage",
- "id": "/subscriptions/{subscriptionId}/resourceGroups/MyResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount",
- "name": "mystorageaccount",
- "type": "Microsoft.Storage/storageAccounts",
- "location": "westus",
- "tags": {},
- "properties": {
- "networkAcls": {
- "bypass": "AzureServices",
- "virtualNetworkRules": [],
- "ipRules": [],
- "defaultAction": "Allow"
- },
- "supportsHttpsTrafficOnly": true,
- "encryption": {
- "services": {
- "file": {
- "enabled": true,
- "lastEnabledTime": "2018-07-27T18:37:21.8333895Z"
- },
- "blob": {
- "enabled": true,
- "lastEnabledTime": "2018-07-27T18:37:21.8333895Z"
- }
- },
- "keySource": "Microsoft.Storage"
- },
- "provisioningState": "Succeeded",
- "creationTime": "2018-07-27T18:37:21.7708872Z",
- "primaryEndpoints": {
- "blob": "https://mystorageaccount.blob.core.windows.net/",
- "queue": "https://mystorageaccount.queue.core.windows.net/",
- "table": "https://mystorageaccount.table.core.windows.net/",
- "file": "https://mystorageaccount.file.core.windows.net/"
- },
- "primaryLocation": "westus",
- "statusOfPrimary": "available"
- }
- }
- }
-}
+### Changes to a specific property value
+```kusto
+ResourceChanges
+| extend provisioningStateChange = properties.changes["properties.provisioningState"], changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId), changeType = tostring(properties.changeType)
+| where isnotempty(provisioningStateChange) and provisioningStateChange.newValue == "Succeeded"
+| order by changeTime desc
+| project changeTime, targetResourceId, changeType, provisioningStateChange.previousValue, provisioningStateChange.newValue
```
-**beforeSnapshot** and **afterSnapshot** each give the time the snapshot was taken and the
-properties at that time. The change happened at some point between these snapshots. Looking at the
-previous example, we can see that the property that changed was **supportsHttpsTrafficOnly**.
-
-To compare the results, either use the **changes** property in **resourceChanges** or evaluate the
-**content** portion of each snapshot in **resourceChangeDetails** to determine the difference. If
-you compare the snapshots, the **timestamp** always shows as a difference despite being expected.
+### Query the latest resource configuration for resources created in the last seven days
+```kusto
+ResourceChanges
+| extend targetResourceId = tostring(properties.targetResourceId), changeType = tostring(properties.changeType), changeTime = todatetime(properties.changeAttributes.timestamp)
+| where changeTime > ago(7d) and changeType == "Create"
+| project targetResourceId, changeType, changeTime
+| join ( Resources | extend targetResourceId=id) on targetResourceId
+| order by changeTime desc
+| project changeTime, changeType, id, resourceGroup, type, properties
+```
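+These queries can also be run programmatically. As a minimal sketch, assuming the `Az.ResourceGraph` PowerShell module is installed (`Install-Module Az.ResourceGraph`), the following counts changes by change type over the past day:
+
+```powershell
+# Run a ResourceChanges query with the Azure Resource Graph PowerShell module.
+$query = @"
+ResourceChanges
+| extend changeTime = todatetime(properties.changeAttributes.timestamp),
+         changeType = tostring(properties.changeType)
+| where changeTime > ago(1d)
+| summarize count() by changeType
+"@
+Search-AzGraph -Query $query
+```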
## Next steps
hdinsight Hdinsight 40 Component Versioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-40-component-versioning.md
The OSS component versions associated with HDInsight 4.0 are listed in the follo
| Apache Zeppelin | 0.8.0 |
-This table lists certain HDInsight 4.0 cluster types that have retired.
+This table lists certain HDInsight 4.0 cluster types that have been retired or will be retired soon.
| Cluster Type | Framework version | Support expiration date | Retirement date | ||-||--| | HDInsight 4.0 Spark | 2.3 | June 30, 2020 | June 30, 2020 | | HDInsight 4.0 Kafka | 1.1 | Dec 31, 2020 | Dec 31, 2020 |
+| HDInsight 4.0 Kafka | 2.1.0 * | Sep 30, 2022 | Oct 1, 2022 |
+
+\* Customers can't create new Kafka 2.1.0 clusters, but existing 2.1.0 clusters won't be impacted and will receive basic support until September 30, 2022.
## Next steps
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-rest-api-capabilities.md
After you've found the record you want to restore, use the `PUT` operation to re
## Patch and Conditional Patch
-Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three types of ways to Patch resources in FHIR: JSON Patch, XML Patch, and FHIR Path Patch. Azure API for FHIR supports JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on a search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatchRequests.http).
+Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three ways to Patch resources: JSON Patch, XML Patch, and FHIR Path Patch. Azure API for FHIR supports JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http).
> [!NOTE] > When using `PATCH` against STU3, and if you are requesting a History bundle, the patched resource's `Bundle.entry.request.method` is mapped to `PUT`. This is because STU3 doesn't contain a definition for the `PATCH` verb in the [HTTPVerb value set](http://hl7.org/fhir/STU3/valueset-http-verb.html).
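+The sample REST file linked above has fuller examples. As a quick, minimal sketch (placeholders throughout), a JSON Patch request that replaces a single element on a Patient resource might look like this in PowerShell:
+
+```powershell
+# A sketch: apply a JSON Patch document to a Patient resource.
+# JSON Patch requests use the application/json-patch+json content type.
+$patch = '[ { "op": "replace", "path": "/gender", "value": "female" } ]'
+Invoke-RestMethod -Method Patch `
+    -Uri "https://<your-service>.azurehealthcareapis.com/Patient/<patient-id>" `
+    -Headers @{ Authorization = "Bearer <access-token>" } `
+    -ContentType "application/json-patch+json" `
+    -Body $patch
+```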
In this article, you learned about some of the REST capabilities of Azure API fo
>[!div class="nextstepaction"] >[Overview of search in Azure API for FHIR](overview-of-search.md)
-(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+(FHIR&#174;) is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Rest Api Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-rest-api-capabilities.md
After you've found the record you want to restore, use the `PUT` operation to re
## Patch and Conditional Patch
-Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three types of ways to Patch resources in FHIR: JSON Patch, XML Patch, and FHIR Path Patch. The FHIR service support JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on a search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PatchRequests.http).
+Patch is a valuable RESTful operation when you need to update only a portion of the FHIR resource. Using Patch allows you to specify the element(s) that you want to update in the resource without having to update the entire record. FHIR defines three ways to Patch resources: JSON Patch, XML Patch, and FHIR Path Patch. The FHIR service supports JSON Patch and Conditional JSON Patch (which allows you to Patch a resource based on search criteria instead of an ID). To walk through some examples of using JSON Patch, refer to the sample [REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/FhirPatchRequests.http).
> [!NOTE] > When using `PATCH` against STU3, and if you are requesting a History bundle, the patched resource's `Bundle.entry.request.method` is mapped to `PUT`. This is because STU3 doesn't contain a definition for the `PATCH` verb in the [HTTPVerb value set](http://hl7.org/fhir/STU3/valueset-http-verb.html).
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
# Azure IoT Central architecture
-This article provides an overview of the key elements in an IoT Central solution architecture.
--
-An IoT Central application:
--- Lets you manage the IoT devices in your solution.-- Lets you view and analyze the data from your devices.-- Can export to and integrate with other services that are part of the solution.
+IoT Central is a ready-made environment for IoT solution development. It's an application platform as a service (aPaaS) IoT solution and its primary interface is a web UI. There's also a [REST API](#extend-with-rest-api) that lets you interact with your application programmatically.
-## IoT Central
+This article provides an overview of the key elements in an IoT Central solution architecture.
-IoT Central is a ready-made environment for IoT solution development. It's a platform as a service (PaaS) IoT solution and its primary interface is a web UI. There's also a [REST API](#rest-api) that lets you interact with your application programmatically.
-This section describes the key capabilities of an IoT Central application.
+Key capabilities in an IoT Central application include:
### Manage devices
IoT Central lets you manage the fleet of [IoT devices](#devices) that are sendin
In an IoT Central application, you can view and analyze data for individual devices or for aggregated data from multiple devices:
+- Use [mapping](howto-map-data.md) to transform complex device telemetry into structured data inside IoT Central.
- Use device templates to define [custom views](howto-set-up-template.md#views) for individual devices of specific types. For example, you can plot temperature over time for an individual thermostat or show the live location of a delivery truck. - Use the built-in [analytics](tutorial-use-device-groups.md) to view aggregate data for multiple devices. For example, you can see the total occupancy across multiple retail stores or identifying the stores with the highest or lowest occupancy rates. - Create custom [dashboards](howto-manage-dashboards.md) to help you manage your devices. For example, you can add maps, tiles, and charts to show device telemetry.
In an IoT Central application you can manage the following security aspects of y
- [User management](howto-manage-users-roles.md): Manage the users that can sign in to the application and the roles that determine what permissions those users have. - [Organizations](howto-create-organizations.md): Define a hierarchy to manage which users can see which devices in your IoT Central application.
-### REST API
-
-Build integrations that let other applications and services manage your application. For example, programmatically [manage the devices](howto-control-devices-with-rest-api.md) in your application or synchronize [user information](howto-manage-users-roles-with-rest-api.md) with an external system.
- ## Devices Devices collect data from sensors to send as a stream of telemetry to an IoT Central application. For example, a refrigeration unit sends a stream of temperature values or a delivery truck streams its location.
A device can use properties to report its state, such as whether a valve is open
IoT Central can also control devices by calling commands on the device. For example, instructing a device to download and install a firmware update.
-The [telemetry, properties, and commands](concepts-telemetry-properties-commands.md) that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device.
+The [telemetry, properties, and commands](concepts-telemetry-properties-commands.md) that a device implements are collectively known as the device capabilities. You define these capabilities in a model that's shared between the device and the IoT Central application. In IoT Central, this model is part of the device template that defines a specific type of device. To learn more, see [Associate a device with a device template](concepts-get-connected.md#associate-a-device-with-a-device-template).
The [device implementation](tutorial-connect-device.md) should follow the [IoT Plug and Play conventions](../../iot-develop/concepts-convention.md) to ensure that it can communicate with IoT Central. For more information, see the various language [SDKs and samples](../../iot-develop/libraries-sdks.md).
Local gateway devices are useful in several scenarios, such as:
Gateway devices typically require more processing power than a standalone device. One option to implement a gateway device is to use [Azure IoT Edge and apply one of the standard IoT Edge gateway patterns](concepts-iot-edge.md). You can also run your own custom gateway code on a suitable device.
-## Data export
+## Export data
+
+Although IoT Central has built-in analytics features, you can export data to other services and applications.
+
+[Transformations](howto-transform-data-internally.md) in an IoT Central data export definition let you manipulate the format and structure of the device data before it's exported to a destination.
-Although IoT Central has built-in analytics features, you can export data to other services and applications. Reasons to export data include:
+Reasons to export data include:
### Storage and analysis
For long-term storage and control over archiving and retention policies, you can
You may need to [transform or do computations](howto-transform-data.md) on your data before it can be used either in IoT Central or another service. For example, you could add local weather information to the location data reported by a delivery truck.
+## Extend with REST API
+
+Build integrations that let other applications and services manage your application. For example, programmatically [manage the devices](howto-control-devices-with-rest-api.md) in your application or synchronize [user information](howto-manage-users-roles-with-rest-api.md) with an external system.
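+As a minimal sketch of such an integration, the following PowerShell call lists the devices in an application through the REST API; the application subdomain and API token are placeholders:
+
+```powershell
+# A sketch: list devices with the IoT Central REST API (GA version 1.0).
+$apiToken = "<api-token>"   # generate one in your application's Administration section
+$response = Invoke-RestMethod -Method Get `
+    -Uri "https://<your-app>.azureiotcentral.com/api/devices?api-version=1.0" `
+    -Headers @{ Authorization = $apiToken }
+$response.value | ForEach-Object { $_.displayName }
+```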
+ ## Next steps Now that you've learned about the architecture of Azure IoT Central, the suggested next step is to learn about [scalability and high availability](concepts-scalability-availability.md) in Azure IoT Central.
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
To create a custom theme:
:::image type="content" source="media/tutorial-in-store-analytics-create-app/dashboard-expand.png" alt-text="Azure IoT Central left pane.":::
-1. Select **Administration > Customize your application**.
+1. Select **Customization > App appearance**.
1. Use the **Change** button to choose an image to upload as the **Application logo**. Optionally, specify a value for **Logo alt text**.
To create a custom theme:
To update the application image:
-1. Select **Administration > Your Application**.
+1. Select **Customization > App appearance**.
1. Use the **Select image** button to choose an image to upload as the application image. This image appears on the application tile in the **My Apps** page of the IoT Central application manager.
iot-central Tutorial In Store Analytics Customize Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-customize-dashboard.md
To customize the image tile that displays a brand image on the dashboard:
1. Select **Edit** on the dashboard toolbar.
-1. Select **Configure** on the image tile that displays the Northwind brand image.
+1. Select **Edit** on the image tile that displays the Northwind brand image.
:::image type="content" source="media/tutorial-in-store-analytics-customize-dashboard/brand-image-edit.png" alt-text="Azure IoT Central edit brand image.":::
iot-central Tutorial Iot Central Connected Logistics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-connected-logistics.md
If you're not going to continue to use this application, delete the application
Learn more about: > [!div class="nextstepaction"]
-> [Connected logistics concepts](./architecture-connected-logistics.md)
+> [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
iot-central Tutorial Iot Central Digital Distribution Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-digital-distribution-center.md
If you're not going to continue to use this application, delete the application
## Next steps
-Learn more about digital distribution center solution architecture:
+Learn more about:
> [!div class="nextstepaction"]
-> [digital distribution center concept](./architecture-digital-distribution-center.md)
+> [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
iot-central Tutorial Iot Central Smart Inventory Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-iot-central-smart-inventory-management.md
If you're not going to continue to use this application, delete the application
## Next steps
-Learn more about smart inventory management:
+Learn more about:
> [!div class="nextstepaction"]
-> [Smart inventory management concept](./architecture-smart-inventory-management.md)
+> [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
+
iot-central Tutorial Micro Fulfillment Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-micro-fulfillment-center.md
If you're not going to continue to use this application, delete the application
## Next steps
-Learn more about:
+Learn more about:
> [!div class="nextstepaction"]
-> [micro-fulfillment center solution architecture](./architecture-micro-fulfillment-center.md)
+> [IoT Central data integration](../core/overview-iot-central-solution-builder.md)
+
iot-develop Quickstart Devkit Mxchip Az3166 Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-mxchip-az3166-iot-hub.md
description: Use Azure RTOS embedded software to connect an MXCHIP AZ3166 device
+ms.devlang: c
Last updated 06/09/2021
iot-hub Iot Hub Java Java Device Management Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-java-java-device-management-getstarted.md
+ms.devlang: java
Last updated 08/20/2019
iot-hub Iot Hub Node Node Schedule Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-schedule-jobs.md
+ms.devlang: javascript
Last updated 08/16/2019
iot-hub Iot Hub Node Node Twin Getstarted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-node-node-twin-getstarted.md
description: How to use Azure IoT Hub device twins to add tags and then use an I
+ms.devlang: javascript
Last updated 08/26/2019
iot-hub Iot Hub Python Python Device Management Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-python-python-device-management-get-started.md
description: How to use IoT Hub device management to initiate a remote device re
+ms.devlang: python
Last updated 01/17/2020
iot-hub Quickstart Control Device Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-control-device-android.md
description: In this quickstart, you run two sample Java applications. One appli
+ms.devlang: java
Last updated 06/21/2019
iot-hub Tutorial Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-device-twins.md
+ms.devlang: javascript
Last updated 10/13/2021
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-java.md
Last updated 12/18/2020
+ms.devlang: java
# Quickstart: Azure Key Vault Certificate client library for Java (Certificates)
key-vault Tutorial Javascript Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-javascript-virtual-machine.md
Last updated 12/10/2021
+ms.devlang: javascript
# Customer intent: As a developer I want to use Azure Key vault to store secrets for my app, so that they are kept secure.
key-vault Tutorial Net Create Vault Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-create-vault-azure-web-app.md
Last updated 05/06/2020
+ms.devlang: csharp
#Customer intent: As a developer, I want to use Azure Key Vault to store secrets for my app to help keep them secure.
key-vault Tutorial Net Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-virtual-machine.md
Last updated 03/17/2021
+ms.devlang: csharp
#Customer intent: As a developer I want to use Azure Key Vault to store secrets for my app, so that they are kept secure.
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
Last updated 07/20/2020
+ms.devlang: python
# Customer intent: As a developer I want to use Azure Key vault to store secrets for my app, so that they are kept secure.
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-java.md
Last updated 01/05/2021
+ms.devlang: java
# Quickstart: Azure Key Vault Key client library for Java
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-java.md
Last updated 10/20/2019
+ms.devlang: java
# Quickstart: Azure Key Vault Secret client library for Java
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
Title: Compare load testing runs to find regressions
+ Title: Compare load test runs to find regressions
-description: 'Learn how you can visually compare multiple test runs with Azure Load Testing to better understand performance regressions.'
+description: 'Learn how you can visually compare multiple test runs with Azure Load Testing to identify and analyze performance regressions.'
Previously updated : 11/30/2021 Last updated : 02/16/2022
-# Identify performance regressions by comparing load test runs
+# Identify performance regressions by comparing test runs in Azure Load Testing Preview
-In this article, you'll learn how to identify performance regressions by visually comparing multiple load test runs in the Azure Load Testing Preview dashboard.
+In this article, you'll learn how you can identify performance regressions by comparing test runs in the Azure Load Testing Preview dashboard. The dashboard overlays the client-side and server-side metric graphs for each run, which allows you to quickly analyze performance issues.
-A test run contains client-side and server-side metrics. The test engine reports client-side metrics, such as the number of virtual users. The server-side metrics provide application- and resource-specific information.
+You can compare load test runs for the following scenarios:
-By overlaying multiple metrics charts, you can more easily pinpoint performance changes and identify which application component is causing problems.
-
-There are two entry points for comparing load test runs in the Azure portal:
-- Starting from the test runs page, select multiple results to compare.
-- Starting from a specific test run, select other results to compare the runs with.
+- [Identify performance regressions](#identify-performance-regressions) between application builds or configurations. You could run a load test at each development sprint to ensure that the previous sprint didn't introduce performance issues.
+- [Identify which application component is responsible](#identify-the-root-cause) for a performance problem (root cause analysis). For example, an application redesign might result in slower application response times. Comparing load test runs might reveal that the root cause was a lack of database resources.
> [!IMPORTANT] > Azure Load Testing is currently in preview. For legal terms that apply to Azure features that are in beta, in preview, or otherwise not yet released into general availability, see the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
There are two entry points for comparing load test runs in the Azure portal:
- An Azure Load Testing resource with a test plan that has multiple test runs. To create a Load Testing resource, see [Create and run a load test](./quickstart-create-and-run-load-test.md).
- > [!NOTE]
- > If you want to compare a test run, it needs to be in a *Done*, *Stopped*, or *Failed* state.
-
-## Compare test runs from the test runs page
+## Select test runs
+
+To compare test runs in Azure Load Testing, you'll first have to select up to five runs within a load test. You can only compare runs that belong to the same load test.
-In this section, you'll compare multiple results by selecting runs from the test runs page.
+A test run needs to be in the *Done*, *Stopped*, or *Failed* state to compare it.
+
+Use the following steps to select the test runs:
1. Sign in to the [Azure portal](https://portal.azure.com) by using the credentials for your Azure subscription.
In this section, you'll compare multiple results by selecting runs from the test
You can also use the filters to find your load test.
-1. In the list of tests, select the test whose runs you want to compare.
+1. Select the name of the test whose runs you want to compare.
-1. Select two or more test runs, and then select **Compare**.
+1. Select two or more test runs by selecting the corresponding checkboxes in the list.
:::image type="content" source="media/how-to-compare-multiple-test-runs/compare-test-results-from-list.png" alt-text="Screenshot that shows a list of test runs and the 'Compare' button.":::
- > [!NOTE]
- > You can choose a maximum of five test runs to compare.
+ You can choose a maximum of five test runs to compare.
+
+## Compare multiple test runs
- The selected test runs are presented in the dashboard. Each run is shown as an overlay in the different charts.
+After you've selected the test runs you want to compare, you can visually compare the client-side and server-side metrics for each test run in the load test dashboard.
+
+1. Select the **Compare** button to open the load test dashboard.
+
+ Each test run is shown as an overlay in the different graphs.
:::image type="content" source="media/how-to-compare-multiple-test-runs/compare-screen.png" alt-text="Screenshot of the 'Compare' page, displaying a comparison of two test runs.":::
- You can use filters to customize the graphs. There are separate filters for the client and server metrics.
+1. Optionally, use the filters to customize the graphs.
+
+ :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-client-side-filters.png" alt-text="Screenshot of the client-side filter controls on the load test dashboard.":::
> [!TIP]
- > The time filter is based on the relative duration of the tests. A value of zero indicates the beginning of the test, and the maximum value marks the duration of the longest test run. For client-side metrics, test runs show only data for the duration of the test.
+ > The time filter is based on the duration of the tests. A value of zero indicates the start of the test, and the maximum value marks the duration of the longest test run.
-## Compare test runs from the run details page
+## Identify performance regressions
-In this section, you'll use the test run details page and add other test runs to compare.
+You can compare multiple test runs to identify performance regressions. For example, before deploying a new application version in production, you can verify that the performance hasn't degraded.
-1. Go to the test run details page, and then select **Compare**.
+Use the client-side metrics, such as requests per second or response time, on the load test dashboard to quickly spot performance changes between different load test runs:
- :::image type="content" source="media/how-to-compare-multiple-test-runs/test-run-details.png" alt-text="Screenshot of the 'Test run details' page, displaying the 'Compare' button.":::
+1. Hover over the client-side metrics graphs to compare the values across the different test runs.
-1. On the **Compare** page, select two or more test runs that you want to compare.
+ In the following screenshot, you notice that the metric values for **Requests per second** and **Response Time** are significantly different. This difference indicates that application performance dropped between the two test runs.
- :::image type="content" source="media/how-to-compare-multiple-test-runs/choose-runs-to-compare.png" alt-text="Screenshot of the 'Compare' page, displaying test runs to be compared.":::
+ :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-client-side-metrics.png" alt-text="Screenshot of the client-side metrics, highlighting the difference in requests per second and response time.":::
- > [!NOTE]
- > You can choose a maximum of five test runs to compare.
+1. Optionally, use the **Requests** filter to compare a specific application request in the JMeter script.
-1. Select **Compare**.
+ :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-client-side-requests-filter.png" alt-text="Screenshot of the client-side 'requests' filter, which allows you to filter specific application requests.":::
- The selected test runs are presented in the dashboard. Each run is shown as an overlay in the different charts.
+## Identify the root cause
- :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-screen.png" alt-text="Screenshot of the 'Compare' page, displaying a comparison of two test runs.":::
+When there's a performance issue, you can use the server-side metrics to analyze the root cause of the problem. Azure Load Testing can [capture server-side resource metrics](./how-to-update-rerun-test.md) for Azure-hosted applications.
- You can use filters to customize the graphs. There are separate filters for the client and server metrics.
+1. Hover over the server-side metrics graphs to compare the values across the different test runs.
-## Next steps
+ In the following screenshot, you notice from the **Response time** and **Requests** that the application performance has degraded. You can also see that for one test run, the database **RU Consumption** peaks at 100%. This peak indicates that the root cause is likely the database **Provisioned Throughput**.
+
+ :::image type="content" source="media/how-to-compare-multiple-test-runs/compare-server-side-metrics.png" alt-text="Screenshot of the server-side metrics, highlighting the difference in database resource consumption and provisioning throughput.":::
-- For information about high-scale load tests, see [Set up a high-scale load test](./how-to-high-scale-load.md).
+1. Optionally, select **Configure metrics** to add or remove server-side metrics.
+
+ You can add more server-side metrics for the selected Azure app components to further investigate performance problems. The dashboard immediately shows the additional metrics data, and you don't have to rerun the load test.
+
+1. Optionally, use the **Resource** filter to hide or show all metric graphs for an Azure component.
+
+## Next steps
-- To learn about performance test automation, see [Configure automated performance testing](./tutorial-cicd-azure-pipelines.md).
+- Learn more about [exporting the load test results for reporting](./how-to-export-test-results.md).
+- Learn more about [troubleshooting load test execution errors](./how-to-find-download-logs.md).
+- Learn more about [configuring automated performance testing with Azure Pipelines](./tutorial-cicd-azure-pipelines.md).
logic-apps Set Up Devops Deployment Single Tenant Azure Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/set-up-devops-deployment-single-tenant-azure-logic-apps.md
Title: Set up DevOps for single-tenant Azure Logic Apps
-description: How to set up DevOps deployment for workflows in single-tenant Azure Logic Apps.
+ Title: Set up DevOps for Standard logic apps
+description: How to set up DevOps deployment for Standard logic app workflows in single-tenant Azure Logic Apps.
ms.suite: integration Previously updated : 11/02/2021 Last updated : 02/14/2022 # As a developer, I want to automate deployment for workflows hosted in single-tenant Azure Logic Apps by using DevOps tools and processes.
-# Set up DevOps deployment for single-tenant Azure Logic Apps
+# Set up DevOps deployment for Standard logic app workflows in single-tenant Azure Logic Apps
-This article shows how to deploy a single-tenant based logic app project from Visual Studio Code to your infrastructure by using DevOps tools and processes. Based on whether you prefer GitHub or Azure DevOps for deployment, choose the path and tools that work best for your scenario. You can use the included samples that contain example logic app projects plus examples for Azure deployment using either GitHub or Azure DevOps. For more information about DevOps for single-tenant, review [DevOps deployment overview for single-tenant Azure Logic Apps](devops-deployment-single-tenant-azure-logic-apps.md).
+This article shows how to deploy a Standard logic app project from Visual Studio Code to your single-tenant Azure Logic Apps infrastructure by using DevOps tools and processes. Based on whether you prefer GitHub or Azure DevOps for deployment, choose the path and tools that work best for your scenario. You can use the included samples that contain example logic app projects plus examples for Azure deployment using either GitHub or Azure DevOps. For more information about DevOps for single-tenant Azure Logic Apps, review [DevOps deployment overview for single-tenant Azure Logic Apps](devops-deployment-single-tenant-azure-logic-apps.md).
## Prerequisites - An Azure account with an active subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- A single-tenant based logic app project created with [Visual Studio Code and the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
+- A Standard logic app project created with [Visual Studio Code and the Azure Logic Apps (Standard) extension](create-single-tenant-workflows-visual-studio-code.md#prerequisites).
If you haven't already set up your logic app project or infrastructure, you can use the included sample projects to deploy an example app and infrastructure, based on the source and deployment options you prefer to use. For more information about these sample projects and the resources included to run the example logic app, review [Deploy your infrastructure](#deploy-infrastructure).
Both samples include the following resources that a logic app uses to run.
| Azure storage account | Yes, for both stateful and stateless workflows | This Azure resource stores the metadata, keys for access control, state, inputs, outputs, run history, and other information about your workflows. | | Application Insights | Optional | This Azure resource provides monitoring capabilities for your workflows. | | API connections | Optional, if none exist | These Azure resources define any managed API connections that your workflows use to run managed connector operations, such as Office 365, SharePoint, and so on. <p><p>**Important**: In your logic app project, the **connections.json** file contains metadata, endpoints, and keys for any managed API connections and Azure functions that your workflows use. To use different connections and functions in each environment, make sure that you parameterize the **connections.json** file and update the endpoints. <p><p>For more information, review [API connection resources and access policies](#api-connection-resources). |
-| Azure Resource Manager (ARM) template | Optional | This Azure resource defines a baseline infrastructure deployment that you can reuse or [export](../azure-resource-manager/templates/template-tutorial-export-template.md). The template also includes the required access policies, for example, to use managed API connections. <p><p>**Important**: Exporting the ARM template won't include all the related parameters for any API connection resources that your workflows use. For more information, review [Find API connection parameters](#find-api-connection-parameters). |
+| Azure Resource Manager (ARM) template | Optional | This Azure resource defines a baseline infrastructure deployment that you can reuse or [export](../azure-resource-manager/templates/template-tutorial-export-template.md). |
|||| <a name="api-connection-resources"></a>
The following diagram shows the dependencies between your logic app project and
![Conceptual diagram showing infrastructure dependencies for a logic app project in the single-tenant Azure Logic Apps model.](./media/set-up-devops-deployment-single-tenant-azure-logic-apps/infrastructure-dependencies.png)
-<a name="find-api-connection-parameters"></a>
+<a name="deploy-logic-app-resources"></a>
-### Find API connection parameters
+## Deploy logic app resources (zip deploy)
+
+After you push your logic app project to your source repository, you can set up build and release pipelines that deploy logic apps to infrastructure either inside or outside Azure.
+
+### Build your project
-If your workflows use managed API connections, using the export template capability won't include all related parameters. In an ARM template, every [API connection resource definition](logic-apps-azure-resource-manager-templates-overview.md#connection-resource-definitions) has the following general format:
+To set up a build pipeline based on your logic app project type, complete the corresponding actions in the following table:
+
+| Project type | Description and steps |
+|--|--|
+| NuGet-based | The NuGet-based project structure is based on the .NET Framework. To build these projects, make sure to follow the build steps for .NET Standard. For more information, review the documentation for [Create a NuGet package using MSBuild](/nuget/create-packages/creating-a-package-msbuild). |
+| Bundle-based | The extension bundle-based project isn't language-specific and doesn't require any language-specific build steps. You can use any method to zip your project files; see the sketch after this table. <br><br>**Important**: Make sure that your .zip file contains the actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files. |
+|||
+
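For example, here's one way to produce such a .zip file from a script. This is only a sketch, and the project folder name is hypothetical:

```python
import zipfile
from pathlib import Path

project_root = Path("MyLogicAppProject")  # hypothetical project folder

with zipfile.ZipFile("logicapp.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in project_root.rglob("*"):
        if path.is_file():
            # Store paths relative to the project root so that host.json,
            # connections.json, and each workflow folder land at the zip root.
            zf.write(path, path.relative_to(project_root))
```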
+### Before release to Azure
+
+The managed API connections inside your logic app project's **connections.json** file are created specifically for local use in Visual Studio Code. Before you can release your project artifacts from Visual Studio Code to Azure, you have to update them. To use the managed API connections in Azure, you have to update their authentication methods so that they're in the correct format for Azure.
+
+#### Update authentication type
+
+For each managed API connection that uses authentication, you have to update the **authentication** object from the local format in Visual Studio Code to the Azure portal format, as shown by the first and second code examples, respectively:
+
+**Visual Studio Code format**
```json {
- "type": "Microsoft.Web/connections",
- "apiVersion": "2016ΓÇô06ΓÇô01",
- "location": "[parameters('location')]",
- "name": "[parameters('connectionName')]",
- "properties": {}
+ "managedApiConnections": {
+ "sql": {
+ "api": {
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/providers/Microsoft.Web/locations/westus/managedApis/sql"
+ },
+ "connection": {
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/ase/providers/Microsoft.Web/connections/sql-8"
+ },
+ "connectionRuntimeUrl": "https://xxxxxxxxxxxxxx.01.common.logic-westus.azure-apihub.net/apim/sql/xxxxxxxxxxxxxxxxxxxxxxxxx/",
+ "authentication": {
+ "type": "Raw",
+ "scheme": "Key",
+ "parameter": "@appsetting('sql-connectionKey')"
+ }
+ }
+ }
} ```
-To find the values that you need to use in the `properties` object for completing the connection resource definition, you can use the following API for a specific connector:
+**Azure portal format**
-`GET https://management.azure.com/subscriptions/{subscription-ID}/providers/Microsoft.Web/locations/{location}/managedApis/{connector-name}?api-version=2016-06-01`
+```json
+{
+ "managedApiConnections": {
+ "sql": {
+ "api": {
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/providers/Microsoft.Web/locations/westus/managedApis/sql"
+ },
+ "connection": {
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/ase/providers/Microsoft.Web/connections/sql-8"
+ },
+ "connectionRuntimeUrl": "https://xxxxxxxxxxxxxx.01.common.logic-westus.azure-apihub.net/apim/sql/xxxxxxxxxxxxxxxxxxxxxxxxx/",
+ "authentication": {
+ "type": "ManagedServiceIdentity",
+ }
+ }
+}
+```
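If you have several connections to update, you can script the conversion as part of your release pipeline. The following is only a sketch that assumes the structure shown above; the file path is hypothetical:

```python
import json
from pathlib import Path

path = Path("MyLogicAppProject/connections.json")  # hypothetical location
doc = json.loads(path.read_text())

# Replace each connection's local "Raw" authentication object with the
# managed identity format that the Azure runtime expects.
for connection in doc.get("managedApiConnections", {}).values():
    connection["authentication"] = {"type": "ManagedServiceIdentity"}

path.write_text(json.dumps(doc, indent=2))
```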
-In the response, find the `connectionParameters` object, which contains all the information necessary for you to complete resource definition for that specific connector. The following example shows an example resource definition for a SQL managed connection:
+#### Create API connections as needed
+
+If you're deploying your logic app workflow to an Azure region or subscription different from your local development environment, you must also make sure to create these managed API connections before deployment. Azure Resource Manager template (ARM template) deployment is the easiest way to create managed API connections.
+
+The following example shows a SQL managed API connection resource definition in an ARM template:
```json {
In the response, find the `connectionParameters` object, which contains all the
"properties": { "displayName": "sqltestconnector", "api": {
- "id": "/subscriptions/{subscription-ID}/providers/Microsoft.Web/locations/{location}/managedApis/sql"
+ "id": "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/providers/Microsoft.Web/locations/{Azure-region-location}/managedApis/sql"
}, "parameterValues": { "authType": "windows",
In the response, find the `connectionParameters` object, which contains all the
} ```
-As an alternative, you can review the network trace for when you create a connection in the Logic Apps designer. Find the `PUT` call to the managed API for the connector as previously described, and review the request body for all the information you need.
+To find the values that you need to use in the **properties** object for completing the connection resource definition, you can use the following API for a specific connector:
-## Deploy logic app resources (zip deploy)
+`GET https://management.azure.com/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region-location}/managedApis/{connector-name}?api-version=2016-06-01`
-After you push your logic app project to your source repository, you can set up build and release pipelines that deploy logic apps to infrastructure inside or outside Azure.
+In the response, find the **connectionParameters** object, which contains all the information necessary for you to complete the resource definition for that specific connector. The following example shows a resource definition for a SQL managed connection:
-### Build your project
-
-To set up a build pipeline based on your logic app project type, complete the corresponding actions listed in the following table:
+```json
+{
+ "type": "Microsoft.Web/connections",
+ "apiVersion": "2016ΓÇô06ΓÇô01",
+ "location": "[parameters('location')]",
+ "name": "[parameters('connectionName')]",
+ "properties": {
+ "displayName": "sqltestconnector",
+ "api": {
+ "id": "/subscriptions/{Azure-subscription-ID}/providers/Microsoft.Web/locations/{Azure-region-location}/managedApis/sql"
+ },
+ "parameterValues": {
+ "authType": "windows",
+ "database": "TestDB",
+ "password": "TestPassword",
+ "server": "TestServer",
+ "username": "TestUserName"
+ }
+ }
+}
+```
-| Project type | Description and steps |
-|--|--|
-| Nuget-based | The NuGet-based project structure is based on the .NET Framework. To build these projects, make sure to follow the build steps for .NET Standard. For more information, review the [Create a NuGet package using MSBuild](/nuget/create-packages/creating-a-package-msbuild) documentation. |
-| Bundle-based | The extension bundle-based project isn't language-specific and doesn't require any language-specific build steps. You can use any method to zip your project files. <p><p>**Important**: Make sure that your .zip file contains the actual build artifacts, including all workflow folders, configuration files such as host.json, connections.json, and any other related files. |
-|||
+As an alternative, you can capture and review the network trace for when you create a connection using the workflow designer in Azure Logic Apps. Find the `PUT` call that's sent to the connector's managed API as previously described, and review the request body for all the necessary information.
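As a sketch of calling the management API above from a script (acquiring the Azure AD bearer token is out of scope here; for example, you might reuse the output of `az account get-access-token`):

```python
import requests

subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # hypothetical values
location = "westus"
connector = "sql"
token = "<bearer-token>"

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/providers/Microsoft.Web"
    f"/locations/{location}/managedApis/{connector}"
)
resp = requests.get(
    url,
    params={"api-version": "2016-06-01"},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()

# The connectionParameters object describes what the connection resource needs.
print(resp.json()["properties"]["connectionParameters"])
```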
### Release to Azure
-To set up a release pipeline that deploys to Azure, choose the associated option for GitHub, Azure DevOps, or Azure CLI.
+To set up a release pipeline that deploys to Azure, follow the associated steps for GitHub, Azure DevOps, or Azure CLI.
> [!NOTE] > Azure Logic Apps currently doesn't support Azure deployment slots.
az logicapp deployment source config-zip --name MyLogicAppName
-### Release to containers
-
-If you containerize your logic app, deployment works mostly the same as any other container you deploy and manage.
+### After release to Azure
-For examples that show how to implement an end-to-end container build and deployment pipeline, review [CI/CD for Containers](https://azure.microsoft.com/solutions/architecture/cicd-for-containers/).
+Each API connection has access policies. After the zip deployment completes, you must open your logic app resource in the Azure portal, and create access policies for each API connection to set up permissions for the deployed logic app. The zip deployment doesn't create app settings for you. So, after deployment, you must create these app settings based on the **local.settings.json** file in your local Visual Studio Code project.
## Next steps
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Each of the tasks (and some models) have a set of parameters in the `model_setti
| Task | Parameter name | Default | | |- | | |Image classification (multi-class and multi-label) | `valid_resize_size`<br>`valid_crop_size` | 256<br>224 |
-|Object detection, instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`box_nms_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 |
-|Object detection using `yolov5`| `img_size`<br>`model_size`<br>`box_score_thresh`<br>`box_iou_thresh` | 640<br>medium<br>0.1<br>0.5 |
+|Object detection, instance segmentation| `min_size`<br>`max_size`<br>`box_score_thresh`<br>`nms_iou_thresh`<br>`box_detections_per_img` | 600<br>1333<br>0.3<br>0.5<br>100 |
+|Object detection using `yolov5`| `img_size`<br>`model_size`<br>`box_score_thresh`<br>`nms_iou_thresh` | 640<br>medium<br>0.1<br>0.5 |
For a detailed description of task-specific hyperparameters, please refer to [Hyperparameters for computer vision tasks in automated machine learning](reference-automl-images-hyperparameters.md).
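As an illustration only, these inference-time parameters could be collected in a plain settings dictionary using the names from the table above (the values are made up, and how you pass the dictionary depends on the SDK surface you use):

```python
# Illustrative inference-setting overrides for an object detection model.
model_settings = {
    "min_size": 600,
    "max_size": 1333,
    "box_score_thresh": 0.4,        # keep only fairly confident proposals
    "nms_iou_thresh": 0.5,          # IOU threshold for non-maximum suppression
    "box_detections_per_img": 50,   # cap detections per image
}
```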
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
You define input data directories for your pipeline in the pipeline YAML file us
:::image type="content" source="media/how-to-create-component-pipelines-cli/inputs-and-outputs.png" alt-text="Image showing how the inputs and outputs paths map to the jobs inputs and outputs paths" lightbox="media/how-to-create-component-pipelines-cli/inputs-and-outputs.png":::
-1. The `inputs.pipeline_sample_input_data` path creates a key identifier and uploads the input data from the `local_path` directory. This identifier`${{inputs.pipeline_sample_input_data}}` is then used as the value of the `jobs.componentA_job.inputs.componentA_input` key.
-1. The `jobs.componentA_job.outputs.componentA_output` path as an identifier (`${{jobs.componentA_job.outputs.componentA_output`}}) that's used as the value for the next step's `jobs.componentB_job.inputs.componentB_input` key.
-1. As with Component A, the output of Component B is used as the input to Component C.
-1. The pipeline's `outputs.final_pipeline_output` key is the source of the identifier used as the value for the `jobs.componentC_job.outputs.componentC_output` key. In other words, Component C's output is the pipeline's final output.
+1. The `inputs.pipeline_sample_input_data` path (line 6) creates a key identifier and uploads the input data from the `local_path` directory (line 8). This identifier `${{inputs.pipeline_sample_input_data}}` is then used as the value of the `jobs.componentA_job.inputs.componentA_input` key (line 19). In other words, the pipeline's `pipeline_sample_input_data` input is passed to the `componentA_input` input of Component A.
+1. The `jobs.componentA_job.outputs.componentA_output` path (line 21) is used with the identifier `${{jobs.componentA_job.outputs.componentA_output}}` as the value for the next step's `jobs.componentB_job.inputs.componentB_input` key (line 27).
+1. As with Component A, the output of Component B (line 29) is used as the input to Component C (line 35).
+1. The pipeline's `outputs.final_pipeline_output` key (line 11) is the source of the identifier used as the value for the `jobs.componentC_job.outputs.componentC_output` key (line 37). In other words, Component C's output is the pipeline's final output.
Studio's visualization of this pipeline looks like this:
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md
You can explore the REST API using the general pattern of:
| subscriptions/YOUR-SUBSCRIPTION-ID/ | subscriptions/abcde123-abab-abab-1234-0123456789abc/ | | resourceGroups/YOUR-RESOURCE-GROUP/ | resourceGroups/MyResourceGroup/ | | providers/operation-provider/ | providers/Microsoft.MachineLearningServices/ |
-| provider-resource-path/ | workspaces/MLWorkspace/MyWorkspace/FirstExperiment/runs/1/ |
+| provider-resource-path/ | workspaces/MyWorkspace/experiments/FirstExperiment/runs/1/ |
| operations-endpoint/ | artifacts/metadata/ |
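Putting the pattern together, the segments compose into a single URL like the sketch below (the `management.azure.com` host is an assumption here; substitute the endpoint your operation requires):

```python
# Compose the example path segments from the table above into one request URL.
segments = [
    "subscriptions/abcde123-abab-abab-1234-0123456789abc",
    "resourceGroups/MyResourceGroup",
    "providers/Microsoft.MachineLearningServices",
    "workspaces/MyWorkspace/experiments/FirstExperiment/runs/1",
    "artifacts/metadata",
]
url = "https://management.azure.com/" + "/".join(segments)
print(url)
```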
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-hyperparameters.md
This table summarizes hyperparameters specific to the `yolov5` algorithm.
| `model_size` | Model size. <br> Must be `small`, `medium`, `large`, or `xlarge`. <br><br> *Note: training run may get into CUDA OOM if the model size is too big*. | `medium` | | `multi_scale` | Enable multi-scale image by varying image size by +/- 50% <br> Must be 0 or 1. <br> <br> *Note: training run may get into CUDA OOM if no sufficient GPU memory*. | 0 | | `box_score_thresh` | During inference, only return proposals with a score greater than `box_score_thresh`. The score is the multiplication of the objectness score and classification probability. <br> Must be a float in the range [0, 1]. | 0.1 |
-| `nms_iou_thresh` | IoU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
-
+| `nms_iou_thresh` | IOU threshold used during inference in non-maximum suppression post processing. <br> Must be a float in the range [0, 1]. | 0.5 |
## Model agnostic hyperparameters
The following hyperparameters are for object detection and instance segmentation
| `min_size` | Minimum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer. <br> <br> *Note: training run may get into CUDA OOM if the size is too big*.| 600 | | `max_size` | Maximum size of the image to be rescaled before feeding it to the backbone. <br> Must be a positive integer.<br> <br> *Note: training run may get into CUDA OOM if the size is too big*. | 1333 | | `box_score_thresh` | During inference, only return proposals with a classification score greater than `box_score_thresh`. <br> Must be a float in the range [0, 1].| 0.3 |
-| `box_nms_thresh` | Non-maximum suppression (NMS) threshold for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 |
+| `nms_iou_thresh` | IOU (intersection over union) threshold used in non-maximum suppression (NMS) for the prediction head. Used during inference. <br>Must be a float in the range [0, 1]. | 0.5 |
| `box_detections_per_img` | Maximum number of detections per image, for all classes. <br> Must be a positive integer.| 100 | | `tile_grid_size` | The grid size to use for tiling each image. <br>*Note: tile_grid_size must not be None to enable [small object detection](how-to-use-automl-small-object-detect.md) logic*<br> A tuple of two integers passed as a string. Example: --tile_grid_size "(3, 2)" | No Default | | `tile_overlap_ratio` | Overlap ratio between adjacent tiles in each dimension. <br> Must be float in the range of [0, 1) | 0.25 |
managed-instance-apache-cassandra Materialized Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/materialized-views.md
az managed-cassandra datacenter update \
--resource-group $resourceGroupName \ --cluster-name $clusterName \ --data-center-name $dataCenterName \
- --base64-encoded-cassandra-yaml-fragment "$ENCODED_FRAGMENT"
+ --base64-encoded-cassandra-yaml-fragment $ENCODED_FRAGMENT
``` ## Next steps
media-services Asset Create Asset Upload Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-create-asset-upload-portal-quickstart.md
- Title: Use portal to upload, encode, and stream content
-description: This quickstart shows you how to use portal to upload, encode, and stream content with Azure Media Services.
- Previously updated : 01/14/2022
-# Quickstart: Upload, encode, and stream content with portal
--
-This quickstart shows you how to use the Azure portal to upload, encode, and stream content with Azure Media Services.
-
-## Overview
-
-* To start managing, encrypting, encoding, analyzing, and streaming media content in Azure, you need to [create a Media Services account](account-create-how-to.md).
-
- > [!NOTE]
- > If you previously uploaded a video into the Media Services account using Media Services v3 API or the content was generated based on a live output, you will not see the **Encode**, **Analyze**, or **Encrypt** buttons in the Azure portal. Use the Media Services v3 APIs to perform these tasks.
-
- Review the following:
- * [Assets concept](assets-concept.md)
- * [Cloud upload and storage](storage-account-concept.md)
- * [Naming conventions for resource names](media-services-apis-overview.md#naming-conventions)
-
-* Once you upload your high-quality digital media file into an asset (an input asset), you can process it (encode or analyze). The processed content goes into another asset (output asset).
- * [Encode](encode-concept.md) your uploaded file into formats that can be played on a wide variety of browsers and devices.
- * [Analyze](analyze-video-audio-files-concept.md) your uploaded file.
-
- Presently, when using the Azure portal, you can perform the operations such as generating TTML and WebVTT closed caption files. Files in these formats can be used to make the audio and video files accessible to people with hearing or visual disability. You can also extract keywords from your content.
-
- For a rich experience that enables you to extract insights from your audio and video files, use Media Services v3 presets. For more information, see [Tutorial: Analyze videos with Media Services v3](analyze-videos-tutorial.md). If you require detailed insights, use [Video Analyzer for Media](../../azure-video-analyzer/video-analyzer-for-media-docs/index.yml) directly.
-
-* After the content gets processed, you can deliver media content to the client players. To make the videos in the output asset available to the clients for playback, you have to create a [streaming locator](stream-streaming-locators-concept.md). When creating a streaming locator, you need to specify a [streaming policy](stream-streaming-policy-concept.md). Streaming policies enable you to define streaming protocols and encryption options (if any) for your streaming locators. For information on packaging and filtering content, see [Packaging and delivery](encode-dynamic-packaging-concept.md) and [Filters](filters-concept.md).
-
-* You can protect your content by encrypting it with Advanced Encryption Standard (AES-128) or/and any of the three major DRM systems like Microsoft PlayReady, Google Widevine, and Apple FairPlay. For information on how to configure the content protection, see [Quickstart: Use portal to encrypt content](drm-encrypt-content-how-to.md).
-
-## Prerequisites
--
-Follow the steps to [Create a Media Services account](account-create-how-to.md).
-
-## Upload a new video
-
-1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Locate and select your Media Services account.
-1. In the left navigation pane, select **Assets** under **Media Services**.
-1. Select **Upload** at the top of the window.
-1. Choose a **Storage account** from the pull-down menu.
-1. Browse the file that you want to upload. An **Asset name** gets created for your media. If necessary, you can edit this **Asset name**.
-
- > [!TIP]
- > If you want to choose multiple files, add them to one folder in Windows File Explorer. When browsing to **Upload files**, select all the files. This creates multiple assets.
-
-1. Select the desired option at the bottom of the **Upload new assets** window.
-1. Navigate to your **Assets** resource window. After a successful upload, a new asset gets added to the list.
-
-## Add transform
-
-1. Under the **Media Services** services, select **Transforms + jobs**.
-1. Select **Add transform**.
-1. In the **Add a transform** window, enter the details.
-1. If your media is a video, select **Encoding** as your **Transform type**. Select a **Built-in preset name** from the pull-down menu. For more information, see [EncoderNamedPreset](/rest/api/media/transforms/create-or-update#encodernamedpreset).
-1. Select **Add**.
-
-## Encode (Add job)
-
-1. Select either **Assets** or **Transforms + jobs**.
-1. Select **Add job** at the top of the resource window.
-1. In **Create a job** window, enter the details. Select **Create**.
-1. Navigate to **Transforms + jobs**. Select the **Transform name** to check the job status. A job goes through multiple states like **Scheduled** , **Queued**, **Processing**, and **Final**. If the job encounters an error, you get the **Error** state.
-1. Navigate to your **Assets** resource window. After the job gets created successfully, it generates an output asset that contains the encoded content.
-
-## Publish and stream
-
-To publish an asset, you need to add a streaming locator to your asset and run the streaming endpoint.
-
-### Add streaming locator
-
-1. Under Media Services, select **Assets**.
-1. Select the output asset.
-1. Select **New streaming locator**.
-1. In **Add streaming locator** window, enter the details. Select a predefined **Streaming policy**. For more information, see [streaming policies](stream-streaming-policy-concept.md).
-1. If you want your stream to be encrypted, [Create a content key policy](drm-encrypt-content-how-to.md#create-a-content-key-policy) and select it in the **Add streaming locator** window.
-1. Select **Add**. This action publishes the asset and generates the streaming URLs.
-
-### Start streaming endpoint
-1. Once the asset gets published, you can stream it right in the portal. You can also copy the streaming URL and use it in your client player. Make sure the [streaming endpoint](stream-streaming-endpoint-concept.md) is running. When you first create a Media Services account, a default streaming endpoint gets created and remains in a stopped state. **Start** the streaming endpoint to stream your content. You're only billed when your streaming endpoint is in the running state.
-1. Select the output asset.
-1. Select **Start streaming endpoint?**. Select **Start** to run the streaming endpoint. The status of **default** streaming endpoint changes from **Stopped** to **Running**. Your billing will start now. You can now use the streaming URLs to deliver content.
-1. Select **Reload player**.
-
-### Stop streaming endpoint
-
-1. Navigate to **Media Services** and select **Streaming endpoints**.
-1. Select your streaming endpoint **Name**. In this quickstart, we are using the **default** streaming endpoint. The current state is **Running**.
-1. Select **Stop**. A **Stop streaming endpoint?** window gets opened. Select **Yes**. Now, the **default** streaming endpoint is in a **Stopped** state. You cannot use the streaming URLs to deliver the content.
-
-## Cleanup resources
-
-If you intend to try the other quickstarts, you should hold on to the resources created for this quickstart. Otherwise, sign in to the Azure portal, browse to your resource group, select the resource group under which you followed this quickstart, and delete all the resources.
media-services Concept Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/concept-managed-identities.md
Previously updated : 05/17/2021 Last updated : 02/17/2022
A common challenge for developers is the management of secrets and credentials to secure communication between different services. On Azure, managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens. + ## Media Services Managed Identity scenarios There are three scenarios where Managed Identities can be used with Media
media-services Encode Recommended On Premises Live Encoders https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/encode-recommended-on-premises-live-encoders.md
keywords: encoding;encoders;media
Previously updated : 11/10/2020 Last updated : 02/17/2022
To play back content, both an audio and video stream must be present. Playback o
- Whenever possible, use a hardwired internet connection. - When you're determining bandwidth requirements, double the streaming bitrates. Although not mandatory, this simple rule helps to mitigate the impact of network congestion. - When using software-based encoders, close out any unnecessary programs.-- Changing your encoder configuration after it has started pushing has negative effects on the event. Configuration changes can cause the event to become unstable.
+- Changing your encoder configuration after it has started pushing has negative effects on the event. Configuration changes can cause the event to become unstable. If you change your encoder configuration, [reset](https://docs.microsoft.com/rest/api/media/live-events/reset) and restart the live event for the change to take effect. If you stop and start the live event without resetting it, the live event preserves the previous configuration.
- Always test and validate newer versions of encoder software for continued compatibility with Azure Media Services. Microsoft does not re-validate encoders on this list, and most validations are done by the software vendors directly as a "self-certification." - Ensure that you give yourself ample time to set up your event. For high-scale events, we recommend starting the setup an hour before your event. - Use the H.264 video and AAC-LC audio codec output.
To play back content, both an audio and video stream must be present. Playback o
- Use strict CBR encoding recommended for optimum adaptive bitrate performance. > [!IMPORTANT]
-> Watch the physical condition of the machine (CPU / Memory / etc) as uploading fragments to cloud involves CPU and IO operations. If you change any settings in the encoder, be certain reset the channels / live event for the change to take effect.
+> Watch the physical condition of the machine (CPU / Memory / etc) as uploading fragments to cloud involves CPU and IO operations.
+> If you change any encoder configurations, [reset](https://docs.microsoft.com/rest/api/media/live-events/reset) and restart the live event for the change to take effect. If you stop and start the live event without resetting it, the live event preserves the previous configuration.
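As a sketch of issuing that reset from a script, following the ARM path shape in the linked reference (the api-version, resource names, and token are illustrative):

```python
import requests

sub = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"  # hypothetical identifiers
rg, account, live_event = "my-rg", "myamsaccount", "myLiveEvent"
token = "<bearer token for the Azure management endpoint>"

url = (
    "https://management.azure.com"
    f"/subscriptions/{sub}/resourceGroups/{rg}"
    f"/providers/Microsoft.Media/mediaservices/{account}"
    f"/liveEvents/{live_event}/reset"
)
resp = requests.post(
    url,
    params={"api-version": "2021-11-01"},  # illustrative API version
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()  # reset is a long-running operation; poll the status as needed
```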
## See also
media-services Live Event Streaming Best Practices Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/live-event-streaming-best-practices-guide.md
+
+ Title: Media Services live streaming best practices guide
+description: This article describes best practices for achieving low-latency live streams with Azure Media Services.
+ Last updated : 02/14/2022
+# Media Services live streaming best practices guide
+
+Customers often ask how they can reduce the latency of their live stream. There are many factors that
+determine the end-to-end latency of a stream. Here are some that you should consider:
+
+1. Delays on the contribution encoder side. When customers use
+    encoding software such as OBS Studio or Wirecast to send an
+    RTMP live stream to Media Services, the settings in that software
+    are critical to the end-to-end latency of the live stream.
+
+2. Delays in the live streaming pipeline within Azure Media Services.
+
+3. CDN performance
+
+4. Buffering algorithms of the video player and network conditions on
+ the client side
+
+5. Timing of provisioning
+
+## Contribution encoder
+
+As a customer, you control the source encoder settings
+before the RTMP stream reaches Media Services. Here are some
+recommendations for the settings that would give you the lowest possible
+latency:
+
+1. **Pick the region for your Media Services account that is physically
+    closest to your contribution encoder.** This ensures
+    that you have a good network connection to the Media Services
+    account.
+
+2. **Use a consistent fragment size.** We recommend a GOP size of 2
+ seconds. The default on some encoders, such as OBS, is 8 seconds.
+ Make sure that you change this setting.
+
+3. **Use the GPU encoder if your encoding software allows you to do
+ that.** This would allow you to offload CPU work to the GPU.
+
+4. **Use an encoding profile that is optimized for low-latency.** For
+ example, with OBS Studio, if you use the Nvidia H.264 encoder, you
+    may see the "zero latency" preset.
+
+5. **Send content that is no higher in resolution than what you plan to
+ stream.** For example, if you're using 720p standard encoding live
+    events, you send a stream that's already at 720p.
+
+6. **Keep your framerate at 30fps or lower unless using pass-through
+ live events.** While we support 60 fps input for live events, our
+ encoding live event output is still not above 30 fps.
+
+## Configuration of the Azure Media Services live event
+
+Here are some configurations that will help you reduce the latency in
+our pipeline:
+
+1. **Use the 'LowLatency' StreamOption on the live event.**
+
+2. **We recommend that you choose CMAF output for both HLS and DASH
+ playback.** This allows you to share the same fragments for both
+    formats. It increases your cache hit ratio when a CDN is used. For example:
+
+
+| Type | Format | URL example |
+||||
+|HLS CMAF (recommended) | format=m3u8-cmaf | `https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=m3u8-cmaf)` |
+| MPEG-DASH CMAF (recommended) | format=mpd-time-cmaf | `https://amsv3account-usw22.streaming.media.azure.net/21b17732-0112-4d76-b526-763dcd843449/ignite.ism/manifest(format=mpd-time-cmaf)` |
+
+3. **If you must choose TS output, use an HLS packing ratio of 1.** This
+allows us to pack only one fragment into one HLS segment. You won't
+get the full benefits of LL-HLS in native Apple players.
+
+## Player optimizations
+
+**When choosing and configuring a video player, make sure you use settings that are optimized for lower latency.**
+
+Media Services supports different streaming protocol outputs: DASH,
+HLS with TS output, and HLS with CMAF fragments. Depending on the
+player's implementation, buffering decisions impact the latency a
+viewer observes. Poor network conditions, or default algorithms that
+favor quality and stability of playback, could cause players to
+buffer more content upfront to prevent interruptions during playback.
+These buffers before and during the playback sessions add to the
+end-to-end latency.
+
+When Azure Media Player is used, the *Low Latency Heuristics* profile
+optimizes the player to have the lowest possible latency on the player
+side.
+
+## CDN choice
+
+Streaming endpoints are the origin servers that deliver the live and VOD
+streaming content to the CDN or to the customer directly. If a live
+event expects a large audience, or the audience is geographically
+located far away from the streaming endpoint (origin) serving the
+content, it's *important* for the customer to shield the origin using a
+Content Delivery Network (CDN).
+
+We recommend using Azure CDN from Verizon (Standard or
+Premium). We've optimized the integration experience so that you
+can configure this CDN with a single selection in the Azure portal. Be sure to turn on Origin Shield and Streaming Optimizations for
+your CDN endpoint whenever you start your streaming endpoint.
+
+Our customers also have good experiences bringing their own CDN. Ensure that measures are taken on the CDN to shield the origin from
+excessive traffic.
+
+## Streaming endpoint scaling
+
+> [!NOTE]
+> A **standard streaming endpoint/origin** is a *shared* resource
+that allows customers with low traffic volumes to stream content at
+a lower cost. You would **not** use a standard streaming endpoint to
+scale streaming units if you expect large traffic volumes or you plan to
+use a CDN.
+
+A **premium streaming endpoint/origin** offers more flexibility and
+isolation for customers to scale by adding or removing *dedicated*
+streaming units. A *streaming unit* is a compute resource allocated to a
+streaming endpoint. Each streaming unit can stream approximately 200
+Mbps of traffic.
+
+While you can stream many live events at once using
+the same streaming endpoint, the default maximum number of streaming units
+for one streaming endpoint is 10. You can open a support ticket to
+request more than the default 10.
+
+## Determine the premium streaming units needed
+
+There are two steps to determine the number of
+streaming units needed:
+
+1. Determine the total egress needed.
+
+2. Divide the total egress by 200, which is the maximum Mbps each streaming unit can stream.
+
+### Determine the total egress needed
+
+Determine the total egress needed by using the following formula.
+
+*Total egress needed = average bandwidth x number of concurrent viewers
+x percent handled by the streaming endpoint.*
+
+Let's take a look at each of the multipliers in turn.
+
+**Average bandwidth.** What is the *average* bitrate you plan to stream?
+In other words, if you're going to have multiple bitrates available,
+what bitrate is the average of all the bitrates you're planning for?
+You can estimate this using one of the following methods:
+
+For a live event that *includes encoding*:
+
+ - If you don't know what your *average* bandwidth is going to be, you
+ could use our top bitrates as an estimate. Our *top* bitrates are:
+
+    - 5.5 Mbps for 1080p encoded live events; therefore, your
+      average bitrate is going to be somewhere around 3.5 Mbps.
+
+ - Look at the encoding preset used for encoding the live event, for
+ example, the AdaptiveStreaming(H.264) preset. See this [output
+ example](encode-autogen-bitrate-ladder.md#output).
+
+For a live event that is simply using pass-through and not encoding:
+
+ - Check the encoding bitrate ladder used by your local encoder.
+
+**Number of concurrent viewers.** How many concurrent viewers are
+expected? This could be hard to estimate, but do your best based on your
+customer data. Are you streaming a conference to a global audience? Are
+you planning to live stream to sell a set of products to your customers?
+
+**Percent of traffic** **handled by** **the streaming endpoint.** This
+can also be expressed as "the percent of traffic NOT handled by the CDN"
+since that is the number that actually goes into the formula. So, with
+that in mind, what is the CDN offload you expect? If the CDN is expected
+to handle 90% of the live traffic, then only 10% of the traffic would be
+expected on the streaming endpoint. The number used in the formula is
+0.10, which is the percentage of traffic expected on the streaming
+endpoint.
+
+### Determine the number of premium streaming units needed
+
+Premium streaming units needed = (average bandwidth x number of viewers x
+percentage of traffic not handled by the CDN) / 200 Mbps
+
+### Example
+
+You've recently released a new product and want to present it to your
+established customers. You want low latency because you don't want to
+frustrate your already busy audience, so you'll use premium streaming
+endpoints and a CDN.
+
+You have approximately 100,000 customers, but they probably aren't all
+going to watch your live event. You guess that in the best case, only 1%
+of them will attend, which brings your expected concurrent viewers to
+1,000.
+
+*Number of concurrent users =* *1,000*
+
+You've decided that you're going to use Media Services to encode your
+live stream and won't be using pass-through. You don't know what the
+average bandwidth is going to be, but you do know that you'll deliver
+in 1080p (*top* bitrate of 5.5 Mbps), so your *average* bandwidth is
+estimated to be 3.5 Mbps for your calculations.
+
+*Average bandwidth = 3.5 Mbps*
+
+Since your audience is dispersed worldwide, you expect that the CDN will
+handle most (90%) of the live traffic. Therefore, the premium streaming
+endpoints will only handle 10% of the traffic.
+
+*Percent handled by the streaming endpoint = 10% = 0.1*
+
+Using the formula provided above:
+
+*Total egress needed = average bandwidth x number of concurrent viewers
+x percent handled by the streaming endpoint.*
+
+*total egress needed* = 3.5 x 1,000 x 0.1
+
+*total egress needed* = 350 Mbps
+
+Dividing the total egress by 200, you determine that you need 1.75
+premium streaming units.
+
+*premium streaming units needed = total egress needed / 200 Mbps*
+
+*premium streaming units needed* = 1.75
+
+Round this number up to 2, giving you 2 streaming units needed.
+
+### Use the portal to estimate your needs
+
+The Azure portal can help you simplify the calculations. On the
+streaming page, you can use the calculator provided to see the estimated
+audience reach when you change the average bandwidth, CDN hit ratio and
+number of streaming units.
+
+1. From the Media Services account page, select **Streaming endpoints** from
+    the menu.
+
+2. Add a new streaming endpoint by selecting **Add streaming endpoint**.
+
+3. Give the streaming endpoint a name.
+
+4. Select **Premium streaming endpoint** for the streaming endpoint type.
+
+5. Since you're just getting an estimate at this point, don't start
+    the streaming endpoint after creation. Select **No**.
+
+6. Select *Standard Verizon* or *Premium Verizon* for your CDN pricing
+ tier. The profile name will change accordingly. Leave the name as it
+ is for this exercise.
+
+7. For the CDN profile, select **Create New**.
+
+8. Select **Create**. Once the endpoint has been deployed, the streaming
+ endpoints screen will appear.
+
+9. Select the streaming endpoint you just created. The streaming
+ endpoint screen will appear with audience reach estimates.
+
+10. The default setting for the streaming endpoint with 1 streaming unit
+ shows that it's estimated to stream to 571 concurrent viewers at
+ 3.5 Mbps using 90% of the CDN and 10% of the streaming endpoint.
+
+11. Change the percentage of the **Egress source** from 90% from CDN cache
+    to 0%. The calculator will estimate that you'll be able to stream
+    to 57 concurrent viewers at 3.5 Mbps, with 200 Mbps of egress, **without** a CDN.
+
+12. Now change the **Egress source** back to 90%.
+
+13. Then, change the **streaming units** to 2. The calculator will estimate
+    that you'll be able to stream to 1,143 concurrent viewers at
+    3.5 Mbps with 4,000 Mbps of total egress, with the CDN handling 90% of the traffic.
+
+14. Select **Save**.
+
+15. You can start the streaming endpoint and try sending traffic to it.
+ The metrics at the bottom of the screen will track actual traffic.
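
The portal's estimates line up with the formula from earlier. Here's a small sketch that reproduces them (the function and its 200 Mbps default are illustrative, not a Media Services API):

```python
def estimated_viewers(streaming_units, avg_bandwidth_mbps, pct_on_endpoint,
                      unit_capacity_mbps=200):
    """Invert the egress formula: how many viewers can this endpoint serve?"""
    endpoint_capacity_mbps = streaming_units * unit_capacity_mbps
    return round(endpoint_capacity_mbps / (avg_bandwidth_mbps * pct_on_endpoint))

print(estimated_viewers(1, 3.5, 0.10))  # ~571 viewers with the CDN offloading 90%
print(estimated_viewers(1, 3.5, 1.00))  # ~57 viewers without a CDN
print(estimated_viewers(2, 3.5, 0.10))  # ~1143 viewers with 2 streaming units
```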
+
+## Timing
+
+You may want to provision streaming units 1 hour ahead of the expected
+peak usage to ensure they're ready.
media-services Live Event Types Comparison Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/live-event-types-comparison-reference.md
Previously updated : 08/31/2020 Last updated : 02/17/2022
If the source frame rate on input is >30 fps, the frame rate will be reduced to
For both *Default720p* and *Default1080p* presets, audio is encoded to stereo AAC-LC at 128 kbps. The sampling rate follows that of the audio track in the contribution feed.
+> [!NOTE]
+> If the sampling rate is low, such as 8 kHz, the encoded output will be lower than 128 kbps.
+ ## Implicit properties of the live encoder

The previous section describes the properties of the live encoder that can be controlled explicitly via the preset, such as the number of layers, resolutions, and bitrates. This section clarifies the implicit properties.
media-services Security Access Storage Managed Identity Cli Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/security-access-storage-managed-identity-cli-tutorial.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)] + If you would like to access a storage account that's configured to block requests from unknown IP addresses, the Media Services account must be granted access to the storage account. Follow the steps below to create a Managed Identity for the Media Services account and grant this identity access to storage using the Media Services CLI. :::image type="content" source="media/diagrams/managed-identities-scenario-storage-permissions-media-services-account.svg" alt-text="Media Services account uses a Managed Identity to access storage":::
media-services Security Encrypt Data Managed Identity Cli Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/security-encrypt-data-managed-identity-cli-tutorial.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)] + If you'd like Media Services to encrypt data using a key from your Key Vault, the Media Services account must be granted *access* to the Key Vault. Follow the steps below to create a Managed Identity for the Media Services account and grant this identity access to your Key Vault using the Media Services CLI. :::image type="content" source="media/diagrams/managed-identities-scenario-keyvault-media-services-account.svg" alt-text="Media Services account uses Key Vault with a Managed Identity":::
media-services Transform Create Copy Video Audio How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-copy-video-audio-how-to.md
This article shows how to create a `CopyVideo/CopyAudio` transform.
+This transform allows you to have the input video and/or audio streams copied from the input asset to the output asset without any changes. This can be of value with multi-bitrate encoding output where the input video and/or audio would be part of the output. It simply writes the manifest and other files needed to stream the content.
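
If you're working from code rather than the portal, a transform like this can be created with the `azure-mgmt-media` Python package. The sketch below is illustrative (the resource names are placeholders and the filename pattern is an assumption), not the article's official sample:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices
from azure.mgmt.media.models import (
    Transform, TransformOutput, StandardEncoderPreset,
    CopyVideo, CopyAudio, Mp4Format,
)

SUBSCRIPTION_ID = "<subscription-id>"       # placeholder
RESOURCE_GROUP = "<resource-group>"         # placeholder
ACCOUNT_NAME = "<media-services-account>"   # placeholder

client = AzureMediaServices(DefaultAzureCredential(), SUBSCRIPTION_ID)

# A preset that copies the input video and audio streams unchanged into an MP4.
preset = StandardEncoderPreset(
    codecs=[CopyVideo(), CopyAudio()],
    formats=[Mp4Format(filename_pattern="{Basename}{Extension}")],
)

client.transforms.create_or_update(
    resource_group_name=RESOURCE_GROUP,
    account_name=ACCOUNT_NAME,
    transform_name="CopyVideoAudioTransform",
    parameters=Transform(outputs=[TransformOutput(preset=preset)]),
)
```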
+ ## Prerequisites

Follow the steps in [Create a Media Services account](./account-create-how-to.md) to create the needed Media Services account and resource group to create an asset.
media-services Transform Create Thumbnail Sprites How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-thumbnail-sprites-how-to.md
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)]
-How do I create thumbnail sprites? You can create a transform for a job that will generate thumbnail sprites for your videos. This article shows you how with the Media Services 2020-05-01 v3 API.
+This article shows you how to create thumbnail sprites with the Media Services 2020-05-01 v3 API.
+
+You can use Media Encoder Standard to generate a thumbnail sprite, which is a JPEG file that contains multiple small resolution thumbnails stitched together into a single (large) image, together with a VTT file. This VTT file specifies the time range in the input video that each thumbnail represents, together with the size and coordinates of that thumbnail within the large JPEG file. Video players use the VTT file and sprite image to show a 'visual' seekbar, providing a viewer with visual feedback when scrubbing back and forward along the video timeline.
Add the code snippets for your preferred development language.
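
For example, here's a minimal Python sketch using the `azure-mgmt-media` package. The sprite settings (one thumbnail every 5% of the timeline, tiled 10 columns wide, at 20% of the source resolution) and all resource names are illustrative assumptions, not values from this article:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices
from azure.mgmt.media.models import (
    Transform, TransformOutput, StandardEncoderPreset,
    JpgImage, JpgLayer, JpgFormat,
)

SUBSCRIPTION_ID = "<subscription-id>"       # placeholder
RESOURCE_GROUP = "<resource-group>"         # placeholder
ACCOUNT_NAME = "<media-services-account>"   # placeholder

client = AzureMediaServices(DefaultAzureCredential(), SUBSCRIPTION_ID)

# sprite_column tiles the generated thumbnails into a single sprite image;
# a VTT file mapping each tile to its time range is produced alongside it.
preset = StandardEncoderPreset(
    codecs=[
        JpgImage(
            start="0%", step="5%", range="100%",
            sprite_column=10,
            layers=[JpgLayer(width="20%", height="20%", quality=85)],
        )
    ],
    formats=[JpgFormat(filename_pattern="sprite-{Basename}-{Index}{Extension}")],
)

client.transforms.create_or_update(
    resource_group_name=RESOURCE_GROUP,
    account_name=ACCOUNT_NAME,
    transform_name="ThumbnailSpriteTransform",
    parameters=Transform(outputs=[TransformOutput(preset=preset)]),
)
```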
media-services Video On Demand Simple Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/video-on-demand-simple-portal-quickstart.md
+
+ Title: Quickstart Video on Demand with Media Services
+description: This article shows you how to do the basic steps for delivering video on demand (VOD) with Azure Media Services.
+ Last updated : 02/16/2022
+# Quickstart Basic Video On Demand (VOD) with Media Services
+
+This article shows you how to do the basic steps for delivering a video on demand (VOD) application with Azure Media Services and a GitHub repository. All the steps happen in your web browser, using our documentation, the Azure portal, and GitHub.
+
+## Prerequisites
+
+- [Create a Media Services account](account-create-how-to.md). When you set up the Media Services account, a storage account, a user managed identity, and a default streaming endpoint will also be created.
+- One MP4 video to use for this exercise.
+- Create a GitHub account if you don't have one already, and stay logged in.
+- Create an Azure [Static Web App](/azure/static-web-apps/get-started-portal?tabs=vanilla-javascript).
+
+> [!NOTE]
+> You will be switching between several browser tabs or windows during this process. The steps below assume that your browser is set to open new tabs. Keep them all open.
+
+## Upload videos
+
+You should have a Media Services account, a storage account, and a default streaming endpoint.
+
+1. In the portal, navigate to the Media Services account that you just created.
+1. Select **Assets**. Assets are the containers that are used to house your media content.
+1. Select **Upload**. The Upload new assets screen will appear.
+1. Select the storage account you created for the Media Services account from the **Storage account** dropdown menu. It should be selected by default.
+1. Select the **file folder icon** next to the Upload files field.
+1. Select the media files you want to use. An asset will be created for every video you upload. The name of the asset will start with the name of the video, appended with a unique identifier. You *could* upload the same video twice, and it would be located in two different assets.
+1. You must agree to the statement "I have all the rights to use the content/file, and agree that it will be handled per the Online Services Terms and the Microsoft Privacy Statement." Select **I agree and upload.**
+1. Select **Continue upload and close**, or **Close** if you want to watch the video upload progress.
+1. Repeat this process for each of the files you want to stream.
+
+## Create a transform
+
+> [!IMPORTANT]
+> You must encode your files with a transform in order to stream them, even if they have been encoded locally. The Media Services encoding process creates the manifest files needed for streaming.
+
+You'll now create a transform that uses a Built-in preset, which is like a recipe for encoding.
+
+1. Select **Transforms + jobs**.
+1. Select **Add transform**. The Add transform screen will appear.
+1. Enter a transform name in the **Transform name** field.
+1. Select the **Encoding** radio button.
+1. Select ContentAwareEncoding from the **Built-in preset name** dropdown list.
+1. Select **Add**.
+
+Stay on this screen for the next steps.
+
+## Create a job
+
+Next, you'll create a job, which tells Media Services which transform to run on the files within an asset. The asset you choose will be the input asset. The job will create an output asset to contain the encoded files as well as the manifest.
+
+1. Select **Add job**. The Create a job screen will appear.
+1. For the **Input source**, the **Asset** radio button should be selected by default. If not, select it now.
+1. Select **Select an existing asset** and choose one of the assets that was just created when you uploaded your videos. The Select an asset screen will appear.
+1. Select one of the assets in the list. You can only select one at a time for the job.
+1. Select the **Use existing** radio button.
+1. Select the transform that you created earlier from the **Transform** dropdown list.
+1. Under Configure output, default settings will be autopopulated. For this exercise, leave them as they are.
+1. Select **Create**.
+1. Select **Transforms + Jobs**.
+1. You'll see the name of the transform you chose for the job. Select the transform to see the status of the job.
+1. Select the job listed under **Name** in the table of jobs. The job detail screen will open.
+1. Select the output asset from the **Outputs** list. The asset screen will open.
+1. Select the link for the asset next to Storage container. A new browser tab will open and you'll see the results of the job that used the transform. There should be several files in the output asset including:
+    1. Encoded video files with .mpi and .mp4 extensions.
+ 1. A *XXXX_metadata.json* file.
+ 1. A *XXXX_manifest.json* file.
+ 1. A *XXXX_.ism* file.
+ 1. A *XXXX.isc* file.
+ 1. A *ThumbnailXXXX.jpg* file.
+1. Once you've viewed what is in the output asset, close the tab. Go back to the asset browser tab.
+
+## Create a streaming locator
+
+In order to stream your videos, you need a streaming locator.
+
+1. Select **New streaming locator**. The Add streaming locator screen will appear and a default name for the locator will appear. You can change it or leave it as is.
+1. Select *Predefined_ClearStreamingOnly* from the Streaming policy dropdown list. This is a streaming policy that says the video will be streamed by using DASH, HLS, and Smooth with no content protection restrictions, except that the video can't be downloaded by the viewer. No content key policy is required.
+1. Leave the rest of the settings as they are.
+1. Select **Add**. The video will start playing in the player on the screen, and the **Streaming URL** field will be populated.
+1. Select **Show URLs** in the Streaming locator list. The Streaming URLs screen will appear.
+
+On this screen, you'll see that the streaming endpoint that was created when you created your account is in the Streaming endpoint dropdown list along with other data about the streaming locator.
+
+In the streaming and download section, you'll see the URLs to use for your streaming application. For the following steps, you'll use the URL that ends with `(format=m3u8-cmaf)`. Keep this browser tab open as you'll be coming back to it in a later step.
+
+## Create a web page with a video player client
+
+Assuming that you created a Static Web App, you'll now change the HTML in the index.html file. If you didn't create a web app with Azure, you can still use this code where you plan to host your web app.
+
+1. If you aren't already logged in, sign in to GitHub and navigate to the repository you created for the Static Web App.
+1. Navigate to the *index.html* file. It should be in a directory called `src`.
+1. Select the edit pencil icon to edit the file.
+1. Replace the code that is in the HTML file with the following code:
+
+ ```html
+ <html lang="en-US">
+ <head>
+ <meta charset="utf-8">
+ <meta http-equiv="X-UA-Compatible" content="IE=edge">
+ <title>Basic Video on Demand Static Web App</title>
+ <meta name="description" content="">
+ <meta name="viewport" content="width=device-width, initial-scale=1">
+
+ <!--*****START OF Azure Media Player Scripts*****-->
+ <!--Note: DO NOT USE the "latest" folder in production. Replace "latest" with a version number like "1.0.0"-->
+ <!--EX:<script src="//amp.azure.net/libs/amp/1.0.0/azuremediaplayer.min.js"></script>-->
+ <!--Azure Media Player versions can be queried from //aka.ms/ampchangelog-->
+ <link href="//amp.azure.net/libs/amp/latest/skins/amp-default/azuremediaplayer.min.css" rel="stylesheet">
+ <script src="//amp.azure.net/libs/amp/latest/azuremediaplayer.min.js"></script>
+ <!--*****END OF Azure Media Player Scripts*****-->
+ </head>
+ <body>
+ <h1>Clear Streaming Only</h1>
+ <video id="azuremediaplayer" class="azuremediaplayer amp-default-skin amp-big-play-centered" controls autoplay width="640" height="400" poster="" data-setup='{}' tabindex="0">
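+        <!-- Note: the type below matches Smooth Streaming URLs. If you paste the HLS
+             (format=m3u8-cmaf) URL from the steps that follow, you may need to change
+             it to "application/vnd.apple.mpegurl". -->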
+ <source src="put streaming url here" type="application/vnd.ms-sstr+xml" />
+ <p class="amp-no-js">To view this video please enable JavaScript, and consider upgrading to a web browser that supports HTML5 video</p>
+ </video>
+ </body>
+ </html>
+ ```
+
+1. Return to the Azure portal, Streaming locator browser tab where the streaming URLs are located.
+1. Copy the URL that ends with `(format=m3u8-cmaf)` under HLS.
+1. Return to the index file on GitHub browser tab.
+1. Paste the URL into the `src` value of the `<source>` element in the HTML.
+1. Select **Commit changes** to commit the change. It may take a minute for the changes to be live.
+1. Back in the Azure portal, Static web app tab, select the link next to **URL** to open the index page in another tab of your browser. The player should appear on the page.
+1. Select the **video play** button. The video should begin playing. If it isn't playing, check that your streaming endpoint is running.
+
+## Clean up resources
+
+If you don't intend to further develop this basic web app, make sure you delete all the resources you created or you'll be billed.
migrate Troubleshoot Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-assessment.md
This table lists help for fixing the following assessment readiness issues.
**Issue** | **Fix** |
-Unsupported IPv6 | Only applicable to Azure VMware Solution assessments. Azure VMware Solution doesn't support IPv6 internet addresses.Contact the Azure VMware Solution team for remediation guidance if your server is detected with IPv6.
-Unsupported OS | Support for certain Operating System versions have been deprecated by VMware and the assessment recommends you to upgrade the operating system before migrating to Azure VMware Solution. [Learn more](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software)
+Unsupported IPv6 | Only applicable to Azure VMware Solution assessments. Azure VMware Solution doesn't support IPv6 internet addresses. Contact the Azure VMware Solution team for remediation guidance if your server is detected with IPv6.
+Unsupported OS | Support for certain Operating System versions has been deprecated by VMware and the assessment recommends you to upgrade the operating system before migrating to Azure VMware Solution. [Learn more](https://www.vmware.com/resources/compatibility/search.php?deviceCategory=software)
## Suggested migration tool in an import-based Azure VMware Solution assessment is unknown
In the case of VMware and Hyper-V VMs, an Azure VM assessment marks Linux VMs as
- You can determine whether the Linux OS running on the on-premises VM is endorsed in Azure by reviewing [Azure Linux support](../virtual-machines/linux/endorsed-distros.md). - After you've verified the endorsed distribution, you can ignore this warning.
-This gap can be addressed by enabling [application discovery](./how-to-discover-applications.md) on the VMware VMs. An Azure VM assessment uses the operating system detected from the VM by using the guest credentials provided. This operating system data identifies the right OS information in the case of both Windows and Linux VMs.
+This gap can be addressed by enabling [application discovery](./how-to-discover-applications.md) on the VMware VMs. An Azure VM assessment uses the operating system detected from the VM by using the guest credentials provided. This Operating System data identifies the right OS information in the case of both Windows and Linux VMs.
## Operating system version not available
An Azure VM assessment might recommend Azure VM SKUs with more cores and memory
Let's look at an example recommendation:
-We have an on-premises VM with four cores and 8 GB of memory, with 50% CPU utilization and 50% memory utilization, and a specified comfort factor of 1.3.
+We have an on-premises VM with 4 cores and 8 GB of memory, with 50% CPU utilization and 50% memory utilization, and a specified comfort factor of 1.3.
-- If the assessment is **As on-premises**, an Azure VM SKU with four cores and 8 GB of memory is recommended.-- If the assessment is **Performance-based**, based on effective CPU and memory utilization (50% of 4 cores * 1.3 = 2.6 cores and 50% of 8-GB memory * 1.3 = 5.3-GB memory), the cheapest VM SKU of four cores (nearest supported core count) and 8 GB of memory (nearest supported memory size) is recommended.
+- If the assessment is **As on-premises**, an Azure VM SKU with 4 cores and 8 GB of memory is recommended.
+- If the assessment is **Performance-based**, based on effective CPU and memory utilization (50% of 4 cores * 1.3 = 2.6 cores and 50% of 8-GB memory * 1.3 = 5.2-GB memory), the cheapest VM SKU of 4 cores (nearest supported core count) and 8 GB of memory (nearest supported memory size) is recommended.
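
In code form, the performance-based sizing arithmetic from this example looks like the sketch below (the 1.3 comfort factor and the 50% utilization figures come from the example above):

```python
cores, memory_gb = 4, 8
cpu_utilization, mem_utilization, comfort_factor = 0.50, 0.50, 1.3

# Effective demand that the recommended SKU must cover.
effective_cores = cores * cpu_utilization * comfort_factor          # 2.6 cores
effective_memory_gb = memory_gb * mem_utilization * comfort_factor  # 5.2 GB

print(effective_cores, effective_memory_gb)
```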
- [Learn more](concepts-assessment-calculation.md#types-of-assessments) about assessment sizing.

## Why is the recommended Azure disk SKU bigger than on-premises in an Azure VM assessment?
No, currently only disk size, total throughput, and total IOPS are used for sizi
This result is possible because not all VM sizes that support Ultra disk are present in all Ultra disk supported regions. Change the target assessment region to get the VM size for this server.
+## Why is my assessment showing a warning that it was created with an invalid offer?
+
+Your assessment was created with an offer that's no longer valid, so the **Edit** and **Recalculate** buttons are disabled. You can create a new assessment with any of the valid offers: *Pay as you go*, *Pay as you go Dev/Test*, or *Enterprise Agreement*. You can also use the **Discount(%)** field to specify any custom discount on top of the Azure offer. [Learn more](how-to-create-assessment.md).
+ ## Why is my assessment showing a warning that it was created with an invalid combination of Reserved Instances, VM uptime, and Discount (%)?

When you select **Reserved Instances**, the **Discount (%)** and **VM uptime** properties aren't applicable. As your assessment was created with an invalid combination of these properties, the **Edit** and **Recalculate** buttons are disabled. Create a new assessment. [Learn more](./concepts-assessment-calculation.md#whats-an-assessment).
To collect network traffic logs:
- In Microsoft Edge or Internet Explorer, right-click the errors and select **Copy all**. 1. Close Developer Tools.
-## Where is the operating system data in my assessment discovered from?
+## Where is the Operating System data in my assessment discovered from?
-- For VMware VMs, by default, it's the operating system data provided by the vCenter Server.
+- For VMware VMs, by default, it's the Operating System data provided by the vCenter Server.
- For VMware Linux VMs, if application discovery is enabled, the OS details are fetched from the guest VM. To check which OS details are in the assessment, go to the **Discovered servers** view, and mouse over the value in the **Operating system** column. In the text that pops up, you'd be able to see whether the OS data you see is gathered from the vCenter Server or from the guest VM by using the VM credentials. - For Windows VMs, the operating system details are always fetched from the vCenter Server.-- For Hyper-V VMs, the operating system data is gathered from the Hyper-V host.
+- For Hyper-V VMs, the Operating System data is gathered from the Hyper-V host.
- For physical servers, it is fetched from the server. ## Common web apps discovery errors
mysql Concepts Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-data-in-replication.md
To learn more about this parameter, review the [MySQL documentation](https://dev
Data-in Replication is only supported in General Purpose and Memory Optimized pricing tiers.
+>[!Note]
+>GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2).
+ ### Requirements

- The source server version must be at least MySQL version 5.6.
mysql Concepts Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-read-replicas.md
After your application is successfully processing reads and writes, you've compl
## Global transaction identifier (GTID)
-Global transaction identifier (GTID) is a unique identifier created with each committed transaction on a source server and is OFF by default in Azure Database for MySQL. GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB. To learn more about GTID and how it's used in replication, refer to MySQL's [replication with GTID](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids.html) documentation.
+Global transaction identifier (GTID) is a unique identifier created with each committed transaction on a source server and is OFF by default in Azure Database for MySQL. GTID is supported on versions 5.7 and 8.0, and only on servers that support storage up to 16 TB (General purpose storage v2). To learn more about GTID and how it's used in replication, refer to MySQL's [replication with GTID](https://dev.mysql.com/doc/refman/5.7/en/replication-gtids.html) documentation.
MySQL supports two types of transactions: GTID transactions (identified with a GTID) and anonymous transactions (which don't have a GTID allocated).
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/connect-java.md
+ms.devlang: java
Last updated 08/17/2020
mysql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-java.md
+ms.devlang: java
Last updated 01/16/2021
mysql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-restore-server-portal.md
Follow these steps to restore your flexible server using an existing full backup
5. Provide a new server name in the **Name** field in the Server details section.
-6. Select **Review + Create** to review your selections.
+6. When the primary region is down, you can't create geo-redundant servers in the respective geo-paired region, because storage can't be provisioned in the primary region. You must wait for the primary region to be up before you can provision geo-redundant servers in the geo-paired region. With the primary region down, you can still geo-restore the source server to the geo-paired region: disable the geo-redundancy option in the **Compute + Storage** > **Configure Server** settings in the restore portal experience, and restore as a locally redundant server to ensure business continuity.
-7. A notification will be shown that the restore operation has been initiated. This operation may take a few minutes.
+ :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-1.png" alt-text="Compute + Storage window":::
+
+ :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-2.png" alt-text="Disabling Geo-Redundancy":::
+
+ :::image type="content" source="./media/how-to-restore-server-portal/georestore-region-down-3.png" alt-text="Restoring as Locally redundant server":::
+
+7. Select **Review + Create** to review your selections.
+
+8. A notification will be shown that the restore operation has been initiated. This operation may take a few minutes.
The new server created by geo-restore has the same server admin login name and password that was valid for the existing server at the time the restore was initiated. The password can be changed from the new server's Overview page. Additionally during a geo-restore, **Networking** settings such as virtual network settings and firewall rules can be configured as described in the below section.
mysql Tutorial Php Database App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-php-database-app.md
+ms.devlang: php
Last updated 9/21/2020
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This release of Azure Database for MySQL - Flexible Server includes the followin
- When you're using ARM templates for provisioning or configuration changes on HA-enabled servers, a single deployment that enables or disables HA along with other server properties (like backup redundancy or storage) will fail. You can mitigate this by submitting separate deployment requests: one for the HA enable/disable change and one for the other configuration changes. You wouldn't have this issue with the Azure portal or the Azure CLI, because those requests are already separated.
- - When you're viewing automated backups for a HA enabled server in Backup and Restore blade, if at some point in time HA has been disabled for the server and then enabled, you will lose viewing rights to the server's backups on the blade though the flexible server is successfully taking daily automated backups for the server in the backend.
+  - When you're viewing automated backups for an HA-enabled server in the Backup and Restore blade, if at some point in time a forced or automatic failover is performed, you may lose viewing rights to the server's backups on the blade. Although backup information isn't visible in the portal, the flexible server is still taking daily automated backups in the backend, and the server can be restored to any point in time within the retention period.
## November 2021
mysql Howto Data In Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-data-in-replication.md
Review the [limitations and requirements](concepts-data-in-replication.md#limita
> [!IMPORTANT] > The Azure Database for MySQL server must be created in the General Purpose or Memory Optimized pricing tiers as data-in replication is only supported in these tiers.
+ > GTID is supported on versions 5.7 and 8.0 and only on servers that support storage up to 16 TB (General purpose storage v2).
2. Create the same user accounts and corresponding privileges.
call mysql.az_replication_skip_gtid_transaction('<transaction_gtid>')
The procedure can skip the transaction for the given GTID. If the GTID format is not right or the GTID transaction has already been executed, the procedure will fail to execute. The GTID for a transaction can be determined by parsing the binary log to check the transaction events. MySQL provides a utility [mysqlbinlog](https://dev.mysql.com/doc/refman/5.7/en/mysqlbinlog.html) to parse binary logs and display their contents in text format, which can be used to identify GTID of the transaction.
+>[!Important]
+>This procedure can only be used to skip one transaction. It can't be used to skip a GTID set or to set gtid_purged.
+ To skip the next transaction after the current replication position, use the following command to identify the GTID of the next transaction.

```sql
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/connect-java.md
+ms.devlang: java
Last updated 08/17/2020
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
Title: Backup and restore in Azure Database for PostgreSQL - Flexible Server
-description: Learn about the concepts of backup and restore with Azure Database for PostgreSQL - Flexible Server
+description: Learn about the concepts of backup and restore with Azure Database for PostgreSQL - Flexible Server.
Last updated 11/30/2021
# Backup and restore in Azure Database for PostgreSQL - Flexible Server
+Backups form an essential part of any business continuity strategy. They help protect data from accidental corruption or deletion.
-
-Backups form an essential part of any business continuity strategy. They help with protecting data from accidental corruption or deletion. Azure Database for PostgreSQL - Flexible Server automatically performs regular backup of your server. You can then do a point-in-time recovery within the retention period where you can specify the date and time to which you want to restore to. The overall time to restore and recovery typically depends on the size of data and amount of recovery to be performed.
+Azure Database for PostgreSQL - Flexible Server automatically performs regular backups of your server. You can then do a point-in-time recovery (PITR) within a retention period that you specify. The overall time to restore and recovery typically depends on the size of data and the amount of recovery to be performed.
## Backup overview
-Flexible Server takes snapshot backups of the data files and stores them securely in zone-redundant storage or locally redundant storage depending on the [region](overview.md#azure-regions). The server also performs transaction logs backup as and when the WAL file is ready to be archived. These backups allow you to restore a server to any point-in-time within your configured backup retention period. The default backup retention period is seven days and can be stored up to 35 days. All backups are encrypted using AES 256-bit encryption for the data stored at rest.
+Flexible Server takes snapshot backups of data files and stores them securely in zone-redundant storage or locally redundant storage, depending on the [region](overview.md#azure-regions). The server also backs up transaction logs when the write-ahead log (WAL) file is ready to be archived. You can use these backups to restore a server to any point in time within your configured backup retention period.
+
+The default backup retention period is 7 days, but you can extend the period to a maximum of 35 days. All backups are encrypted through AES 256-bit encryption for data stored at rest.
-These backup files cannot be exported or used to create servers outside of Azure Database for PostgreSQL - Flexible Server. For that purpose, you can use PostgreSQL tools pg_dump and pg_restore/psql.
+These backup files can't be exported or used to create servers outside Azure Database for PostgreSQL - Flexible Server. For that purpose, you can use the PostgreSQL tools pg_dump and pg_restore/psql.
## Backup frequency
-Backups on flexible servers are snapshot-based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are currently taken daily once. Transaction log backups occur at a varied frequency depending on the workload and when the WAL file is filled to be archived. In general, the delay (RPO) may be up to 15 minutes.
+Backups on flexible servers are snapshot based. The first snapshot backup is scheduled immediately after a server is created. Snapshot backups are currently taken once daily.
+
+Transaction log backups happen at varied frequencies, depending on the workload and when the WAL file is filled and ready to be archived. In general, the delay (recovery point objective, or RPO) can be up to 15 minutes.
## Backup redundancy options
-Azure Database for PostgreSQL stores multiple copies of your backups so that your data is protected from planned and unplanned events, including transient hardware failures, network or power outages, and massive natural disasters. Azure Database for PostgreSQL provides the flexibility to choose between a local backup copy within a region or a geo-redundant backup (Preview). By default, Azure Database for PostgreSQL server backup uses zone redundant storage if available in the region. If not, it uses locally redundant storage. In addition, customers can choose geo-redundant backup, which is in preview, for Disaster Recovery at the time of server create. Refer to the list of regions where the geo-redundant backups are supported.
+Flexible Server stores multiple copies of your backups to help protect your data from planned and unplanned events. These events can include transient hardware failures, network or power outages, and natural disasters. Backup redundancy helps ensure that your database meets its availability and durability targets, even if failures happen.
-Backup redundancy ensures that your database meets its availability and durability targets even in the case of failures and Azure Database for PostgreSQL extends three options to users -
+Flexible Server offers three options:
-- **Zone-redundant backup storage** : This is automatically chosen for regions that support Availability zones. When the backups are stored in zone-redundant backup storage, multiple copies are not only stored within the availability zone in which your server is hosted, but are also replicated to another availability zone in the same region. This option can be leveraged for scenarios that require high availability or for restricting replication of data to within a country/region to meet data residency requirements. Also this provides at least 99.9999999999% (12 9's) durability of Backups objects over a given year.
+- **Zone-redundant backup storage**: This option is automatically chosen for regions that support availability zones. When the backups are stored in zone-redundant backup storage, multiple copies are not only stored within the availability zone in which your server is hosted, but also replicated to another availability zone in the same region.
-- **Locally redundant backup storage** : This is automatically chosen for regions that do not support Availability zones yet. When the backups are stored in locally redundant backup storage, multiple copies of backups are stored in the same datacenter. This option protects your data against server rack and drive failures. Also this provides at least 99.999999999% (11 9's) durability of Backups objects over a given year. By default backup storage for servers with same-zone high availability (HA) or no high availability configuration is set to locally redundant.
+ This option provides backup data availability across availability zones and restricts replication of data to within a country/region to meet data residency requirements. This option provides at least 99.9999999999 percent (12 nines) durability of backup objects over a year.
-- **Geo-Redundant backup storage (Preview)** : You can choose this option at the time of server creation. When the backups are stored in geo-redundant backup storage, in addition to three copies of data stored within the region in which your server is hosted, but are also replicated to it's geo-paired region. This provides better protection and ability to restore your server in a different region in the event of a disaster. Also this provides at least 99.99999999999999% (16 9's) durability of Backups objects over a given year. One can enable Geo-Redundancy option at server create time to ensure geo-redundant backup storage. Geo redundancy is supported for servers hosted in any of the [Azure paired regions](../../availability-zones/cross-region-replication-azure.md).
+- **Locally redundant backup storage**: This option is automatically chosen for regions that don't support availability zones yet. When the backups are stored in locally redundant backup storage, multiple copies of backups are stored in the same datacenter.
+
+ This option helps protect your data against server rack and drive failures. It provides at least 99.999999999 percent (11 nines) durability of backup objects over a year.
+
+ By default, backup storage for servers with same-zone high availability (HA) or no high-availability configuration is set to locally redundant.
-> [!NOTE]
-> Geo-redundancy backup option can be configured at the time of server creates only.
+- **Geo-redundant backup storage (preview)**: You can choose this option at the time of server creation. When the backups are stored in geo-redundant backup storage, in addition to three copies of data stored within the region where your server is hosted, the data is replicated to a geo-paired region.
+
+ This option provides the ability to restore your server in a different region in the event of a disaster. It also provides at least 99.99999999999999 percent (16 nines) durability of backup objects over a year.
+
+ Geo-redundancy is supported for servers hosted in any of the [Azure paired regions](../../availability-zones/cross-region-replication-azure.md).
## Moving from other backup storage options to geo-redundant backup storage
-Configuring geo-redundant storage for backup is only allowed during server create. Once the server is provisioned, you cannot change the backup storage redundancy option.
+You can configure geo-redundant storage for backup only during server creation. After a server is provisioned, you can't change the backup storage redundancy option.
### Backup retention
-Backups are retained based on the backup retention period setting for the server. You can select a retention period between 7 and 35 days. The default retention period is seven days. You can set the retention period during server creation or you can change it at a later time. Backups are retained even for stopped servers.
+Backups are retained based on the retention period that you set for the server. You can select a retention period between 7 (default) and 35 days. You can set the retention period during server creation or change it at a later time. Backups are retained even for stopped servers.
-The backup retention period governs how far back in time a point-in-time restore can be retrieved, since it is based on backups available. The backup retention period can also be treated as a recovery window from a restore perspective. All backups required to perform a point-in-time restore within the backup retention period are retained in the backup storage. For example - if the backup retention period is set to seven days, the recovery window is considered as last seven days. In this scenario, all the data and logs required to restore and recover the server in last seven days are retained.
+The backup retention period governs how far back in time a PITR can be retrieved, because it's based on available backups. You can also treat the backup retention period as a recovery window from a restore perspective.
-### Backup storage cost
+All backups required to perform a PITR within the backup retention period are retained in the backup storage. For example, if the backup retention period is set to 7 days, the recovery window is the last 7 days. In this scenario, all the data and logs that are required to restore and recover the server in the last 7 days are retained.
-Flexible server provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month. For example, if you have provisioned a server with 250 GiB of storage, then you have 250 GiB of backup storage capacity at no additional charge. If the daily backup usage is 25 GiB, then you can have up to 10 days of free backup storage. Backup storage consumption exceeding 250 GiB is charged as per the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
+### Backup storage cost
-If you configured your server with geo-redundant backup, then the backup data is also copied to the Azure paired region. Hence, your backup size will be two times the local backup copy. Billing is computed as ( (2 x local backup size) - provisioned storage size ) x Price @ GB/month.
+Flexible Server provides up to 100 percent of your provisioned server storage as backup storage at no additional cost. Any additional backup storage that you use is charged in gigabytes per month.
-You can use the [Backup storage used](../concepts-monitoring.md) metric in the Azure portal to monitor the backup storage consumed by a server. The Backup Storage used metric represents the sum of storage consumed by all the database backups and log backups retained based on the backup retention period set for the server.
+For example, if you have provisioned a server with 250 gibibytes (GiB) of storage, then you have 250 GiB of backup storage capacity at no additional charge. If the daily backup usage is 25 GiB, then you can have up to 10 days of free backup storage. Backup storage consumption that exceeds 250 GiB is charged as defined in the [pricing model](https://azure.microsoft.com/pricing/details/postgresql/flexible-server/).
->[!Note]
-> Irrespective of the database size, heavy transactional activity on the server generates more WAL files which in turn increases the backup storage.
+If you configured your server with geo-redundant backup, the backup data is also copied to the Azure paired region. So, your backup size will be two times the local backup copy. Billing is computed as *( (2 x local backup size) - provisioned storage size ) x price @ gigabytes per month*.
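
To make that billing formula concrete, here's a small sketch (the storage rate is a made-up placeholder; check the pricing page for actual rates):

```python
def geo_backup_bill(local_backup_size_gb, provisioned_storage_gb, price_per_gb_month):
    """Billable geo-redundant backup storage: (2 x local backup) minus the free allowance."""
    billable_gb = max(0, 2 * local_backup_size_gb - provisioned_storage_gb)
    return billable_gb * price_per_gb_month

# 300 GiB of local backups on a server provisioned with 250 GiB of storage:
print(geo_backup_bill(300, 250, 0.10))  # (600 - 250) GiB x hypothetical $0.10 = $35.00/month
```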
-The primary means of controlling the backup storage cost is by setting the appropriate backup retention period and choosing the right backup redundancy options to meet your desired recovery goals.
+You can use the [Backup Storage Used](../concepts-monitoring.md) metric in the Azure portal to monitor the backup storage that a server consumes. The Backup Storage Used metric represents the sum of storage consumed by all the retained database backups and log backups, based on the backup retention period set for the server.
-## Point-in-time restore overview
+>[!Note]
+> Irrespective of the database size, heavy transactional activity on the server generates more WAL files. The increase in files in turn increases the backup storage.
-In Flexible server, performing a point-in-time restore creates a new server in the same region as your source server, but you can choose the availability zone. It is created with the source server's configuration for the pricing tier, compute generation, number of vCores, storage size, backup retention period, and backup redundancy option. Also, tags and settings such as VNET and firewall settings are inherited from the source server.
+## Point-in-time recovery
- ### Point-in-time restore
+In Flexible Server, performing a PITR creates a new server in the same region as your source server, but you can choose the availability zone. It's created with the source server's configuration for the pricing tier, compute generation, number of virtual cores, storage size, backup retention period, and backup redundancy option. Also, tags and settings such as virtual networks and firewall settings are inherited from the source server.
-The physical database files are first restored from the snapshot backups to the server's data location. The appropriate backup that was taken earlier than the desired point-in-time is automatically chosen and restored. A recovery process is then initiated using WAL files to bring the database to a consistent state.
+The physical database files are first restored from the snapshot backups to the server's data location. The appropriate backup that was taken earlier than the desired point in time is automatically chosen and restored. A recovery process then starts by using WAL files to bring the database to a consistent state.
- For example, let us assume the backups are performed at 11pm every night. If the restore point is for August 15, 2020 at 10:00 am, the daily backup of August 14, 2020 is restored. The database will be recovered until 10am of August 15, 2020 using the transaction logs backup from August 14, 11pm to August 15, 10am.
+For example, assume that the backups are performed at 11:00 PM every night. If the restore point is for August 15 at 10:00 AM, the daily backup of August 14 is restored. The database will be recovered until 10:00 AM of August 15 by using the transaction log backup from August 14, 11:00 PM, to August 15, 10:00 AM.
- Please see [these steps](./how-to-restore-server-portal.md) to restore your database server.
+To restore your database server, see [these steps](./how-to-restore-server-portal.md).
> [!IMPORTANT]
-> Restore operations in flexible server always creates a new database server with the name you provide and does not overwrite the existing database server.
+> A restore operation in Flexible Server always creates a new database server with the name that you provide. It doesn't overwrite the existing database server.
+
+PITR is useful in scenarios like these:
-Point-in-time restore is useful in multiple scenarios. For example, when a user accidentally deletes data, drops an important table or database, or if an application accidentally overwrites good data with bad data due to an application defect. You will be able to restore to the last transaction due to continuous backup of transaction logs.
+- A user accidentally deletes data, a table, or a database.
+- An application accidentally overwrites good data with bad data because of an application defect.
-You can choose between a latest restore point and a custom restore point.
+With continuous backup of transaction logs, you'll be able to restore to the last transaction. You can choose between two restore options:
-- **Latest restore point (now)**: This is the default option which allows you to restore the server to the latest point-in-time.
+- **Latest restore point (now)**: This is the default option. It allows you to restore the server to the latest point in time.
-- **Custom restore point**: This option allows you to choose any point-in-time within the retention period defined for this flexible server. By default, the latest time in UTC is auto-selected, and useful if you want to restore to the last committed transaction for your test purposes. You can optionally choose other days and time.
+- **Custom restore point**: This option allows you to choose any point in time within the retention period defined for this flexible server. By default, the latest time in UTC is automatically selected. Automatic selection is useful if you want to restore to the last committed transaction for test purposes. You can optionally choose other days and times.
-The estimated time to recover depends on several factors including the volume of transaction logs to process post the previous backup time, and the total number of databases recovering in the same region at the same time. The overall recovery time usually takes from few minutes up to few hours.
+The estimated time to recover depends on several factors, including the volume of transaction logs to process after the previous backup time, and the total number of databases recovering in the same region at the same time. The overall recovery time usually takes from a few minutes up to a few hours.
-If you have configured your server within a VNET, you can restore to the same VNET or to a different VNET. However, you cannot restore to a public access. Similarly, if you configured your server with public access, you cannot restore to a private VNET access.
+If you've configured your server within a virtual network, you can restore to the same virtual network or to a different virtual network. However, you can't restore to public access. Similarly, if you configured your server with public access, you can't restore to private virtual network access.
> [!IMPORTANT]
-> Deleted servers **cannot** be restored by the user. If you delete the server, all databases that belong to the server are also deleted and cannot be recovered. To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage [management locks](../../azure-resource-manager/management/lock-resources.md). If you accidentally deleted your server, please reach out to support. In some cases, your server may be restored with or without data loss.
+> A user can't restore deleted servers. If you delete a server, all databases that belong to the server are also deleted and can't be recovered. To help protect server resources from accidental deletion or unexpected changes after deployment, administrators can use [management locks](../../azure-resource-manager/management/lock-resources.md).
+>
+>If you accidentally deleted your server, please reach out to support. In some cases, your server might be restored with or without data loss.
-## Geo-redundant backup and restore (Preview)
+## Geo-redundant backup and restore (preview)
-You can configure geo-redundant backup at the time of server creation. Refer to this [quick start guide](./quickstart-create-server-portal.md) on how to enable Geo-redundant backup from Compute+Storage blade.
+To enable geo-redundant backup from the **Compute + storage** pane in the Azure portal, see the [quickstart guide](./quickstart-create-server-portal.md).
>[!IMPORTANT]
-> Geo-redundant backup can only be configured at the time of server creation.
+> Geo-redundant backup can be configured only at the time of server creation.
-Once you have configured your server with geo-redundant backup, you can restore it to a [geo-paired region](../../availability-zones/cross-region-replication-azure.md). Please refer to the geo-redundant backup supported [regions](overview.md#azure-regions).
+After you've configured your server with geo-redundant backup, you can restore it to a [geo-paired region](../../availability-zones/cross-region-replication-azure.md). For more information, see the [supported regions](overview.md#azure-regions) for geo-redundant backup.
-When the server is configured with geo-redundant backup, the backup data is copied to the paired region asynchronously using storage replication. This includes copying of data backup and also transaction logs. After the server creation, please wait at least for one hour before initiating a geo-restore. That will allow the first set of backup data to be replicated to the paired region. Subsequently, the transaction logs and the daily backups are asynchronously copied to the paired region and there could be up to one hour of delay in data transmission. Hence, you can expect up to one hour of RPO when you restore. You can only restore to the last available backup data that is available at the paired region. Currently, point-in-time restore of geo-backup is not available.
+When the server is configured with geo-redundant backup, the backup data and transaction logs are copied to the paired region asynchronously through storage replication. After you create a server, wait at least one hour before initiating a geo-restore. That will allow the first set of backup data to be replicated to the paired region.
-The estimated time to recover the server (RTO) depends on factors including the size of the database, the last database backup time, and the amount of WAL to process till the last received backup data. The overall recovery time usually takes from few minutes up to few hours.
+Subsequently, the transaction logs and the daily backups are asynchronously copied to the paired region. There might be up to one hour of delay in data transmission. So, you can expect up to one hour of RPO when you restore. You can restore only to the last available backup data that's available at the paired region. Currently, PITR of geo-redundant backups is not available.
-During the geo-restore, the server configurations that can be changed include VNET settings and the ability to remove geo-redundant backup from the restored server. Changing other server configurations such as compute, storage or pricing tier (Burstable, General Purpose, or Memory Optimized) during geo-restore are not supported.
+The estimated time to recover the server (recovery time objective, or RTO) depends on factors like the size of the database, the last database backup time, and the amount of WAL to process until the last received backup data. The overall recovery time usually takes from a few minutes up to a few hours.
-Refer to the [how to guide](how-to-restore-server-portal.md#performing-geo-restore-preview) on performing Geo-restore.
+During the geo-restore, the server configurations that can be changed include virtual network settings and the ability to remove geo-redundant backup from the restored server. Changing other server configurations--such as compute, storage, or pricing tier (Burstable, General Purpose, or Memory Optimized)--during geo-restore is not supported.
+
+For more information about performing a geo-restore, see the [how-to guide](how-to-restore-server-portal.md#performing-geo-restore-preview).
> [!IMPORTANT]
-> When primary region is down, you cannot create geo-redundant servers in the respective geo-paired region as storage cannot be provisioned in the primary region. You must wait for the primary region to be up to provision geo-redundant servers in the geo-paired region.
-> With the primary region down, you can still geo-restore the source server to the geo-paired region by disabling the geo-redundancy option in the Compute + Storage Configure Server settings in the restore portal experience and restore as a locally redundant server to ensure business continuity.
+> When the primary region is down, you can't create geo-redundant servers in the respective geo-paired region, because storage can't be provisioned in the primary region. Before you can provision geo-redundant servers in the geo-paired region, you must wait for the primary region to be up.
+>
+> With the primary region down, you can still geo-restore the source server to the geo-paired region. Disable the geo-redundancy option in the **Compute + Storage** > **Configure Server** settings in the portal, and restore as a locally redundant server to help ensure business continuity.
## Restore and networking
-### Point-in-time restore
+### Point-in-time recovery
+
+If your source server is configured with a *public access* network, you can only restore to public access.
-- If your source server is configured with **public access** network, you can only restore to a **public access**.
-- If your source server is configured with **private access** VNET, then you can either restore in the same VNET or to a different VNET. You cannot perform point-in-time restore across public and private access.
+If your source server is configured with a *private access* virtual network, you can restore either to the same virtual network or to a different virtual network. You can't perform PITR across public and private access.
### Geo-restore

-- If your source server is configured with **public access** network, you can only restore to a **public access**. Also, you would have to apply firewall rules after the restore operation is complete.
-- If your source server is configured with **private access** VNET, then you can only restore to a different VNET - as VNET cannot be spanned across regions. You cannot perform geo-restore across public and private access.
+If your source server is configured with a *public access* network, you can only restore to public access. Also, you have to apply firewall rules after the restore operation is complete.
-## Perform post-restore tasks
+If your source server is configured with a *private access* virtual network, you can only restore to a different virtual network, because virtual networks can't span regions. You can't perform geo-restore across public and private access.
-After restoring the database, you can perform the following tasks to get your users and applications back up and running:
+## Post-restore tasks
-- If the new server is meant to replace the original server, redirect clients and client applications to the new server. Change server name of your connection string to point to the new restored server.
+After you restore the database, you can perform the following tasks to get your users and applications back up and running:
-- Ensure appropriate server-level firewall and VNet rules are in place for users to connect. These rules are not copied over from the original server.
+- If the new server is meant to replace the original server, redirect clients and client applications to the new server. Change the server name of your connection string to point to the new server.
+
+- Ensure that appropriate server-level firewall and virtual network rules are in place for user connections. These rules are not copied over from the original server. (See the sketch after this list.)
-- The restored server's compute can be scaled up / down as needed.
+- Scale up or scale down the restored server's compute as needed.
-- Ensure appropriate logins and database level permissions are in place.
+- Ensure that appropriate logins and database-level permissions are in place.
-- Configure alerts, as appropriate.
+- Configure alerts as appropriate.
-- If you had restored the database configured with high availability, and if you want to configure the restored server with high availability, you can then follow [the steps](./how-to-manage-high-availability-portal.md).
+- If you restored the database configured with high availability, and if you want to configure the restored server with high availability, you can then follow [the steps](./how-to-manage-high-availability-portal.md).
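+
+As a minimal sketch of the firewall task above, assuming hypothetical resource group, server, and client IP values, the rule might be recreated with the Azure CLI:
+
+```bash
+# Recreate a server-level firewall rule on the restored server.
+# Firewall rules aren't copied from the original server, so add
+# them again after the restore completes. All names and the IP
+# address below are placeholders.
+az postgres flexible-server firewall-rule create \
+  --resource-group myresourcegroup \
+  --name myrestoredserver \
+  --rule-name allow-app-client \
+  --start-ip-address 203.0.113.10 \
+  --end-ip-address 203.0.113.10
+```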
## Frequently asked questions
-### Backup related questions
+### Backup-related questions
-* **How do Azure handles backup of my server?**
+* **How does Azure handle backup of my server?**
- By default, Azure Database for PostgreSQL enables automated backups of your entire server (encompassing all databases created) with a default 7 days of retention period. A daily incremental snapshot of the database is performed. The logs (WAL) files are archived to Azure BLOB continuously.
+ By default, Azure Database for PostgreSQL enables automated backups of your entire server (encompassing all databases created) with a default retention period of 7 days. The automated backups include a daily incremental snapshot of the database. The log (WAL) files are archived to Azure Blob Storage continuously.
-* **Can I configure these automatic backup to be retained for long term?**
+* **Can I configure automated backups to retain data for the long term?**
- No. Currently we only support a maximum of 35 days of retention. You can do manual backups and use that for long-term retention requirement.
+ No. Currently, Flexible Server supports a maximum of 35 days of retention. You can use manual backups for a long-term retention requirement.
-* **How do I do manual backup of my Postgres servers?**
+* **How do I manually back up my Postgres servers?**
- You can manually take a backup is by using PostgreSQL tool pg_dump as documented [here](https://www.postgresql.org/docs/current/app-pgdump.html). For examples, you can refer to this [upgrade/migration documentation](../howto-migrate-using-dump-and-restore.md) that you can use for backups as well. If you wish to backup Azure Database for PostgreSQL to a Blob storage, refer to our tech community blog [Backup Azure Database for PostgreSQL to a Blob Storage](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/backup-azure-database-for-postgresql-to-a-blob-storage/ba-p/803343).
+ You can manually take a backup by using the PostgreSQL tool [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html). For examples, see [Migrate your PostgreSQL database by using dump and restore](../howto-migrate-using-dump-and-restore.md).
+
+ If you want to back up Azure Database for PostgreSQL to Blob Storage, see [Back up Azure Database for PostgreSQL to Blob Storage](https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/backup-azure-database-for-postgresql-to-a-blob-storage/ba-p/803343) on our tech community blog.
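+
+ As an illustration, here's a minimal pg_dump sketch against a flexible server; the server, user, and database names are placeholders:
+
+ ```bash
+ # Take a manual, custom-format (compressed) backup of one database.
+ # Replace the host, user, and database with your own values.
+ pg_dump -Fc -v \
+   --host=mydemoserver.postgres.database.azure.com \
+   --username=myadmin \
+   --dbname=mypgsqldb \
+   --file=mypgsqldb.dump
+ ```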
-* **What are the backup windows for my server? Can I customize it?**
+* **What are the backup windows for my server? Can I customize them?**
- Backup windows are inherently managed by Azure and cannot be customized. The first full snapshot backup is scheduled immediately after a server is created. Subsequent snapshot backups are incremental backups that occur once a day.
+ Azure manages backup windows, and you can't customize them. The first full snapshot backup is scheduled immediately after a server is created. Subsequent snapshot backups are incremental and occur once a day.
* **Are my backups encrypted?**
- Yes. All Azure Database for PostgreSQL data, backups and temporary files that are created during query execution are encrypted using AES 256-bit encryption. The storage encryption is always on and cannot be disabled.
+ Yes. All Azure Database for PostgreSQL data, backups, and temporary files that are created during query execution are encrypted through AES 256-bit encryption. Storage encryption is always on and can't be disabled.
-* **Can I restore a single/few database(s) in a server?**
+* **Can I restore a single database or a few databases in a server?**
- Restoring a single/few database(s) or tables is not directly supported. However, you need to restore the entire server to a new server and then extract the table(s) or database(s) needed and import them to your server.
+ Restoring a single database or a few databases or tables is not directly supported. However, you can restore the entire server to a new server, and then extract tables or databases and import them to the new server.
-* **Is my server available while the backup is in progress?**
- Yes. Backups are online operations using snapshots. The snapshot operation only takes few seconds and doesn't interfere with production workloads ensuring high availability of the server.
+* **Is my server available while a backup is in progress?**
-* **When setting up the maintenance window for the server do we need to account for backup window?**
-
- No. Backups are triggered internally as part of the managed service and have no bearing to the Managed Maintenance Window.
+ Yes. Backups are online operations that use snapshots. The snapshot operation takes only a few seconds and doesn't interfere with production workloads, to help ensure high availability of the server.
-* **Where are my automated backups stored and how do I manage their retention?**
+* **When I'm setting up the maintenance window for the server, do I need to account for the backup window?**
- Azure Database for PostgreSQL automatically creates server backups and stores them automatically in zone-redundant storage in regions where multiple zones are supported or in locally redundant storage in regions that do not support multiple zones yet. These backup files cannot be exported. You can use backups to restore your server to a point-in-time only. The default backup retention period is seven days. You can optionally configure the backup retention up to 35 days. If you configured with geo-redundant backup, the backup is also copied to the paired region.
+ No. Backups are triggered internally as part of the managed service and have no bearing on the maintenance window.
-* **With geo-redundant backup, how frequently the backup is copied to the paired region?**
-
- When the server is configured with geo-redundant backup, the backup data is stored on geo-redundant storage account which performs the copy of data to the paired region. Data files are copied to the paired region as and when the daily backup occurs at the primary server. WAL files are backed up as and when the WAL files are ready to be archived. These backup data are asynchronously copied in a continuous manner to the paired region. You can expect up to 1 hr of delay in receiving backup data.
+* **Where are my automated backups stored, and how do I manage their retention?**
+
+ Azure Database for PostgreSQL automatically creates server backups and stores them in:
+
+ - Zone-redundant storage, in regions where multiple zones are supported.
+ - Locally redundant storage, in regions that don't support multiple zones yet.
+ - The paired region, if you've configured geo-redundant backup.
+
+ These backup files can't be exported.
+
+ You can use backups to restore your server to a point in time only. The default backup retention period is 7 days. You can optionally configure the backup retention up to 35 days.
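+
+ As a minimal sketch, the retention period might be changed with the Azure CLI as follows; the resource group and server names are placeholders:
+
+ ```bash
+ # Set backup retention to the 35-day maximum (placeholder names).
+ az postgres flexible-server update \
+   --resource-group myresourcegroup \
+   --name mydemoserver \
+   --backup-retention 35
+ ```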
+
+* **With geo-redundant backup, how often is the backup copied to the paired region?**
+
+ When the server is configured with geo-redundant backup, the backup data is stored in a geo-redundant storage account. The storage account copies data files to the paired region when the daily backup occurs at the primary server. WAL files are backed up when they're ready to be archived.
+
+ Backup data is asynchronously copied in a continuous manner to the paired region. You can expect up to one hour of delay in receiving backup data.
* **Can I do PITR at the remote region?**

  No. The data is recovered to the last available backup data at the remote region.
-* **How are backups performed in a HA enabled servers?**
+* **How are backups performed in HA-enabled servers?**
- Flexible server's data volumes are backed up using Managed disk incremental snapshots from the primary server. The WAL backup is performed from either the primary server or the standby server.
+ Data volumes in Flexible Server are backed up through managed disk incremental snapshots from the primary server. The WAL backup is performed from either the primary server or the standby server.
-* **How can I validate backups are performed on my server?**
+* **How can I validate that backups are performed on my server?**
- The best way to validate availability of valid backups is performing periodic point in time restores and ensuring backups are valid and restorable. Backup operations or files are not exposed to the end users.
+ The best way to check backups is to perform periodic PITR and ensure that backups are valid and restorable. Backup operations or files are not exposed to end users.
* **Where can I see the backup usage?**
- In the Azure portal, under Monitoring, click Metrics, you can find "Backup Usage metric" in which you can monitor the total backup usage.
+ In the Azure portal, under **Monitoring**, select **Metrics**. In **Backup usage metric**, you can monitor the total backup usage.
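+
+ As a sketch, the same information can be queried from the Azure CLI; the resource ID is a placeholder, and the metric name `backup_storage_used` is an assumption to verify against the Metrics blade:
+
+ ```bash
+ # Query backup storage usage over the last hour (placeholder IDs).
+ az monitor metrics list \
+   --resource "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup/providers/Microsoft.DBforPostgreSQL/flexibleServers/mydemoserver" \
+   --metric backup_storage_used \
+   --interval PT1H
+ ```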
* **What happens to my backups if I delete my server?**
- If you delete the server, all backups that belong to the server are also deleted and cannot be recovered. To protect server resources, post deployment, from accidental deletion or unexpected changes, administrators can leverage management locks.
+ If you delete a server, all backups that belong to the server are also deleted and can't be recovered. To help protect server resources from accidental deletion or unexpected changes after deployment, administrators can use management locks.
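+
+ For example, a delete lock might be applied with the Azure CLI as in this sketch, assuming placeholder names:
+
+ ```bash
+ # Guard the server against accidental deletion with a CanNotDelete lock.
+ az lock create \
+   --name protect-postgres \
+   --lock-type CanNotDelete \
+   --resource-group myresourcegroup \
+   --resource-name mydemoserver \
+   --resource-type Microsoft.DBforPostgreSQL/flexibleServers
+ ```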
* **How are backups retained for stopped servers?**
- No new backups are performed for stopped servers. All older backups (within the retention window) at the time of stopping the server are retained until the server is restarted, after which backup retention for the active server is governed by its backup retention window.
+ No new backups are performed for stopped servers. All older backups (within the retention window) at the time of stopping the server are retained until the server is restarted. After that, backup retention for the active server is governed by its retention window.
* **How will I be charged and billed for my backups?**
- Flexible server provides up to 100% of your provisioned server storage as backup storage at no additional cost. Any additional backup storage used is charged in GB per month as per the pricing model. Backup storage billing is also governed by the backup retention period selected and backup redundancy option chosen apart from the transactional activity on the server which impacts the total backup storage used directly.
+ Flexible Server provides up to 100 percent of your provisioned server storage as backup storage at no additional cost. Any additional backup storage that you use is charged in gigabytes per month, as defined in the pricing model.
+
+ The backup retention period and backup redundancy option that you select, along with transactional activity on the server, directly affect the total backup storage and billing.
* **How will I be billed for a stopped server?**
- While your server instance is stopped, no new backups are performed. You are charged for provisioned storage and backup storage (backups stored within your specified retention window). Free backup storage is limited to the size of your provisioned database and any excess backup data will be charged using the backup price.
+ While your server instance is stopped, no new backups are performed. You're charged for provisioned storage and backup storage (backups stored within your specified retention window).
+
+ Free backup storage is limited to the size of your provisioned database. Any excess backup data will be charged according to the backup price.
-* **I configured my server with zone-redundant high availability. Do you take two backups and will I be charged twice?**
+* **I configured my server with zone-redundant high availability. Do you take two backups, and will I be charged twice?**
- No. Irrespective of HA or non-HA servers, only one set of backup copy is maintained and you will be charged only once.
+ No. Irrespective of HA or non-HA servers, only one set of backup copies is maintained. You'll be charged only once.
-### Restore related questions
+### Restore-related questions
* **How do I restore my server?**
- Azure supports Point In Time Restore (for all servers) allowing users to restore to latest or custom restore point using Azure portal, Azure CLI and API.
+ Azure supports PITR for all servers. Users can restore to the latest restore point or a custom restore point by using the Azure portal, the Azure CLI, and the API.
- To restore your server from the backups taken manually using tools like pg_dump, you can first create a flexible server and restore your database(s) into the server using [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html).
+ To restore your server from manual backups by using tools like pg_dump, you can first create a flexible server and then restore your databases to the server by using [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html).
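+
+ As a minimal sketch of both paths, assuming placeholder names and a sample UTC timestamp:
+
+ ```bash
+ # Point-in-time restore to a new server with the Azure CLI.
+ az postgres flexible-server restore \
+   --resource-group myresourcegroup \
+   --name mynewserver \
+   --source-server mydemoserver \
+   --restore-time "2022-02-10T13:10:00Z"
+
+ # Restore a manual pg_dump backup into an existing flexible server.
+ pg_restore -v \
+   --host=mynewserver.postgres.database.azure.com \
+   --username=myadmin \
+   --dbname=mypgsqldb \
+   mypgsqldb.dump
+ ```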
* **Can I restore to another availability zone within the same region?**
- Yes. If the region supports multiple availability zones, the backup is stored on ZRS account and allows you to restore to another zone.
+ Yes. If the region supports multiple availability zones, the backup is stored on a zone-redundant storage account so that you can restore to another zone.
-* **How long it takes to do a point in time restore? Why is my restore taking so much time?**
+* **How long does a PITR take? Why is my restore taking so much time?**
- The data restore operation from snapshot does not depend of the size of data, however the recovery process timing which applies the logs (transaction activities to replay) could vary depending on the previous backup of the requested date/time and the amount of logs to process. This is applicable to both restoring within the same zone or to a different zone.
+ The data restore operation from a snapshot doesn't depend on the size of data. But the recovery process timing that applies the logs (transaction activities to replay) might vary, depending on the previous backup of the requested date/time and the number of logs to process. This applies both to restoring within the same zone and to restoring data to a different zone.
-* **If I restore my HA enabled server, do the restore server automatically configured with high availability?**
+* **If I restore my HA-enabled server, is the restored server automatically configured with high availability?**
- No. The server is restored as a single instance flexible server. After the restore is complete, you can optionally configure the server with high availability.
+ No. The server is restored as a single-instance flexible server. After the restore is complete, you can optionally configure the server with high availability.
-* **I configured my server within a VNET. Can I restore to another VNET?**
+* **I configured my server within a virtual network. Can I restore to another virtual network?**
- Yes. At the time of restore, choose a different VNET to restore.
+ Yes. At the time of restore, choose a different virtual network to restore to.
-* **Can I restore my public access server into a VNET or vice-versa?**
+* **Can I restore my public access server to a virtual network or vice versa?**
- No. We currently do not support restoring servers across public and private access.
+ No. Flexible Server currently doesn't support restoring servers across public and private access.
* **How do I track my restore operation?**
- Currently there is no way to track the restore operation. You may monitor the activity log to see if the operation is in progress or complete.
+ Currently, there's no way to track the restore operation. You can monitor the activity log to see if the operation is in progress or complete.
## Next steps

-- Learn about [business continuity](./concepts-business-continuity.md)
-- Learn about [zone redundant high availability](./concepts-high-availability.md)
-- Learn [how to restore](./how-to-restore-server-portal.md)
+- Learn about [business continuity](./concepts-business-continuity.md).
+- Learn about [zone-redundant high availability](./concepts-high-availability.md).
+- Learn [how to restore](./how-to-restore-server-portal.md).
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-java.md
+ms.devlang: java
Last updated 11/30/2021
postgresql Quickstart Connect Psql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-connect-psql.md
When you create your Hyperscale (Citus) server group, a default database named *
citus=>
```
+4. Run a test query. Copy the following command and paste it into the psql
+ prompt, then press enter to run:
+
+ ```sql
+ SHOW server_version;
+ ```
+
+ You should see a result matching the PostgreSQL version you selected
+ during server group creation. For instance:
+
+ ```
+ server_version
+ ----------------
+ 13.5
+ (1 row)
+ ```
+
## Next steps

Now that you've connected to the server group, the next step is to create
postgresql Quickstart Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-create-portal.md
To follow this quickstart, you'll first need to:
| Admin username | Currently required to be the value `citus`, and can't be changed. |
| Password | A new password for the server admin account. It must contain between 8 and 128 characters. Your password must contain characters from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (!, $, #, %, etc.). |
| Version | The latest PostgreSQL major version, unless you have specific requirements. |
- | Compute + storage | The compute, storage, and Tier configurations for your new server. Select **Configure server group**. |
+
+5. Select **Configure server group**.
![compute and storage](../media/quickstart-hyperscale-create-portal/compute.png)
-5. For this quickstart, you can accept the default value of **Basic** for
+ For this quickstart, you can accept the default value of **Basic** for
**Tiers**. The other option, standard tier, creates worker nodes for greater total data capacity and query parallelism. See [tiers](concepts-server-group.md#tiers) for a more in-depth comparison.
-6. Select **Next : Networking >** at the bottom of the screen.
-7. In the **Networking** tab, select **Allow public access from Azure services
+
+6. Select **Save**.
+
+7. Select **Next : Networking >** at the bottom of the screen.
+8. In the **Networking** tab, select **Allow public access from Azure services
   and resources within Azure to this server group**.

   ![networking configuration](../media/quickstart-hyperscale-create-portal/networking.png)
-8. Select **Review + create** and then **Create** to create the server.
+9. Select **Review + create** and then **Create** to create the server.
Provisioning takes a few minutes.
-9. The page will redirect to monitor deployment. When the live status changes
+10. The page will redirect to monitor deployment. When the live status changes
from **Deployment is in progress** to **Your deployment is complete**, select **Go to resource**.
postgresql Quickstart Distribute Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-distribute-tables.md
By default, `create_distributed_table()` splits tables into 32 shards. We can
verify using the `citus_shards` view:

```sql
-SELECT table_name, count(*)
+SELECT table_name, count(*) AS shards
FROM citus_shards
GROUP BY 1;
```

```
- table_name | count
----------------+-------
- github_events | 32
- github_users | 32
+ table_name | shards
+---------------+--------
+ github_users | 32
+ github_events | 32
(2 rows)
```
SELECT table_name, count(*)
We're ready to fill the tables with sample data. For this quickstart, we'll use a dataset previously captured from the GitHub API.
+Run the following commands to download example CSV files and load them into the
+database tables. (The `curl` command downloads the files, and comes
+pre-installed in the Azure Cloud Shell.)
+ ```
+-- download users and store in table
+ \COPY github_users FROM PROGRAM 'curl https://examples.citusdata.com/users.csv' WITH (FORMAT CSV)
+-- download events and store in table
+ \COPY github_events FROM PROGRAM 'curl https://examples.citusdata.com/events.csv' WITH (FORMAT CSV)
```

We can confirm the shards now hold data:

```sql
-SELECT table_name, pg_size_pretty(sum(shard_size))
+SELECT table_name,
+ pg_size_pretty(sum(shard_size)) AS shard_size_sum
FROM citus_shards
GROUP BY 1;
```

```
- table_name | pg_size_pretty
+ table_name | shard_size_sum
+----------------+----------------
 github_users   | 38 MB
 github_events  | 95 MB
postgresql Quickstart Run Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-run-queries.md
SELECT count(*) FROM github_users;
Recall that `github_users` is a distributed table, meaning its data is divided between multiple shards. Hyperscale (Citus) automatically runs the count on all
-the shards in parallel, and combines the results.
+shards in parallel, and combines the results.
+
+Let's continue looking at a few more query examples:
```sql
-- Find all events for a single user.
SELECT created_at, event_type, repo->>'name' AS repo_name
## More complicated queries
-Hyperscale (Citus) uses an advanced query planner to transform arbitrary SQL
-queries into tasks running across shards. The tasks run in parallel on
-horizontally scalable worker nodes.
- Here's an example of a more complicated query, which retrieves hourly statistics for push events on GitHub. It uses PostgreSQL's JSONB feature to handle semi-structured data.
ORDER BY hour;
(4 rows)
```
-Hyperscale (Citus) also automatically applies changes to data definition across
+Hyperscale (Citus) also automatically applies data definition changes across
the shards of a distributed table. ```sql
The quickstart is now complete. You've successfully created a scalable
Hyperscale (Citus) server group, created tables, sharded them, loaded data, and run distributed queries.
-Here are good resources to begin to deepen your knowledge.
+Here are good resources to deepen your knowledge.
* See a more detailed [illustration](tutorial-shard.md) of distributed query execution.
-* Discover [useful diagnostic queries](howto-useful-diagnostic-queries.md) to
- inspect distributed tables.
-* Learn how to speed up the per-minute `http_request` aggregation from this
- example with "roll-ups" in the [real-time
- dashboard](tutorial-design-database-realtime.md) tutorial.
+* Scale your server group by [adding
+ nodes](howto-scale-grow.md#add-worker-nodes) and [rebalancing
+ shards](howto-scale-rebalance.md).
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
Previously updated : 11/10/2021 Last updated : 02/16/2022
In Azure Purview, there are a few options to use as authentication methods to scan
- Service Principal (using [Key Vault](#create-azure-key-vaults-connections-in-your-azure-purview-account))
- Consumer Key (using [Key Vault](#create-azure-key-vaults-connections-in-your-azure-purview-account))
-Before creating any credentials, consider your data source types and networking requirements to decide which authentication method is needed for your scenario. Review the following decision tree to find which credential is most suitable:
+Before creating any credentials, consider your data source types and networking requirements to decide which authentication method you need for your scenario. Review the following decision tree to find which credential is most suitable:
:::image type="content" source="media/manage-credentials/manage-credentials-decision-tree-small.png" alt-text="Manage credentials decision tree" lightbox="media/manage-credentials/manage-credentials-decision-tree.png"::: ## Use Azure Purview system-assigned managed identity to set up scans
-If you are using the Azure Purview system-assigned managed identity (SAMI) to set up scans, you will not have to explicitly create a credential and link your key vault to Azure Purview to store them. For detailed instructions on adding the Azure Purview SAMI to have access to scan your data sources, refer to the data source-specific authentication sections below:
+If you're using the Azure Purview system-assigned managed identity (SAMI) to set up scans, you won't need to create a credential and link your key vault to Azure Purview to store them. For detailed instructions on adding the Azure Purview SAMI to have access to scan your data sources, refer to the data source-specific authentication sections below:
- [Azure Blob Storage](register-scan-azure-blob-storage-source.md#authentication-for-a-scan)
- [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md#authentication-for-a-scan)
If you are using the Azure Purview system-assigned managed identity (SAMI) to se
## Grant Azure Purview access to your Azure Key Vault
+To give Azure Purview access to your Azure Key Vault, there are two things you'll need to confirm:
+
+- [Firewall access to the Azure Key Vault](#firewall-access-to-azure-key-vault)
+- [Azure Purview permissions on the Azure Key Vault](#azure-purview-permissions-on-the-azure-key-vault)
+
+### Firewall access to Azure Key Vault
+
+If your Azure Key Vault has disabled public network access, you have two options to allow access for Azure Purview.
+
+- [Trusted Microsoft services](#trusted-microsoft-services)
+- [Private endpoint connections](#private-endpoint-connections)
+
+#### Trusted Microsoft services
+
+Azure Purview is listed as one of [Azure Key Vault's trusted services](../key-vault/general/overview-vnet-service-endpoints.md#trusted-services), so if public network access is disabled on your Azure Key Vault you can enable access only to trusted Microsoft services, and Azure Purview will be included.
+
+You can enable this setting in your Azure Key Vault under the **Networking** tab.
+
+At the bottom of the page, under Exception, enable the **Allow trusted Microsoft services to bypass this firewall** feature.
++
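+
+As a sketch, the same firewall behavior can be set from the Azure CLI, assuming a hypothetical vault name:
+
+```bash
+# Keep the Key Vault firewall on, but let trusted Microsoft services
+# (including Azure Purview) bypass it. The vault name is a placeholder.
+az keyvault update \
+  --name my-purview-keyvault \
+  --default-action Deny \
+  --bypass AzureServices
+```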
+#### Private endpoint connections
+
+To connect to Azure Key Vault with private endpoints, follow [Azure Key Vault's private endpoint documentation](../key-vault/general/private-link-service.md).
+
+### Azure Purview permissions on the Azure Key Vault
Currently Azure Key Vault supports two permission models:

-- Option 1 - Access Policies
-- Option 2 - Role-based Access Control
+- [Option 1 - Access Policies](#option-1assign-access-using-key-vault-access-policy)
+- [Option 2 - Role-based Access Control](#option-2assign-access-using-key-vault-azure-role-based-access-control)
Before assigning access to the Azure Purview system-assigned managed identity (SAMI), first identify your Azure Key Vault permission model from Key Vault resource **Access Policies** in the menu. Follow the steps below based on the relevant permission model.
-### Option 1 - Assign access using Key Vault Access Policy
+#### Option 1 - Assign access using Key Vault Access Policy
Follow these steps only if permission model in your Azure Key Vault resource is set to **Vault Access Policy**:
Follow these steps only if permission model in your Azure Key Vault resource is
4. In the **Secrets permissions** dropdown, select **Get** and **List** permissions.
-5. For **Select principal**, choose the Azure Purview system managed identity. You can search for the Azure Purview SAMI using either the Azure Purview instance name **or** the managed identity application ID. We do not currently support compound identities (managed identity name + application ID).
+5. For **Select principal**, choose the Azure Purview system managed identity. You can search for the Azure Purview SAMI using either the Azure Purview instance name **or** the managed identity application ID. We don't currently support compound identities (managed identity name + application ID).
:::image type="content" source="media/manage-credentials/add-access-policy.png" alt-text="Add access policy":::
Follow these steps only if permission model in your Azure Key Vault resource is
:::image type="content" source="media/manage-credentials/save-access-policy.png" alt-text="Save access policy":::
-### Option 2 - Assign access using Key Vault Azure role-based access control
+#### Option 2 - Assign access using Key Vault Azure role-based access control
Follow these steps only if permission model in your Azure Key Vault resource is set to **Azure role-based access control**:
The following steps will show you how to create a UAMI for Azure Purview to use.
### Create a user-assigned managed identity
-1. In the [Azure Portal](https://portal.azure.com/) navigate to your Azure Purview account.
+1. In the [Azure portal](https://portal.azure.com/) navigate to your Azure Purview account.
1. In the **Managed identities** section on the left menu, select the **+ Add** button to add user assigned managed identities. :::image type="content" source="media/manage-credentials/create-new-managed-identity.png" alt-text="Screenshot showing managed identity screen in the Azure portal with user-assigned and add highlighted.":::
-1. After finishing the setup, go back to your Azure Purview account in the Azure Portal. If the managed identity is successfully deployed, you'll see the Azure Purview account's status as **Succeeded**.
+1. After finishing the setup, go back to your Azure Purview account in the Azure portal. If the managed identity is successfully deployed, you'll see the Azure Purview account's status as **Succeeded**.
- :::image type="content" source="media/manage-credentials/status-successful.png" alt-text="Screenshot the Azure Purview account in the Azure Portal with Status highlighted under the overview tab and essentials menu.":::
+ :::image type="content" source="media/manage-credentials/status-successful.png" alt-text="Screenshot the Azure Purview account in the Azure portal with Status highlighted under the overview tab and essentials menu.":::
1. Once the managed identity is successfully deployed, navigate to the [Azure Purview Studio](https://web.purview.azure.com/), by selecting the **Open Azure Purview Studio** button.
The following steps will show you how to create a UAMI for Azure Purview to use.
1. In the [Azure Purview Studio](https://web.purview.azure.com/), navigate to the Management Center in the studio and then navigate to the Credentials section.
1. Create a user-assigned managed identity by selecting **+New**.
-1. Select the Managed identity authentication method, and select your user assigned managed identity from the drop down menu.
+1. Select the Managed identity authentication method, and select your user assigned managed identity from the drop-down menu.
:::image type="content" source="media/manage-credentials/new-user-assigned-managed-identity-credential.png" alt-text="Screenshot showing the new managed identity creation tile, with the Learn More link highlighted.":::
purview Troubleshoot Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/troubleshoot-connections.md
This article describes how to troubleshoot connection errors while setting up sc
If you're using a managed identity or service principal as a method of authentication for scans, you'll have to allow these identities to have access to your data source.
-There are specific instructions for each source type:
-
-- [Azure multiple sources](register-scan-azure-multiple-sources.md#authentication-for-registration)
-- [Azure Blob Storage](register-scan-azure-blob-storage-source.md#authentication-for-a-scan)
-- [Azure Cosmos DB](register-scan-azure-cosmos-database.md#authentication-for-a-scan)
-- [Azure Data Explorer](register-scan-azure-data-explorer.md#authentication-for-registration)
-- [Azure Data Lake Storage Gen1](register-scan-adls-gen1.md#prerequisites-for-scan)
-- [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#prerequisites-for-scan)
-- [Azure SQL Database](register-scan-azure-sql-database.md)
-- [Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md#authentication-for-registration)
-- [Azure Synapse Analytics](register-scan-azure-synapse-analytics.md#authentication-for-registration)
-- [SQL Server](register-scan-on-premises-sql-server.md#authentication-for-registration)
-- [Power BI](register-scan-power-bi-tenant.md)
-- [Amazon S3](register-scan-amazon-s3.md#create-an-azure-purview-credential-for-your-aws-s3-scan)
+There are specific instructions for each [source type](azure-purview-connector-overview.md).
+
+> [!IMPORTANT]
+> Verify that you have followed all prerequisite and authentication steps for the source you are connecting to.
+> You can find all available sources listed in the [Azure Purview supported sources article](azure-purview-connector-overview.md).
## Verifying Azure Role-based Access Control to enumerate Azure resources in Azure Purview Studio

### Registering single Azure data source
-To register a single data source in Azure Purview, such as an Azure Blog Storage or an Azure SQL Database, you must be granted at least **Reader** role on the resource or inherited from higher scope such as resource group or subscription. Note that some Azure RBAC roles, such as Security Admin do not have read access to view Azure resources in control plane.
+
+To register a single data source in Azure Purview, such as an Azure Blob Storage or an Azure SQL Database, you must be granted at least the **Reader** role on the resource, or inherited from a higher scope such as the resource group or subscription. Some Azure RBAC roles, such as Security Admin, don't have read access to view Azure resources in control plane.
Verify this by following the steps below:
-1. From the [Azure portal](https://portal.azure.com), navigate to the resource that you are trying to register in Azure Purview. If you can view the resource, it is likely, that you already have at least reader role on the resource.
+1. From the [Azure portal](https://portal.azure.com), navigate to the resource that you're trying to register in Azure Purview. If you can view the resource, it's likely, that you already have at least reader role on the resource.
2. Select **Access control (IAM)** > **Role Assignments**. 3. Search by name or email address of the user who is trying to register data sources in Azure Purview.
-4. Verify if any role assignments such as Reader exists in the list or add a new role assignment if needed.
+4. Verify if any role assignments, such as Reader, exist in the list or add a new role assignment if needed.
### Scanning multiple Azure data sources
+
1. From the [Azure portal](https://portal.azure.com), navigate to the subscription or the resource group.
-2. Select **Access Control (IAM)** from the left menu.
-3. Select **+Add**.
+2. Select **Access Control (IAM)** from the left menu.
+3. Select **+Add**.
4. In the **Select input** box, select the **Reader** role and enter your Azure Purview account name (which represents its MSI name).
5. Select **Save** to finish the role assignment.
6. Repeat the steps above to add the identity of the user who is trying to create a new scan for multiple data sources in Azure Purview. (An Azure CLI sketch of the same assignment follows this list.)
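
As a sketch, the same Reader assignment can be made from the Azure CLI; the object ID and scope below are placeholders:

```bash
# Assign the Reader role at subscription or resource group scope to
# the Purview account's managed identity (or to the scanning user).
az role assignment create \
  --assignee <purview-account-or-user-object-id> \
  --role Reader \
  --scope "/subscriptions/<subscription-id>/resourceGroups/myresourcegroup"
```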
-## Scanning data sources using Private Link
-If public endpoint is restricted on your data sources, to scan Azure data sources using Private Link, you need to setup a Self-hosted integration runtime and create a credential.
+## Scanning data sources using Private Link
+
+If the public endpoint is restricted on your data sources, you need to set up a self-hosted integration runtime and create a credential to scan Azure data sources by using Private Link.
> [!IMPORTANT]
> Scanning multiple data sources that contain databases such as Azure SQL Database with _Deny public network access_ will fail. To scan these data sources by using a private endpoint, use the register single data source option instead.
Verify this by following the steps below:
1. Navigate to your Key Vault. 1. Select **Settings** > **Secrets**. 1. Select the secret you're using to authenticate against your data source for scans.
-1. Select the version that you intend to use and verify that the password or account key is correct by selecting **Show Secret Value**.
+1. Select the version that you intend to use and verify that the password or account key is correct by selecting **Show Secret Value**.
## Verify permissions for the Azure Purview managed identity on your Azure Key Vault
To verify this, do the following steps:
1. Navigate to your key vault and to the **Access policies** section
-1. Verify that your Azure Purview managed identity shows under the *Current access policies* section with at least **Get** and **List** permissions on Secrets
+1. Verify that your Azure Purview managed identity shows under the _Current access policies_ section with at least **Get** and **List** permissions on Secrets
:::image type="content" source="./media/troubleshoot-connections/verify-minimum-permissions.png" alt-text="Image showing dropdown selection of both Get and List permission options":::
-If you don't see your Azure Purview managed identity listed, then follow the steps in [Create and manage credentials for scans](manage-credentials.md) to add it.
+If you don't see your Azure Purview managed identity listed, then follow the steps in [Create and manage credentials for scans](manage-credentials.md) to add it.
## Next steps
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
If you recently invited a user when creating a role assignment, this security pr
However, if this security principal is not a recently invited user, it might be a deleted security principal. If you assign a role to a security principal and then you later delete that security principal without first removing the role assignment, the security principal will be listed as **Identity not found** and an **Unknown** type.
-If you list this role assignment using Azure PowerShell, you might see an empty `DisplayName` and `SignInName`. For example, [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) returns a role assignment that is similar to the following output:
+If you list this role assignment using Azure PowerShell, you might see an empty `DisplayName` and `SignInName`, or a value for `ObjectType` of `Unknown`. For example, [Get-AzRoleAssignment](/powershell/module/az.resources/get-azroleassignment) returns a role assignment that is similar to the following output:
``` RoleAssignmentId : /subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.Authorization/roleAssignments/22222222-2222-2222-2222-222222222222
role-based-access-control Tutorial Custom Role Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-custom-role-cli.md
editor: '' ''
route-server Multiregion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/multiregion.md
+
+ Title: 'Multi-region designs with Azure Route Server'
+description: Learn about how Azure Route Server enables multi-region designs.
++++ Last updated : 02/03/2022+++
+# Multi-region networking with Azure Route Server
+
+Applications that have demanding requirements around high availability or disaster recovery often need to be deployed in more than one Azure region, where spoke VNets in multiple regions need to communicate with each other. One way to achieve this communication pattern is to peer all spokes that need to communicate, but those flows would bypass any central NVAs in the hubs, such as firewalls. Another possibility is using User Defined Routes (UDRs) in the subnets where the hub NVAs are deployed, but those can be difficult to maintain. Azure Route Server offers an alternative that is dynamic and adapts to topology changes without manual intervention.
+
+## Topology
+
+The following diagram shows a dual-region architecture, where a hub and spoke topology exists in each region, and the hub VNets are peered to each other via VNet global peering:
++
+Each NVA learns the prefixes for the local hub and spokes from its Azure Route Server, and communicates them to the NVA in the other region via BGP. This communication between the NVAs should be established over an encapsulation technology such as IPsec or Virtual eXtensible LAN (VXLAN), because otherwise routing loops can occur in the network.
+
+The spokes need to be peered with the hub VNet with the setting "Use Remote Gateways", so that Azure Route Server advertises their prefixes to the local NVAs, and it injects learnt routes back into the spokes.
+
+The NVAs will advertise to their local Route Server the routes that they learn from the remote region, and Route Server will configure these routes in the local spokes, hence attracting traffic. If there are multiple NVAs in the same region (Route Server supports up to 8 BGP adjacencies), AS path prepending can be used to make one of the NVAs preferred to the others, hence defining an active/standby NVA topology.
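+
+As a minimal sketch, each BGP adjacency between a hub's Route Server and an NVA might be created with the Azure CLI as follows; the names, IP address, and ASN are placeholders that must match the NVA's BGP configuration:
+
+```bash
+# Create a BGP peering between the region 1 Route Server and its NVA.
+az network routeserver peering create \
+  --resource-group myresourcegroup \
+  --routeserver myrouteserver-region1 \
+  --name nva-region1 \
+  --peer-ip 10.0.1.4 \
+  --peer-asn 65001
+```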
+
+## ExpressRoute
+
+This design can be combined with ExpressRoute or VPN gateways. The following diagram shows a topology including an ExpressRoute gateway connected to an on-premises network in one of the Azure regions. In this case, an overlay network over the ExpressRoute circuit will help to simplify the network, so that on-premises prefixes will only appear in Azure as advertised by the NVA (and not from the ExpressRoute gateway).
++
+## Design without overlays
+
+The cross-region tunnels between the NVAs are required because otherwise a routing loop is formed. For example, looking at the NVA in region 1:
+
+- The NVA in region 1 learns the prefixes from region 2, and advertises them to the Route Server in region 1
+- The Route Server in region 1 will inject routes for those prefixes in all subnets in the local region, with the NVA in region 1 as the next hop
+- For traffic from region 1 to region 2, when the NVA in region 1 sends traffic to the other NVA, its own subnet also inherits the routes programmed by the Route Server, which point to itself (the NVA). So the packet is returned to the NVA, and a routing loop appears.
+
+If UDRs are an option, you could disable BGP route propagation in the NVAs' subnets, and configure static UDRs instead of an overlay, so that Azure can route traffic to the remote spokes.
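+
+As a sketch of that alternative, a static route toward the remote spokes might look like this, assuming hypothetical names, a sample spoke prefix, and a placeholder next-hop address for the remote NVA:
+
+```bash
+# Static UDR in the region 1 NVA subnet's route table that sends
+# region 2's spoke prefix to the region 2 NVA over the hub peering.
+# All names, prefixes, and addresses are placeholders.
+az network route-table route create \
+  --resource-group myresourcegroup \
+  --route-table-name rt-nva-region1 \
+  --name to-region2-spokes \
+  --address-prefix 10.2.0.0/16 \
+  --next-hop-type VirtualAppliance \
+  --next-hop-ip-address 10.2.1.4
+```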
+
+## Next steps
+
+* [Learn how Azure Route Server works with ExpressRoute](expressroute-vpn-support.md)
+* [Learn how Azure Route Server works with a network virtual appliance](resource-manager-template-samples.md)
route-server Vmware Solution Default Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/vmware-solution-default-route.md
+
+ Title: 'Injecting default route to Azure VMware Solution'
+description: Learn about how to advertise a default route to Azure VMware Solution with Azure Route Server.
++++ Last updated : 02/03/2022+++
+# Injecting a default route to Azure VMware Solution
+
+[Azure VMware Solution](../azure-vmware/introduction.md) is an Azure service where native VMware vSphere workloads run and communicate with other Azure services. This communication happens over ExpressRoute, and Azure Route Server can be used to modify the default behavior of Azure VMware Solution networking. For example, a default route can be injected from a Network Virtual Appliance (NVA) in Azure to attract traffic from AVS and inspect it before sending it out to the public Internet, or to analyze traffic between AVS and the on-premises network.
+
+## Topology
+
+The following diagram describes a basic hub and spoke topology connected to an AVS cloud and to an on-premises network through ExpressRoute. The diagram shows how the default route (`0.0.0.0/0`) is originated by the NVA in Azure, and propagated by Azure Route Server to Azure VMware Solution through ExpressRoute.
++
+> [!IMPORTANT]
+> The default route advertised by the NVA will be propagated to the on-premises network as well, so it needs to be filtered out in the customer routing environment.
+
+Communication between Azure VMware Solution and the on-premises network will typically happen over ExpressRoute Global Reach, as described in [Peer on-premises environments to Azure VMware Solution](../azure-vmware/tutorial-expressroute-global-reach-private-cloud.md).
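+
+As a sketch, you can verify from the Azure CLI that the default route is being learned from the NVA peering; the names below are placeholders:
+
+```bash
+# List the routes Route Server has learned from the NVA peering.
+# The 0.0.0.0/0 default route should appear if the NVA advertises it.
+az network routeserver peering list-learned-routes \
+  --resource-group myresourcegroup \
+  --routeserver myrouteserver \
+  --name nva-peering
+```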
+
+## Communication between Azure VMware Solution and the on-premises network via NVA
+
+If the NVA should inspect not only Internet traffic but also traffic between AVS and the on-premises network (instead of sending that traffic over ExpressRoute Global Reach), an additional transit VNet is required to avoid potential routing loops. These loops would occur because a single ExpressRoute gateway can't route packets properly in this scenario; more specifically, the User Defined Routes in the GatewaySubnet can point either to the NVA or to on-premises, but not to both.
+
+An additional NVA would be required in this transit VNet, and both NVAs would exchange the routes they learn from their respective Azure Route Servers via BGP and some sort of encapsulation protocol such as VXLAN or IPsec, as the following diagram shows.
++
+Encapsulation is needed because the NVA NICs would learn the routes from ExpressRoute or from the Route Server, so they would send packets that need to be routed to the other NVA in the wrong direction (potentially creating a routing loop that returns the packets to the local NVA).
+
+## Next steps
+
+* [Learn how Azure Route Server works with ExpressRoute](expressroute-vpn-support.md)
+* [Learn how Azure Route Server works with a network virtual appliance](resource-manager-template-samples.md)
scheduler Scheduler Advanced Complexity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/scheduler/scheduler-advanced-complexity.md
- Title: Build advanced job schedules and recurrences
-description: Learn how to create advanced schedules and recurrences for jobs in Azure Scheduler.
------ Previously updated : 02/15/2022--
-# Build advanced schedules and recurrences for jobs in Azure Scheduler
-
-> [!IMPORTANT]
-> [Azure Logic Apps](../logic-apps/logic-apps-overview.md) has replaced Azure Scheduler, which is fully retired
-> since January 31, 2022. Please migrate your Azure Scheduler jobs by recreating them as workflows in Azure Logic Apps
-> following the steps in [Migrate Azure Scheduler jobs to Azure Logic Apps](migrate-from-scheduler-to-logic-apps.md).
-> Azure Scheduler is no longer available in the Azure portal. The [Azure Scheduler REST API](/rest/api/scheduler) and
-> [Azure Scheduler PowerShell cmdlets](scheduler-powershell-reference.md) no longer work.
-
-Within an [Azure Scheduler](../scheduler/scheduler-intro.md) job,
-the schedule is the core that determines when and how the Scheduler
-service runs the job. You can set up multiple one-time and recurring
-schedules for a job with Scheduler. One-time schedules run only once
-at a specified time and are basically recurring schedules that run only once.
-Recurring schedules run on a specified frequency. With this flexibility,
-you can use Scheduler for various business scenarios, for example:
-
-* **Clean up data regularly**: Create a daily job
-that deletes all tweets older than three months.
-
-* **Archive data**: Create a monthly job that pushes
-invoice history to a backup service.
-
-* **Request external data**: Create a job that runs
-every 15 minutes and pulls a new weather report from NOAA.
-
-* **Process images**: Create a weekday job that runs
-during off-peak hours and uses cloud computing for
-compressing images uploaded during the day.
-
-This article describes example jobs you can create by using Scheduler
-and the [Azure Scheduler REST API](/rest/api/scheduler),
-and includes the JavaScript Object Notation (JSON) definition for each schedule.
-
-## Supported scenarios
-
-These examples show the range of scenarios that Azure Scheduler supports
-and how to create schedules for various behavior patterns, for example:
-
-* Run once at a specific date and time.
-* Run and recur a specific number of times.
-* Run immediately and recur.
-* Run and recur every *n* minutes, hours, days,
-weeks, or months, starting at a specific time.
-* Run and recur weekly or monthly, but only on
-specific days of the week or on specific days of the month.
-* Run and recur more than once for a specific period.
-For example, every month on the last Friday and Monday,
-or daily at 5:15 AM and at 5:15 PM.
-
-This article later describes these scenarios in more detail.
-
-<a name="create-scedule"></a>
-
-## Create schedule with REST API
-
-To create a basic schedule with the
-[Azure Scheduler REST API](/rest/api/scheduler),
-follow these steps:
-
-1. Register your Azure subscription with a resource provider
-by using the [Register operation - Resource Manager REST API](/rest/api/resources/providers).
-The provider name for the Azure Scheduler service is **Microsoft.Scheduler**.
-
-1. Create a job collection by using the
-[Create or Update operation for job collections](/rest/api/scheduler/jobcollections)
-in the Scheduler REST API.
-
-1. Create a job by using the
-[Create or Update operation for jobs](/rest/api/scheduler/jobs/createorupdate).
-
-## Job schema elements
-
-This table provides a high-level overview for the major JSON elements
-you can use when setting up recurrences and schedules for jobs.
-
-| Element | Required | Description |
-||-|-|
-| **startTime** | No | A DateTime string value in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601) that specifies when the job first starts in a basic schedule. <p>For complex schedules, the job starts no sooner than **startTime**. |
-| **recurrence** | No | The recurrence rules for when the job runs. The **recurrence** object supports these elements: **frequency**, **interval**, **schedule**, **count**, and **endTime**. <p>If you use the **recurrence** element, you must also use the **frequency** element, while other **recurrence** elements are optional. |
-| **frequency** | Yes, when you use **recurrence** | The time unit between occurrences and supports these values: "Minute", "Hour", "Day", "Week", "Month", and "Year" |
-| **interval** | No | A positive integer that determines the number of time units between occurrences based on **frequency**. <p>For example, if **interval** is 10 and **frequency** is "Week", the job recurs every 10 weeks. <p>Here are the maximum intervals for each frequency: <p>- 18 months <br>- 78 weeks <br>- 548 days <br>- For hours and minutes, the range is 1 <= <*interval*> <= 1000. |
-| **schedule** | No | Defines changes to the recurrence based on the specified minute-marks, hour-marks, days of the week, and days of the month |
-| **count** | No | A positive integer that specifies the number of times that the job runs before finishing. <p>For example, when a daily job has **count** set to 7, and the start date is Monday, the job finishes running on Sunday. If the start date has already passed, the first run is calculated from the creation time. <p>Without **endTime** or **count**, the job runs infinitely. You can't use both **count** and **endTime** in the same job, but the rule that finishes first is honored. |
-| **endTime** | No | A Date or DateTime string value in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601) that specifies when the job stops running. You can set a value for **endTime** that's in the past. <p>Without **endTime** or **count**, the job runs infinitely. You can't use both **count** and **endTime** in the same job, but the rule that finishes first is honored. |
-||||
-
-For example, this JSON schema describes a basic schedule and recurrence for a job:
-
-```json
-"properties": {
- "startTime": "2012-08-04T00:00Z",
- "recurrence": {
- "frequency": "Week",
- "interval": 1,
- "schedule": {
- "weekDays": ["Monday", "Wednesday", "Friday"],
- "hours": [10, 22]
- },
- "count": 10,
- "endTime": "2012-11-04"
- },
-},
-```
-
-*Dates and DateTime values*
-
-* Dates in Scheduler jobs include only the date and follow the
-[ISO 8601 specification](https://en.wikipedia.org/wiki/ISO_8601).
-
-* Date-times in Scheduler jobs include both date and time,
-follow the [ISO 8601 specification](https://en.wikipedia.org/wiki/ISO_8601),
-and are assumed to be UTC when no UTC offset is specified.
-
-For more information, see [Concepts, terminology, and entities](../scheduler/scheduler-concepts-terms.md).
-
-<a name="start-time"></a>
-
-## Details: startTime
-
-This table describes how **startTime** controls the way a job runs:
-
-| startTime | No recurrence | Recurrence, no schedule | Recurrence with schedule |
-|---|---|---|---|
-| **No start time** | Runs once immediately. | Runs once immediately. Later executions are calculated from the last execution time. | Runs once immediately. Later executions run based on the recurrence schedule. |
-| **Start time in the past** | Runs once immediately. | Calculates the first future run time after the start time, and runs at that time. <p>Later executions are calculated from the last execution time. <p>See the example after this table. | Starts the job *no sooner than* the specified start time. The first occurrence is based on the schedule, calculated from the start time. <p>Later executions run based on the recurrence schedule. |
-| **Start time in the future or the current time** | Runs once at the specified start time. | Runs once at the specified start time. <p>Later executions are calculated from the last execution time. | Starts the job *no sooner than* the specified start time. The first occurrence is based on the schedule, calculated from the start time. <p>Later executions run based on the recurrence schedule. |
-|||||
-
-Suppose this example has these conditions:
-a start time in the past with a recurrence,
-but no schedule.
-
-```json
-"properties": {
- "startTime": "2015-04-07T14:00Z",
- "recurrence": {
- "frequency": "Day",
- "interval": 2
- }
-}
-```
-
-* The current date and time is April 08, 2015 at 1:00 PM.
-
-* The start date and time is April 07, 2015 at 2:00 PM,
-which is before the current date and time.
-
-* The recurrence is every two days.
-
-1. Under these conditions, the first execution is on April 09, 2015 at 2:00 PM.
-
- Scheduler calculates the execution occurrences based on the start time,
- discards any instances in the past, and uses the next instance in the future.
- In this case, **startTime** is on April 07, 2015 at 2:00 PM, so the next instance
- is two days from that time, which is April 09, 2015 at 2:00 PM.
-
- The first execution is the same whether **startTime**
- is 2015-04-05 14:00 or 2015-04-01 14:00. After the
- first execution, later executions are calculated
- based on the schedule.
-
-1. The executions then follow in this order:
-
- 1. 2015-04-11 at 2:00 PM
- 1. 2015-04-13 at 2:00 PM
- 1. 2015-04-15 at 2:00 PM
- 1. And so on...
-
-1. Finally, when a job has a schedule but no specified hours and minutes,
-these values default to the hours and minutes in the first execution, respectively.
-
-<a name="schedule"></a>
-
-## Details: schedule
-
-You can use **schedule** to *limit* the number of job executions.
-For example, if a job with a **frequency** of "month" has a schedule that runs only on day 31, the job runs only in months that have a 31st day.
-
-You can also use **schedule** to *expand* the number of job executions. For example, if a job with a **frequency** of "month" has a schedule that runs on month days 1 and 2, the job runs on the first and second days of the month instead of only once a month.
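-
-As a sketch, this **recurrence** fragment shows the limiting case; the element shapes match the job schema described earlier:
-
-```json
-"recurrence": {
-  "frequency": "Month",
-  "interval": 1,
-  "schedule": {
-    "monthDays": [31]
-  }
-},
-```
-
-Replacing `[31]` with `[1, 2]` gives the expanding case: the job runs on the first and second days of every month.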
-
-If you specify more than one schedule element, the order of evaluation is from the largest to smallest:
-week number, month day, weekday, hour, and minute.
-
-The following table describes schedule elements in detail:
-
-| JSON name | Description | Valid values |
-|:--- |:--- |:--- |
-| **minutes** |Minutes of the hour at which the job runs. |An array of integers. |
-| **hours** |Hours of the day at which the job runs. |An array of integers. |
-| **weekDays** |Days of the week the job runs. Can be specified only with a weekly frequency. |An array of any of the following values (maximum array size is 7):<br />- "Monday"<br />- "Tuesday"<br />- "Wednesday"<br />- "Thursday"<br />- "Friday"<br />- "Saturday"<br />- "Sunday"<br /><br />Not case-sensitive. |
-| **monthlyOccurrences** |Determines which days of the month the job runs. Can be specified only with a monthly frequency. |An array of **monthlyOccurrences** objects:<br /> `{ "day": day, "occurrence": occurrence}`<br /><br /> **day** is the day of the week the job runs. For example, *{Sunday}* is every Sunday of the month. Required.<br /><br />**occurrence** is the occurrence of the day during the month. For example, *{Sunday, -1}* is the last Sunday of the month. Optional. |
-| **monthDays** |Day of the month the job runs. Can be specified only with a monthly frequency. |An array of the following values:<br />- Any value from -31 to -1<br />- Any value from 1 to 31|
-
-## Examples: Recurrence schedules
-
-The following examples show various recurrence schedules. The examples focus on the schedule object and its subelements.
-
-These schedules assume that **interval** is set to 1\. The examples also assume the correct **frequency** values for the values in **schedule**. For example, you can't use a **frequency** of "day" and have a **monthDays** modification in **schedule**. We describe these restrictions earlier in the article.
-
-| Example | Description |
-|:--- |:--- |
-| `{"hours":[5]}` |Run at 5 AM every day.<br /><br />Scheduler matches up each value in "hours" with each value in "minutes", one by one, to create a list of all the times at which the job runs. |
-| `{"minutes":[15], "hours":[5]}` |Run at 5:15 AM every day. |
-| `{"minutes":[15], "hours":[5,17]}` |Run at 5:15 AM and 5:15 PM every day. |
-| `{"minutes":[15,45], "hours":[5,17]}` |Run at 5:15 AM, 5:45 AM, 5:15 PM, and 5:45 PM every day. |
-| `{"minutes":[0,15,30,45]}` |Run every 15 minutes. |
-| `{"hours":[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]}` |Run every hour.<br /><br />This job runs every hour. The minute is controlled by the value for **startTime**, if it's specified. If no **startTime** value is specified, the minute is controlled by the creation time. For example, if the start time or creation time (whichever applies) is 12:25 PM, the job runs at 00:25, 01:25, 02:25, …, 23:25.<br /><br />The schedule is the same as a job with a **frequency** of "hour", an **interval** of 1, and no **schedule** value. The difference is that you can use this schedule with different **frequency** and **interval** values to create other jobs. For example, if **frequency** is "month", the schedule runs on only one day per month, instead of every day (as it would if **frequency** is "day"). |
-| `{"minutes":[0]}` |Run every hour on the hour.<br /><br />This job also runs every hour, but on the hour (12 AM, 1 AM, 2 AM, and so on). If **frequency** is "day", this schedule is the same as a job with a **frequency** of "hour", a **startTime** on a zero-minute mark, and no **schedule**. However, if the **frequency** is "week" or "month", the schedule executes only one day a week or one day a month, respectively. |
-| `{"minutes":[15]}` |Run at 15 minutes past the hour, every hour.<br /><br />Runs every hour, starting at 12:15 AM, 1:15 AM, 2:15 AM, and so on, and ending at 11:15 PM. |
-| `{"hours":[17], "weekDays":["saturday"]}` |Run at 5 PM on Saturday every week. |
-| `{"hours":[17], "weekDays":["monday", "wednesday", "friday"]}` |Run at 5 PM on Monday, Wednesday, and Friday every week. |
-| `{"minutes":[15,45], "hours":[17], "weekDays":["monday", "wednesday", "friday"]}` |Run at 5:15 PM and 5:45 PM on Monday, Wednesday, and Friday every week. |
-| `{"hours":[5,17], "weekDays":["monday", "wednesday", "friday"]}` |Run at 5 AM and 5 PM on Monday, Wednesday, and Friday every week. |
-| `{"minutes":[15,45], "hours":[5,17], "weekDays":["monday", "wednesday", "friday"]}` |Run at 5:15 AM, 5:45 AM, 5:15 PM, and 5:45 PM on Monday, Wednesday, and Friday every week. |
-| `{"minutes":[0,15,30,45], "weekDays":["monday", "tuesday", "wednesday", "thursday", "friday"]}` |Run every 15 minutes on weekdays. |
-| `{"minutes":[0,15,30,45], "hours": [9, 10, 11, 12, 13, 14, 15, 16] "weekDays":["monday", "tuesday", "wednesday", "thursday", "friday"]}` |Run every 15 minutes on weekdays, between 9 AM and 4:45 PM. |
-| `{"weekDays":["sunday"]}` |Run on Sundays at start time. |
-| `{"weekDays":["tuesday", "thursday"]}` |Run on Tuesdays and Thursdays at start time. |
-| `{"minutes":[0], "hours":[6], "monthDays":[28]}` |Run at 6 AM on the 28th day of every month (assuming a **frequency** of "month"). |
-| `{"minutes":[0], "hours":[6], "monthDays":[-1]}` |Run at 6 AM on the last day of the month.<br /><br />If you want to run a job on the last day of a month, use -1 instead of day 28, 29, 30, or 31. |
-| `{"minutes":[0], "hours":[6], "monthDays":[1,-1]}` |Run at 6 AM on the first and last day of every month. |
-| `{"monthDays":[1,-1]}` |Run on the first and last day of every month at start time. |
-| `{"monthDays":[1,14]}` |Run on the first and 14th day of every month at start time. |
-| `{"monthDays":[2]}` |Run on the second day of the month at start time. |
-| `{"minutes":[0], "hours":[5], "monthlyOccurrences":[{"day":"friday", "occurrence":1}]}` |Run on the first Friday of every month at 5 AM. |
-| `{"monthlyOccurrences":[{"day":"friday", "occurrence":1}]}` |Run on the first Friday of every month at start time. |
-| `{"monthlyOccurrences":[{"day":"friday", "occurrence":-3}]}` |Run on the third Friday from the end of the month, every month, at start time. |
-| `{"minutes":[15], "hours":[5], "monthlyOccurrences":[{"day":"friday", "occurrence":1},{"day":"friday", "occurrence":-1}]}` |Run on the first and last Friday of every month at 5:15 AM. |
-| `{"monthlyOccurrences":[{"day":"friday", "occurrence":1},{"day":"friday", "occurrence":-1}]}` |Run on the first and last Friday of every month at start time. |
-| `{"monthlyOccurrences":[{"day":"friday", "occurrence":5}]}` |Run on the fifth Friday of every month at start time.<br /><br />If there's no fifth Friday in a month, the job doesn't run. You might consider using -1 instead of 5 for the occurrence if you want to run the job on the last occurring Friday of the month. |
-| `{"minutes":[0,15,30,45], "monthlyOccurrences":[{"day":"friday", "occurrence":-1}]}` |Run every 15 minutes on the last Friday of the month. |
-| `{"minutes":[15,45], "hours":[5,17], "monthlyOccurrences":[{"day":"wednesday", "occurrence":3}]}` |Run at 5:15 AM, 5:45 AM, 5:15 PM, and 5:45 PM on the third Wednesday of every month. |
-
-## Next steps
-
-* [Azure Scheduler concepts, terminology, and entity hierarchy](scheduler-concepts-terms.md)
-* [Azure Scheduler REST API reference](/rest/api/scheduler)
-* [Azure Scheduler PowerShell cmdlets reference](scheduler-powershell-reference.md)
-* [Azure Scheduler limits, defaults, and error codes](scheduler-limits-defaults-errors.md)
scheduler Scheduler Concepts Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/scheduler/scheduler-concepts-terms.md
- Title: Concepts, terms, and entities
-description: Learn the concepts, terminology, and entity hierarchy, including jobs and job collections, in Azure Scheduler
- Previously updated : 02/15/2022
-# Concepts, terminology, and entities in Azure Scheduler
-
-> [!IMPORTANT]
-> [Azure Logic Apps](../logic-apps/logic-apps-overview.md) has replaced Azure Scheduler, which was fully retired
-> on January 31, 2022. Please migrate your Azure Scheduler jobs by recreating them as workflows in Azure Logic Apps
-> following the steps in [Migrate Azure Scheduler jobs to Azure Logic Apps](migrate-from-scheduler-to-logic-apps.md).
-> Azure Scheduler is no longer available in the Azure portal. The [Azure Scheduler REST API](/rest/api/scheduler) and
-> [Azure Scheduler PowerShell cmdlets](scheduler-powershell-reference.md) no longer work.
-
-## Entity hierarchy
-
-The Azure Scheduler REST API exposes and uses these main entities, or resources:
-
-| Entity | Description |
-|--|-|
-| **Job** | Defines a single recurring action with simple or complex strategies for execution. Actions might include HTTP, Storage queue, Service Bus queue, or Service Bus topic requests. |
-| **Job collection** | Contains a group of jobs and maintains settings, quotas, and throttles that are shared by jobs in the collection. As an Azure subscription owner, you can create job collections and group jobs together based on their usage or application boundaries. A job collection has these attributes: <p>- Constrained to one region. <br>- Lets you enforce quotas so you can constrain usage for all jobs in a collection. <br>- Quotas include MaxJobs and MaxRecurrence. |
-| **Job history** | Describes details for a job execution, for example, status and any response details. |
-|||
-
-## Entity management
-
-At a high level, the Scheduler REST API exposes these operations for managing entities.
-
-### Job management
-
-Supports operations for creating and editing jobs.
-All jobs must belong to an existing job collection,
-so there's no implicit creation. For more information, see
-[Scheduler REST API - Jobs](/rest/api/scheduler/jobs).
-Here's the URI address for these operations:
-
-```
-https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Scheduler/jobCollections/{jobCollectionName}/jobs/{jobName}
-```
-
-### Job collection management
-
-Supports operations for creating and editing job collections,
-which maintain quotas and shared settings. For example, quotas specify
-the maximum number of jobs and the smallest recurrence interval.
-For more information, see [Scheduler REST API - Job Collections](/rest/api/scheduler/jobcollections).
-Here's the URI address for these operations:
-
-```
-https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Scheduler/jobCollections/{jobCollectionName}
-```
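-
-As an illustrative sketch only, a job collection body might set the quotas described earlier. The property names used here (`maxJobCount`, `maxRecurrence`) are assumptions; check the Job Collections REST reference for the authoritative schema.
-
-```json
-{
-  "location": "South Central US",
-  "properties": {
-    "sku": { "name": "Standard" },
-    "quota": {
-      "maxJobCount": 10,
-      "maxRecurrence": {
-        "frequency": "Minute",
-        "interval": 1
-      }
-    }
-  }
-}
-```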
-
-### Job history management
-
-Supports the GET operation for fetching 60 days of job execution history,
-for example, job elapsed time and job execution results.
-Includes query string parameter support for filtering based on state and status.
-For more information, see [Scheduler REST API - Jobs - List Job History](/rest/api/scheduler/jobs/listjobhistory).
-Here's the URI address for this operation:
-
-```
-https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroupName}/providers/Microsoft.Scheduler/jobCollections/{jobCollectionName}/jobs/{jobName}/history
-```
-
-## Job types
-
-Azure Scheduler supports multiple job types:
-
-* HTTP jobs, including HTTPS jobs that support TLS,
-for when you have the endpoint for an existing service or workload
-* Storage queue jobs for workloads that use Storage queues,
-such as posting messages to Storage queues
-* Service Bus queue jobs for workloads that use Service Bus queues
-* Service Bus topic jobs for workloads that use Service Bus topics
-
-## Job definition
-
-At a high level, a Scheduler job has these basic parts:
-
-* The action that runs when the job timer fires
-* Optional: The time to run the job
-* Optional: When and how often to repeat the job
-* Optional: An error action that runs if the primary action fails
-
-The job also includes system-provided data such as the job's next scheduled run time.
-The job's code definition is an object in JavaScript Object Notation (JSON) format,
-which has these elements:
-
-| Element | Required | Description |
-|---|---|---|
-| [**startTime**](#start-time) | No | The start time for the job with a time zone offset in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601) |
-| [**action**](#action) | Yes | The details for the primary action, which can include an **errorAction** object |
-| [**errorAction**](#error-action) | No | The details for the secondary action that runs if the primary action fails |
-| [**recurrence**](#recurrence) | No | The details such as frequency and interval for a recurring job |
-| [**retryPolicy**](#retry-policy) | No | The details for how often to retry an action |
-| [**state**](#state) | Yes | The details for the job's current state |
-| [**status**](#status) | Yes | The details for the job's current status, which is controlled by the service |
-||||
-
-Here's an example that shows a comprehensive job definition for an
-HTTP action with fuller element details described in later sections:
-
-```json
-"properties": {
- "startTime": "2012-08-04T00:00Z",
- "action": {
- "type": "Http",
- "request": {
- "uri": "http://contoso.com/some-method",
- "method": "PUT",
- "body": "Posting from a timer",
- "headers": {
- "Content-Type": "application/json"
- },
- "retryPolicy": {
- "retryType": "None"
- }
- },
- "errorAction": {
- "type": "Http",
- "request": {
- "uri": "http://contoso.com/notifyError",
- "method": "POST"
- }
- }
- },
- "recurrence": {
- "frequency": "Week",
- "interval": 1,
- "schedule": {
- "weekDays": ["Monday", "Wednesday", "Friday"],
- "hours": [10, 22]
- },
- "count": 10,
- "endTime": "2012-11-04"
- },
- "state": "Disabled",
- "status": {
- "lastExecutionTime": "2007-03-01T13:00:00Z",
- "nextExecutionTime": "2007-03-01T14:00:00Z ",
- "executionCount": 3,
- "failureCount": 0,
- "faultedCount": 0
- }
-}
-```
-
-<a name="start-time"></a>
-
-## startTime
-
-In the **startTime** object, you can specify the start time and a time
-zone offset in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601).
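-
-For example, this sketch (with an illustrative value) sets a start time that is offset two hours ahead of UTC:
-
-```json
-"startTime": "2016-04-15T10:00:00+02:00"
-```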
-
-<a name="action"></a>
-
-## action
-
-Your Scheduler job runs a primary **action** based on the specified schedule.
-Scheduler supports HTTP, Storage queue, Service Bus queue, and Service Bus
-topic actions. If the primary **action** fails, Scheduler can run a
-secondary [**errorAction**](#error-action) that handles the error.
-The **action** object describes these elements:
-
-* The action's service type
-* The action's details
-* An alternative **errorAction**
-
-The previous example describes an HTTP action.
-Here's an example for a Storage queue action:
-
-```json
-"action": {
- "type": "storageQueue",
- "queueMessage": {
- "storageAccount": "myStorageAccount",
- "queueName": "myqueue",
- "sasToken": "TOKEN",
- "message": "My message body"
- }
-}
-```
-
-Here's an example for a Service Bus queue action:
-
-```json
-"action": {
- "type": "serviceBusQueue",
- "serviceBusQueueMessage": {
- "queueName": "q1",
- "namespace": "mySBNamespace",
- "transportType": "netMessaging", // Either netMessaging or AMQP
- "authentication": {
- "sasKeyName": "QPolicy",
- "type": "sharedAccessKey"
- },
- "message": "Some message",
- "brokeredMessageProperties": {},
- "customMessageProperties": {
- "appname": "FromScheduler"
- }
- }
-},
-```
-
-Here's an example for a Service Bus topic action:
-
-```json
-"action": {
- "type": "serviceBusTopic",
- "serviceBusTopicMessage": {
- "topicPath": "t1",
- "namespace": "mySBNamespace",
- "transportType": "netMessaging", // Either netMessaging or AMQP
- "authentication": {
- "sasKeyName": "QPolicy",
- "type": "sharedAccessKey"
- },
- "message": "Some message",
- "brokeredMessageProperties": {},
- "customMessageProperties": {
- "appname": "FromScheduler"
- }
- }
-},
-```
-
-For more information about Shared Access Signature (SAS) tokens, see
-[Authorize with Shared Access Signatures](../storage/common/storage-sas-overview.md).
-
-<a name="error-action"></a>
-
-## errorAction
-
-If your job's primary **action** fails, Scheduler can run
-an **errorAction** that handles the error. In the primary
-**action**, you can specify an **errorAction** object
-so Scheduler can call an error-handling endpoint or send a user notification.
-
-For example, if a disaster happens at the primary endpoint,
-you can use **errorAction** for calling a secondary endpoint,
-or for notifying an error handling endpoint.
-
-Just like the primary **action**, you can have the error action
-use simple or composite logic based on other actions.
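-
-For example, here's a sketch of an HTTP action whose **errorAction** logs failures to a Storage queue. It reuses the element shapes from the action examples above; the URI, storage account, and SAS token values are placeholders.
-
-```json
-"action": {
-  "type": "Http",
-  "request": {
-    "uri": "http://contoso.com/some-method",
-    "method": "PUT"
-  },
-  "errorAction": {
-    "type": "storageQueue",
-    "queueMessage": {
-      "storageAccount": "myStorageAccount",
-      "queueName": "myqueue",
-      "sasToken": "TOKEN",
-      "message": "Primary action failed"
-    }
-  }
-},
-```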
-
-<a name="recurrence"></a>
-
-## recurrence
-
-A job recurs if the job's JSON definition includes the **recurrence** object, for example:
-
-```json
-"recurrence": {
- "frequency": "Week",
- "interval": 1,
- "schedule": {
- "hours": [10, 22],
- "minutes": [0, 30],
- "weekDays": ["Monday", "Wednesday", "Friday"]
- },
- "count": 10,
- "endTime": "2012-11-04"
-},
-```
-
-| Property | Required | Value | Description |
-|-|-|-|-|
-| **frequency** | Yes, when **recurrence** is used | "Minute", "Hour", "Day", "Week", "Month", "Year" | The time unit between occurrences |
-| **interval** | No | 1 to 1000 inclusively | A positive integer that determines the number of time units between each occurrence based on **frequency** |
-| **schedule** | No | Varies | The details for more complex and advanced schedules. See **hours**, **minutes**, **weekDays**, **months**, and **monthDays** |
-| **hours** | No | 0 to 23 | An array with the hour marks for when to run the job |
-| **minutes** | No | 0 to 59 | An array with the minute marks for when to run the job |
-| **months** | No | 1 to 12 | An array with the months for when to run the job |
-| **monthDays** | No | Varies | An array with the days of the month for when to run the job |
-| **weekDays** | No | "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday" | An array with days of the week for when to run the job |
-| **count** | No | <*none*> | The number of recurrences. The default is to recur infinitely. If you use both **count** and **endTime**, the rule that finishes first is honored. |
-| **endTime** | No | <*none*> | The date and time for when to stop the recurrence. The default is to recur infinitely. If you use both **count** and **endTime**, the rule that finishes first is honored. |
-||||
-
-For more information about these elements, see
-[Build complex schedules and advanced recurrences](../scheduler/scheduler-advanced-complexity.md).
-
-<a name="retry-policy"></a>
-
-## retryPolicy
-
-In case a Scheduler job fails, you can set up a retry policy,
-which determines whether and how Scheduler retries the action. By default,
-Scheduler retries the job four more times at 30-second intervals.
-You can make this policy more or less aggressive. For example,
-this policy retries an action two times at daily intervals:
-
-```json
-"retryPolicy": {
- "retryType": "Fixed",
- "retryInterval": "PT1D",
- "retryCount": 2
-},
-```
-
-| Property | Required | Value | Description |
-|-|-|-|-|
-| **retryType** | Yes | **Fixed**, **None** | Determines whether you specify a retry policy (**Fixed**) or not (**None**). |
-| **retryInterval** | No | PT30S | Specifies the interval and frequency between retry attempts in [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). The minimum value is 15 seconds, while the maximum value is 18 months. |
-| **retryCount** | No | 4 | Specifies the number of retry attempts. The maximum value is 20. |
-||||
-
-For more information, see
-[High availability and reliability](../scheduler/scheduler-high-availability-reliability.md).
-
-<a name="status"></a>
-
-## state
-
-A job's state is either **Enabled**, **Disabled**,
-**Completed**, or **Faulted**, for example:
-
-`"state": "Disabled"`
-
-To change jobs to **Enabled** or **Disabled** state,
-you can use the PUT or PATCH operation on those jobs.
-However, if a job has **Completed** or **Faulted** state,
-you can't update the state, although you can perform
-the DELETE operation on the job. Scheduler deletes
-completed and faulted jobs after 60 days.
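-
-For example, here's a sketch of a PATCH request that re-enables a job. The URI shape follows the job management URI shown earlier; the names are placeholders.
-
-```json
-PATCH https://management.azure.com/subscriptions/<Azure-subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Scheduler/jobCollections/<job-collection>/jobs/<job-name>?api-version=2016-01-01
-
-{
-  "properties": {
-    "state": "Enabled"
-  }
-}
-```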
-
-<a name="status"></a>
-
-## status
-
-After a job starts, Scheduler returns information
-about the job's status through the **status** object.
-Only Scheduler controls this object, but you can find
-it inside the **job** object.
-Here's the information that a job's status includes:
-
-* Time for the previous execution, if any
-* Time for the next scheduled execution for jobs in progress
-* The number of job executions
-* The number of failures, if any
-* The number of faults, if any
-
-For example:
-
-```json
-"status": {
- "lastExecutionTime": "2007-03-01T13:00:00Z",
- "nextExecutionTime": "2007-03-01T14:00:00Z ",
- "executionCount": 3,
- "failureCount": 0,
- "faultedCount": 0
-}
-```
-
-## Next steps
-
-* [Build complex schedules and advanced recurrence](scheduler-advanced-complexity.md)
-* [Azure Scheduler REST API reference](/rest/api/scheduler)
-* [Azure Scheduler PowerShell cmdlets reference](scheduler-powershell-reference.md)
-* [Limits, quotas, default values, and error codes](scheduler-limits-defaults-errors.md)
scheduler Scheduler High Availability Reliability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/scheduler/scheduler-high-availability-reliability.md
- Title: High availability and reliability
-description: Learn about high availability and reliability in Azure Scheduler.
- Previously updated : 02/15/2022
-# High availability and reliability for Azure Scheduler
-
-> [!IMPORTANT]
-> [Azure Logic Apps](../logic-apps/logic-apps-overview.md) has replaced Azure Scheduler, which was fully retired
-> on January 31, 2022. Please migrate your Azure Scheduler jobs by recreating them as workflows in Azure Logic Apps
-> following the steps in [Migrate Azure Scheduler jobs to Azure Logic Apps](migrate-from-scheduler-to-logic-apps.md).
-> Azure Scheduler is no longer available in the Azure portal. The [Azure Scheduler REST API](/rest/api/scheduler) and
-> [Azure Scheduler PowerShell cmdlets](scheduler-powershell-reference.md) no longer work.
-
-Azure Scheduler provides both [high availability](/azure/architecture/framework/#resiliency) and reliability for your jobs. For more information, see [SLA for Scheduler](https://azure.microsoft.com/support/legal/sla/scheduler).
-
-## High availability
-
-Azure Scheduler is highly available
-and uses both geo-redundant service deployment and geo-regional job replication.
-
-### Geo-redundant service deployment
-
-Azure Scheduler is available across almost [every geographical region supported by Azure today](https://azure.microsoft.com/global-infrastructure/regions/#services). So, if an Azure datacenter in a hosted region becomes unavailable, you can still use Azure Scheduler because the service's failover capabilities make Scheduler available from another datacenter.
-
-### Geo-regional job replication
-
-Your own jobs in Azure Scheduler are replicated across Azure regions.
-So if one region has an outage, Azure Scheduler fails over and makes sure
-that your job runs from another datacenter in the paired geographic region.
-
-For example, if you create a job in South Central US,
-Azure Scheduler automatically replicates that job in
-North Central US. If a failure happens in South Central US,
-Azure Scheduler runs the job in North Central US.
-
-![Geo-regional job replication](./media/scheduler-high-availability-reliability/scheduler-high-availability-reliability-image1.png)
-
-Azure Scheduler also makes sure that your data stays within the same broader
-geographic region, in case a failure happens in Azure. So, you don't
-have to duplicate your jobs when you just want high availability;
-Azure Scheduler automatically provides high availability for your jobs.
-
-## Reliability
-
-Azure Scheduler guarantees its own high availability but
-takes a different approach to user-created jobs. For example,
-suppose your job invokes an HTTP endpoint that's unavailable.
-Azure Scheduler still tries to run your job successfully by
-giving you alternative ways for handling failures:
-
-* Set up retry policies.
-* Set up alternate endpoints.
-
-<a name="retry-policies"></a>
-
-### Retry policies
-
-Azure Scheduler lets you set up retry policies. If a job fails,
-then by default, Scheduler retries the job four more times at
-30-second intervals. You can make this retry policy more aggressive,
-such as 10 times at 30-second intervals, or less aggressive,
-such as two times at daily intervals.
-
-For example, suppose you create a weekly job that calls an HTTP endpoint.
-If the HTTP endpoint becomes unavailable for a few hours when your job runs,
-you might not want to wait another week for the job to run again. In this case,
-the default retry policy doesn't help because all four 30-second retries
-are exhausted within minutes. So, you might want to change the standard retry
-policy so that retries happen, for example, every three hours rather than every 30 seconds.
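-
-As a sketch, such a three-hour fixed policy might look like the following fragment, where the retry count is illustrative:
-
-```json
-"retryPolicy": {
-  "retryType": "Fixed",
-  "retryInterval": "PT3H",
-  "retryCount": 4
-},
-```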
-
-To learn how to set up a retry policy, see
-[retryPolicy](scheduler-concepts-terms.md#retrypolicy).
-
-### Alternate endpoints
-
-If your Azure Scheduler job calls an endpoint that stays unreachable,
-even after Scheduler follows the retry policy, Scheduler falls back to an
-alternate endpoint that can handle such errors. If you set up
-this endpoint, Scheduler calls that endpoint, which makes your
-own jobs highly available when failures happen.
-
-For example, this diagram shows how Scheduler follows the retry
-policy when calling a web service in New York. If the retries fail,
-Scheduler checks for an alternate endpoint. If the endpoint exists,
-Scheduler starts sending requests to the alternate endpoint.
-The same retry policy applies to both the original action and
-the alternate action.
-
-![Scheduler behavior with retry policy and alternate endpoint](./media/scheduler-high-availability-reliability/scheduler-high-availability-reliability-image2.png)
-
-The action type for the alternate action can differ from the original action.
-For example, although the original action calls an HTTP endpoint,
-the alternate action might log errors by using a Storage queue,
-Service Bus queue, or Service Bus topic action.
-
-To learn how to set up an alternate endpoint, see
-[errorAction](scheduler-concepts-terms.md#error-action).
-
-## Next steps
-
-* [Concepts, terminology, and entity hierarchy](scheduler-concepts-terms.md)
-* [Azure Scheduler REST API reference](/rest/api/scheduler)
-* [Azure Scheduler PowerShell cmdlets reference](scheduler-powershell-reference.md)
-* [Limits, quotas, default values, and error codes](scheduler-limits-defaults-errors.md)
scheduler Scheduler Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/scheduler/scheduler-intro.md
- Title: What is Azure Scheduler?
-description: Create, schedule, and run automated jobs that call services inside or outside Azure.
- Previously updated : 02/15/2022
-# What is Azure Scheduler?
-
-> [!IMPORTANT]
-> [Azure Logic Apps](../logic-apps/logic-apps-overview.md) has replaced Azure Scheduler, which was fully retired
-> on January 31, 2022. Please migrate your Azure Scheduler jobs by recreating them as workflows in Azure Logic Apps
-> following the steps in [Migrate Azure Scheduler jobs to Azure Logic Apps](migrate-from-scheduler-to-logic-apps.md).
-> Azure Scheduler is no longer available in the Azure portal. The [Azure Scheduler REST API](/rest/api/scheduler) and
-> [Azure Scheduler PowerShell cmdlets](scheduler-powershell-reference.md) no longer work.
-
-[Azure Scheduler](https://azure.microsoft.com/services/scheduler/) helps you create [jobs](../scheduler/scheduler-concepts-terms.md) that run in the cloud by declaratively describing actions. The service then automatically schedules and runs those actions. For example, you can call services inside and outside Azure, such as calling HTTP or HTTPS endpoints, and also post messages to Azure Storage queues and Azure Service Bus queues or topics. You can run jobs immediately or at a later time. Scheduler easily supports [complex schedules and advanced recurrence](../scheduler/scheduler-advanced-complexity.md). Scheduler specifies when to run jobs, keeps a history of job results that you can review, and then predictably and reliably schedules workloads to run.
-
-Other Azure scheduling capabilities also use Scheduler in the background, for example, [Azure WebJobs](../app-service/webjobs-create.md), which is a [Web Apps](https://azure.microsoft.com/services/app-service/web/) feature in Azure App Service. You can manage communication for these actions by using the [Scheduler REST API](/rest/api/scheduler/).
-
-Here are some scenarios where Scheduler can help you:
-
-* Run recurring app actions: For example, periodically collect data from Twitter into a feed.
-
-* Perform daily maintenance: Prune logs, perform backups, and complete other maintenance tasks.
-
- For example, as an administrator, you might want to back up your database at 1:00 AM every day for the next nine months.
-
-Although you can use Scheduler to create, maintain, and run scheduled workloads, Scheduler doesn't host the workloads or run code. The service only *invokes* the services or code hosted elsewhere, for example, in Azure, on-premises, or with another provider. Scheduler can invoke through HTTP, HTTPS, a Storage queue, a Service Bus queue, or a Service Bus topic.
-
-To create, schedule, manage, update, or delete jobs and [job collections](../scheduler/scheduler-concepts-terms.md), you can use code, the [Scheduler REST API](/rest/api/scheduler/), or the [Azure Scheduler PowerShell cmdlets](scheduler-powershell-reference.md).
-
-## Next steps
-
-* [Azure Scheduler concepts, terminology, and entity hierarchy](scheduler-concepts-terms.md)
-* [Plans and billing for Azure Scheduler](scheduler-plans-billing.md)
-* [Build complex schedules and advanced recurrence with Azure Scheduler](scheduler-advanced-complexity.md)
-* [Azure Scheduler REST API reference](/rest/api/scheduler)
-* [Azure Scheduler PowerShell cmdlets reference](scheduler-powershell-reference.md)
scheduler Scheduler Limits Defaults Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/scheduler/scheduler-limits-defaults-errors.md
- Title: Limits, quotas, and thresholds in Azure Scheduler
-description: Learn about limits, quotas, default values, and throttle thresholds for Azure Scheduler.
- Previously updated : 02/15/2022
-# Limits, quotas, and throttle thresholds in Azure Scheduler
-
-> [!IMPORTANT]
-> [Azure Logic Apps](../logic-apps/logic-apps-overview.md) has replaced Azure Scheduler, which was fully retired
-> on January 31, 2022. Please migrate your Azure Scheduler jobs by recreating them as workflows in Azure Logic Apps
-> following the steps in [Migrate Azure Scheduler jobs to Azure Logic Apps](migrate-from-scheduler-to-logic-apps.md).
-> Azure Scheduler is no longer available in the Azure portal. The [Azure Scheduler REST API](/rest/api/scheduler) and
-> [Azure Scheduler PowerShell cmdlets](scheduler-powershell-reference.md) no longer work.
-
-## Limits, quotas, and thresholds
-
-## x-ms-request-id header
-
-Every request made against the Scheduler service
-returns a response header named **x-ms-request-id**.
-This header contains an opaque value that uniquely
-identifies the request. So, if a request consistently fails,
-and you confirmed the request is properly formatted,
-you can report the error to Microsoft by providing the
-**x-ms-request-id** response header value and including these details:
-
-* The **x-ms-request-id** value
-* The approximate time when the request was made
-* The identifiers for the Azure subscription, job collection, and job
-* The type of operation that the request attempted
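-
-For reference, the sample REST responses in the outbound authentication article include this header, for example:
-
-```text
-x-ms-request-id: 56c7b40e-721a-437e-88e6-f68562a73aa8
-```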
-
-## Next steps
-
-* [Azure Scheduler concepts, terminology, and entity hierarchy](scheduler-concepts-terms.md)
-* [Plans and billing for Azure Scheduler](scheduler-plans-billing.md)
-* [Azure Scheduler REST API reference](/rest/api/scheduler)
-* [Azure Scheduler PowerShell cmdlets reference](scheduler-powershell-reference.md)
scheduler Scheduler Outbound Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/scheduler/scheduler-outbound-authentication.md
- Title: Outbound authentication
-description: Learn how to set up or remove outbound authentication for Azure Scheduler.
- Previously updated : 02/15/2022
-# Outbound authentication for Azure Scheduler
-
-> [!IMPORTANT]
-> [Azure Logic Apps](../logic-apps/logic-apps-overview.md) has replaced Azure Scheduler, which was fully retired
-> on January 31, 2022. Please migrate your Azure Scheduler jobs by recreating them as workflows in Azure Logic Apps
-> following the steps in [Migrate Azure Scheduler jobs to Azure Logic Apps](migrate-from-scheduler-to-logic-apps.md).
-> Azure Scheduler is no longer available in the Azure portal. The [Azure Scheduler REST API](/rest/api/scheduler) and
-> [Azure Scheduler PowerShell cmdlets](scheduler-powershell-reference.md) no longer work.
-
-Azure Scheduler jobs might have to call services that require authentication,
-such as other Azure services, Salesforce.com, Facebook, and secure custom websites.
-The called service can determine whether the Scheduler job can access the requested resources.
-
-Scheduler supports these authentication models:
-
-* *Client certificate* authentication when using SSL/TLS client certificates
-* *Basic* authentication
-* *Active Directory OAuth* authentication
-
-## Add or remove authentication
-
-* To add authentication to a Scheduler job, when you create or update the job,
-add the `authentication` JavaScript Object Notation (JSON) child element
-to the `request` element.
-
- Responses never return secrets that are passed to the Scheduler service
- through a PUT, PATCH, or POST request in the `authentication` object.
- Responses set secret information to null or might use a public token
- that represents the authenticated entity.
-
-* To remove authentication from a Scheduler job,
-explicitly run a PUT or PATCH request on the job,
-and set the `authentication` object to null.
-The response won't contain any authentication properties.
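-
-For example, here's a sketch of a request body fragment that removes authentication by setting the `authentication` object to null:
-
-```json
-{
-  "properties": {
-    "action": {
-      "request": {
-        "authentication": null
-      }
-    }
-  }
-}
-```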
-
-## Client certificate
-
-### Request body - Client certificate
-
-When adding authentication using the `ClientCertificate` model,
-specify these additional elements in the request body.
-
-| Element | Required | Description |
-|---|---|---|
-| **authentication** (parent element) | Yes | The authentication object for using an SSL/TLS client certificate |
-| **type** | Yes | The authentication type. For SSL/TLS client certificates, the value is `ClientCertificate`. |
-| **pfx** | Yes | The base64-encoded contents of the PFX file |
-| **password** | Yes | The password for accessing the PFX file |
-|||
-
-### Response body - Client certificate
-
-When a request is sent with authentication information,
-the response contains these authentication elements.
-
-| Element | Description |
-|---|---|
-| **authentication** (parent element) | The authentication object for using an SSL/TLS client certificate |
-| **type** | The authentication type. For SSL/TLS client certificates, the value is `ClientCertificate`. |
-| **certificateThumbprint** |The certificate's thumbprint |
-| **certificateSubjectName** |The certificate subject distinguished name |
-| **certificateExpiration** | The certificate's expiration date |
-|||
-
-### Sample REST request - Client certificate
-
-```json
-PUT https://management.azure.com/subscriptions/<Azure-subscription-ID>/resourceGroups/CS-SoutheastAsia-scheduler/providers/Microsoft.Scheduler/jobcollections/southeastasiajc/jobs/httpjob?api-version=2016-01-01 HTTP/1.1
-User-Agent: Fiddler
-Host: management.azure.com
-Authorization: Bearer sometoken
-Content-Type: application/json; charset=utf-8
-
-{
- "properties": {
- "startTime": "2015-05-14T14:10:00Z",
- "action": {
- "request": {
- "uri": "https://mywebserviceendpoint.com",
- "method": "GET",
- "headers": {
- "x-ms-version": "2013-03-01"
- },
- "authentication": {
- "type": "clientcertificate",
- "password": "password",
- "pfx": "pfx key"
- }
- },
- "type": "http"
- },
- "recurrence": {
- "frequency": "minute",
- "endTime": "2016-04-10T08:00:00Z",
- "interval": 1
- },
- "state": "enabled"
- }
-}
-```
-
-### Sample REST response - Client certificate
-
-```json
-HTTP/1.1 200 OK
-Cache-Control: no-cache
-Pragma: no-cache
-Content-Length: 858
-Content-Type: application/json; charset=utf-8
-Expires: -1
-x-ms-request-id: 56c7b40e-721a-437e-88e6-f68562a73aa8
-Server: Microsoft-IIS/8.5
-X-AspNet-Version: 4.0.30319
-X-Powered-By: ASP.NET
-x-ms-ratelimit-remaining-subscription-resource-requests: 599
-x-ms-correlation-request-id: 1075219e-e879-4030-bc81-094e54fbabce
-x-ms-routing-request-id: WESTUS:20160316T190424Z:1075219e-e879-4030-bc81-094e54fbabce
-Strict-Transport-Security: max-age=31536000; includeSubDomains
-Date: Wed, 16 Mar 2016 19:04:23 GMT
-
-{
- "id": "/subscriptions/<Azure-subscription-ID>/resourceGroups/CS-SoutheastAsia-scheduler/providers/Microsoft.Scheduler/jobCollections/southeastasiajc/jobs/httpjob",
- "type": "Microsoft.Scheduler/jobCollections/jobs",
- "name": "southeastasiajc/httpjob",
- "properties": {
- "startTime": "2015-05-14T14:10:00Z",
- "action": {
- "request": {
- "uri": "https://mywebserviceendpoint.com",
- "method": "GET",
- "headers": {
- "x-ms-version": "2013-03-01"
- },
- "authentication": {
- "certificateThumbprint": "88105CG9DF9ADE75B835711D899296CB217D7055",
- "certificateExpiration": "2021-01-01T07:00:00Z",
- "certificateSubjectName": "CN=Scheduler Mgmt",
- "type": "ClientCertificate"
- }
- },
- "type": "http"
- },
- "recurrence": {
- "frequency": "minute",
- "endTime": "2016-04-10T08:00:00Z",
- "interval": 1
- },
- "state": "enabled",
- "status": {
- "nextExecutionTime": "2016-03-16T19:05:00Z",
- "executionCount": 0,
- "failureCount": 0,
- "faultedCount": 0
- }
- }
-}
-```
-
-## Basic
-
-### Request body - Basic
-
-When adding authentication using the `Basic` model,
-specify these additional elements in the request body.
-
-| Element | Required | Description |
-|---|---|---|
-| **authentication** (parent element) | Yes | The authentication object for using Basic authentication |
-| **type** | Yes | The authentication type. For Basic authentication, the value is `Basic`. |
-| **username** | Yes | The username to authenticate |
-| **password** | Yes | The password to authenticate |
-||||
-
-### Response body - Basic
-
-When a request is sent with authentication information,
-the response contains these authentication elements.
-
-| Element | Description |
-|---|---|
-| **authentication** (parent element) | The authentication object for using Basic authentication |
-| **type** | The authentication type. For basic authentication, the value is `Basic`. |
-| **username** | The authenticated username |
-|||
-
-### Sample REST request - Basic
-
-```json
-PUT https://management.azure.com/subscriptions/<Azure-subscription-ID>/resourceGroups/CS-SoutheastAsia-scheduler/providers/Microsoft.Scheduler/jobcollections/southeastasiajc/jobs/httpjob?api-version=2016-01-01 HTTP/1.1
-User-Agent: Fiddler
-Host: management.azure.com
-Authorization: Bearer sometoken
-Content-Length: 562
-Content-Type: application/json; charset=utf-8
-
-{
- "properties": {
- "startTime": "2015-05-14T14:10:00Z",
- "action": {
- "request": {
- "uri": "https://mywebserviceendpoint.com",
- "method": "GET",
- "headers": {
- "x-ms-version": "2013-03-01"
- },
- "authentication": {
- "type": "basic",
- "username": "user",
- "password": "password"
- }
- },
- "type": "http"
- },
- "recurrence": {
- "frequency": "minute",
- "endTime": "2016-04-10T08:00:00Z",
- "interval": 1
- },
- "state": "enabled"
- }
-}
-```
-
-### Sample REST response - Basic
-
-```json
-HTTP/1.1 200 OK
-Cache-Control: no-cache
-Pragma: no-cache
-Content-Length: 701
-Content-Type: application/json; charset=utf-8
-Expires: -1
-x-ms-request-id: a2dcb9cd-1aea-4887-8893-d81273a8cf04
-Server: Microsoft-IIS/8.5
-X-AspNet-Version: 4.0.30319
-X-Powered-By: ASP.NET
-x-ms-ratelimit-remaining-subscription-resource-requests: 599
-x-ms-correlation-request-id: 7816f222-6ea7-468d-b919-e6ddebbd7e95
-x-ms-routing-request-id: WESTUS:20160316T190506Z:7816f222-6ea7-468d-b919-e6ddebbd7e95
-Strict-Transport-Security: max-age=31536000; includeSubDomains
-Date: Wed, 16 Mar 2016 19:05:06 GMT
-
-{
- "id":"/subscriptions/<Azure-subscription-ID>/resourceGroups/CS-SoutheastAsia-scheduler/providers/Microsoft.Scheduler/jobCollections/southeastasiajc/jobs/httpjob",
- "type":"Microsoft.Scheduler/jobCollections/jobs",
- "name":"southeastasiajc/httpjob",
- "properties":{
- "startTime":"2015-05-14T14:10:00Z",
- "action":{
- "request":{
- "uri":"https://mywebserviceendpoint.com",
- "method":"GET",
- "headers":{
- "x-ms-version":"2013-03-01"
- },
- "authentication":{
- "username":"user1",
- "type":"Basic"
- }
- },
- "type":"Http"
- },
- "recurrence":{
- "frequency":"Minute",
- "endTime":"2016-04-10T08:00:00Z",
- "interval":1
- },
- "state":"Enabled",
- "status":{
- "nextExecutionTime":"2016-03-16T19:06:00Z",
- "executionCount":0,
- "failureCount":0,
- "faultedCount":0
- }
- }
-}
-```
-
-## Active Directory OAuth
-
-### Request body - Active Directory OAuth
-
-When adding authentication using the `ActiveDirectoryOAuth` model,
-specify these additional elements in the request body.
-
-| Element | Required | Description |
-|---|---|---|
-| **authentication** (parent element) | Yes | The authentication object for using ActiveDirectoryOAuth authentication |
-| **type** | Yes | The authentication type. For ActiveDirectoryOAuth authentication, the value is `ActiveDirectoryOAuth`. |
-| **tenant** | Yes | The tenant identifier for the Azure AD tenant. To find the tenant identifier for the Azure AD tenant, run `Get-AzureAccount` in Azure PowerShell. |
-| **audience** | Yes | This value is set to `https://management.core.windows.net/`. |
-| **clientId** | Yes | The client identifier for the Azure AD application |
-| **secret** | Yes | The secret for the client that is requesting the token |
-||||
-
-### Response body - Active Directory OAuth
-
-When a request is sent with authentication information,
-the response contains these authentication elements.
-
-| Element | Description |
-|---|---|
-| **authentication** (parent element) | The authentication object for using ActiveDirectoryOAuth authentication |
-| **type** | The authentication type. For ActiveDirectoryOAuth authentication, the value is `ActiveDirectoryOAuth`. |
-| **tenant** | The tenant identifier for the Azure AD tenant |
-| **audience** | This value is set to `https://management.core.windows.net/`. |
-| **clientId** | The client identifier for the Azure AD application |
-|||
-
-### Sample REST request - Active Directory OAuth
-
-```json
-PUT https://management.azure.com/subscriptions/<Azure-subscription-ID>/resourceGroups/CS-SoutheastAsia-scheduler/providers/Microsoft.Scheduler/jobcollections/southeastasiajc/jobs/httpjob?api-version=2016-01-01 HTTP/1.1
-User-Agent: Fiddler
-Host: management.azure.com
-Authorization: Bearer sometoken
-Content-Length: 757
-Content-Type: application/json; charset=utf-8
-
-{
- "properties": {
- "startTime": "2015-05-14T14:10:00Z",
- "action": {
- "request": {
- "uri": "https://mywebserviceendpoint.com",
- "method": "GET",
- "headers": {
- "x-ms-version": "2013-03-01"
- },
- "authentication": {
- "tenant":"microsoft.onmicrosoft.com",
- "audience":"https://management.core.windows.net/",
- "clientId":"dc23e764-9be6-4a33-9b9a-c46e36f0c137",
- "secret": "G6u071r8Gjw4V4KSibnb+VK4+tX399hkHaj7LOyHuj5=",
- "type":"ActiveDirectoryOAuth"
- }
- },
- "type": "Http"
- },
- "recurrence": {
- "frequency": "Minute",
- "endTime": "2016-04-10T08:00:00Z",
- "interval": 1
- },
- "state": "Enabled"
- }
-}
-```
-
-### Sample REST response - Active Directory OAuth
-
-```json
-HTTP/1.1 200 OK
-Cache-Control: no-cache
-Pragma: no-cache
-Content-Length: 885
-Content-Type: application/json; charset=utf-8
-Expires: -1
-x-ms-request-id: 86d8e9fd-ac0d-4bed-9420-9baba1af3251
-Server: Microsoft-IIS/8.5
-X-AspNet-Version: 4.0.30319
-X-Powered-By: ASP.NET
-x-ms-ratelimit-remaining-subscription-resource-requests: 599
-x-ms-correlation-request-id: 5183bbf4-9fa1-44bb-98c6-6872e3f2e7ce
-x-ms-routing-request-id: WESTUS:20160316T191003Z:5183bbf4-9fa1-44bb-98c6-6872e3f2e7ce
-Strict-Transport-Security: max-age=31536000; includeSubDomains
-Date: Wed, 16 Mar 2016 19:10:02 GMT
-
-{
- "id": "/subscriptions/<Azure-subscription-ID>/resourceGroups/CS-SoutheastAsia-scheduler/providers/Microsoft.Scheduler/jobCollections/southeastasiajc/jobs/httpjob",
- "type": "Microsoft.Scheduler/jobCollections/jobs",
- "name": "southeastasiajc/httpjob",
- "properties": {
- "startTime": "2015-05-14T14:10:00Z",
- "action": {
- "request": {
- "uri": "https://mywebserviceendpoint.com",
- "method": "GET",
- "headers": {
- "x-ms-version": "2013-03-01"
- },
- "authentication": {
- "tenant": "microsoft.onmicrosoft.com",
- "audience": "https://management.core.windows.net/",
- "clientId": "dc23e764-9be6-4a33-9b9a-c46e36f0c137",
- "type": "ActiveDirectoryOAuth"
- }
- },
- "type": "Http"
- },
- "recurrence": {
- "frequency": "minute",
- "endTime": "2016-04-10T08:00:00Z",
- "interval": 1
- },
- "state": "Enabled",
- "status": {
- "lastExecutionTime": "2016-03-16T19:10:00.3762123Z",
- "nextExecutionTime": "2016-03-16T19:11:00Z",
- "executionCount": 5,
- "failureCount": 5,
- "faultedCount": 1
- }
- }
-}
-```
-
-## Next steps
-
-* [Azure Scheduler concepts, terminology, and entity hierarchy](scheduler-concepts-terms.md)
-* [Azure Scheduler limits, defaults, and error codes](scheduler-limits-defaults-errors.md)
-* [Azure Scheduler REST API reference](/rest/api/scheduler)
-* [Azure Scheduler PowerShell cmdlets reference](scheduler-powershell-reference.md)
scheduler Scheduler Plans Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/scheduler/scheduler-plans-billing.md
- Title: Plans and billing
-description: Learn about plans and billing for Azure Scheduler.
- Previously updated : 02/15/2022
-# Plans and billing for Azure Scheduler
-
-> [!IMPORTANT]
-> [Azure Logic Apps](../logic-apps/logic-apps-overview.md) has replaced Azure Scheduler, which was fully retired
-> on January 31, 2022. Please migrate your Azure Scheduler jobs by recreating them as workflows in Azure Logic Apps
-> following the steps in [Migrate Azure Scheduler jobs to Azure Logic Apps](migrate-from-scheduler-to-logic-apps.md).
-> Azure Scheduler is no longer available in the Azure portal. The [Azure Scheduler REST API](/rest/api/scheduler) and
-> [Azure Scheduler PowerShell cmdlets](scheduler-powershell-reference.md) no longer work.
-
-## Job collection plans
-
-In Azure Scheduler, a job collection contains a specific number of jobs. The job collection is the billable entity and comes in Standard, P10 Premium, and P20 Premium plans, which are described here:
-
-| Job collection plan | Max jobs per collection | Max recurrence | Max job collections per subscription | Limits |
-|:--- |:--- |:--- |:--- |:--- |
-| **Standard** | 50 jobs per collection | Once per minute. Jobs can't run more often than once per minute. | Each Azure subscription can have up to 100 Standard job collections. | Access to the full Scheduler feature set |
-| **P10 Premium** | 50 jobs per collection | Once per minute. Jobs can't run more often than once per minute. | Each Azure subscription can have up to 10,000 P10 Premium job collections. For more collections, <a href="mailto:wapteams@microsoft.com">contact us</a>. | Access to the full Scheduler feature set |
-| **P20 Premium** | 1,000 jobs per collection | Once per minute. Jobs can't run more often than once per minute. | Each Azure subscription can have up to 5,000 P20 Premium job collections. For more collections, <a href="mailto:wapteams@microsoft.com">contact us</a>. | Access to the full Scheduler feature set |
-||||||
-
-## Pricing
-
-For pricing details, see [Scheduler Pricing](https://azure.microsoft.com/pricing/details/scheduler/).
-
-## Upgrade or downgrade plans
-
-At any time, you can upgrade or downgrade a job collection
-plan across the Standard, P10 Premium, and P20 Premium plans.
-
-## Active status and billing
-
-Job collections are always active unless your entire Azure subscription
-goes into a temporary disabled state due to billing issues. And although
-you can disable all jobs in a job collection through a single operation,
-this action doesn't change the job collection's billing status, so the job
-collection is *still* billed. Empty job collections are considered active
-and are billed.
-
-To make sure a job collection isn't billed, you must delete the job collection.
-
-## Standard billable units
-
-A standard billable unit can have up to 10 Standard job collections.
-Since a Standard job collection can have up to 50 jobs per collection,
-one standard billing unit lets your Azure subscription have up to 500 jobs,
-or up to almost 22 *million* job executions per month
-(500 jobs × 1 run per minute × ~43,200 minutes per month ≈ 21.6 million runs). This list explains
-how you're billed based on various numbers of Standard job collections:
-
-* If you have between 1 and 10 Standard job collections,
-you're billed for one standard billing unit.
-
-* If you have between 11 and 20 Standard job collections,
-you're billed for two standard billing units.
-
-* If you have between 21 and 30 Standard job collections,
-you're billed for three standard billing units, and so on.
-
-## P10 premium billable units
-
-A P10 premium billable unit can have up to 10,000 P10 Premium job collections.
-Since a P10 Premium job collection can have up to 50 jobs per collection,
-one P10 premium billing unit lets your Azure subscription have up to 500,000 jobs,
-or up to almost 22 *billion* job executions per month.
-
-P10 Premium job collections provide the same capabilities
-as Standard job collections but offer a price break for apps
-that require many job collections and provide more scalability.
-This list explains how you're billed based on various
-numbers of P10 Premium job collections:
-
-* If you have between 1 and 10,000 P10 Premium job collections,
-you're billed for one P10 premium billing unit.
-
-* If you have between 10,001 and 20,000 P10 Premium job collections,
-you're billed for 2 P10 premium billing units, and so on.
-
-## P20 premium billable units
-
-A P20 premium billable unit can have up to 5,000 P20 Premium job collections.
-Since a P20 Premium job collection can have up to 1,000 jobs per job collection,
-one P20 premium billing unit lets your Azure subscription have up to 5,000,000 jobs,
-or up to almost 220 *billion* job executions per month.
-
-P20 Premium job collections provide the same capabilities
-as P10 Premium job collections but also support a greater
-number of jobs per collection and a greater total number
-of jobs overall than P10 Premium, providing more scalability.
-
-## Plan comparison
-
-* If you have more than 100 Standard job collections (10 standard billing units),
-then you can get a better deal by having all job collections in a Premium plan.
-
-* If you have one Standard job collection and one Premium job collection,
-then you're billed for one standard billing unit *and* one premium billing unit.
-
- The Scheduler service bills based on the number of active
- job collections that are either standard or premium.
-
-## Next steps
-
-* [Azure Scheduler concepts, terminology, and entity hierarchy](scheduler-concepts-terms.md)
-* [Azure Scheduler limits, defaults, and error codes](scheduler-limits-defaults-errors.md)
scheduler Scheduler Powershell Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/scheduler/scheduler-powershell-reference.md
- Title: PowerShell cmdlets reference
-description: Learn about PowerShell cmdlets for Azure Scheduler.
- Previously updated : 02/15/2022
-# PowerShell cmdlets reference for Azure Scheduler
-
-> [!IMPORTANT]
-> [Azure Logic Apps](../logic-apps/logic-apps-overview.md) has replaced Azure Scheduler, which was fully retired
-> on January 31, 2022. Please migrate your Azure Scheduler jobs by recreating them as workflows in Azure Logic Apps
-> following the steps in [Migrate Azure Scheduler jobs to Azure Logic Apps](migrate-from-scheduler-to-logic-apps.md).
-> Azure Scheduler is no longer available in the Azure portal. The [Azure Scheduler REST API](/rest/api/scheduler) and
-> [Azure Scheduler PowerShell cmdlets](scheduler-powershell-reference.md) no longer work.
-
-To author scripts for creating and managing Scheduler jobs and job collections, you can use PowerShell cmdlets. This article lists the major PowerShell cmdlets for Azure Scheduler with links to their reference articles. To install Azure PowerShell for your Azure subscription, see [How to install and configure Azure PowerShell](/powershell/azure/). For more information about [Azure Resource Manager cmdlets](/powershell/azure/),
-see [Using Azure PowerShell with Azure Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md).
-
-| Cmdlet | Description |
-|--|-|
-| [Disable-AzSchedulerJobCollection](/powershell/module/azurerm.scheduler/disable-azurermschedulerjobcollection) |Disables a job collection. |
-| [Enable-AzSchedulerJobCollection](/powershell/module/azurerm.scheduler/enable-azurermschedulerjobcollection) |Enables a job collection. |
-| [Get-AzSchedulerJob](/powershell/module/azurerm.scheduler/get-azurermschedulerjob) |Gets Scheduler jobs. |
-| [Get-AzSchedulerJobCollection](/powershell/module/azurerm.scheduler/get-azurermschedulerjobcollection) |Gets job collections. |
-| [Get-AzSchedulerJobHistory](/powershell/module/azurerm.scheduler/get-azurermschedulerjobhistory) |Gets job history. |
-| [New-AzSchedulerHttpJob](/powershell/module/azurerm.scheduler/new-azurermschedulerhttpjob) |Creates an HTTP job. |
-| [New-AzSchedulerJobCollection](/powershell/module/azurerm.scheduler/new-azurermschedulerjobcollection) |Creates a job collection. |
-| [New-AzSchedulerServiceBusQueueJob](/powershell/module/azurerm.scheduler/new-azurermschedulerservicebusqueuejob) | Creates a Service Bus queue job. |
-| [New-AzSchedulerServiceBusTopicJob](/powershell/module/azurerm.scheduler/new-azurermschedulerservicebustopicjob) |Creates a Service Bus topic job. |
-| [New-AzSchedulerStorageQueueJob](/powershell/module/azurerm.scheduler/new-azurermschedulerstoragequeuejob) |Creates a Storage queue job. |
-| [Remove-AzSchedulerJob](/powershell/module/azurerm.scheduler/remove-azurermschedulerjob) |Removes a Scheduler job. |
-| [Remove-AzSchedulerJobCollection](/powershell/module/azurerm.scheduler/remove-azurermschedulerjobcollection) |Removes a job collection. |
-| [Set-AzSchedulerHttpJob](/powershell/module/azurerm.scheduler/set-azurermschedulerhttpjob) |Modifies a Scheduler HTTP job. |
-| [Set-AzSchedulerJobCollection](/powershell/module/azurerm.scheduler/set-azurermschedulerjobcollection) |Modifies a job collection. |
-| [Set-AzSchedulerServiceBusQueueJob](/powershell/module/azurerm.scheduler/set-azurermschedulerservicebusqueuejob) |Modifies a Service Bus queue job. |
-| [Set-AzSchedulerServiceBusTopicJob](/powershell/module/azurerm.scheduler/set-azurermschedulerservicebustopicjob) |Modifies a Service Bus topic job. |
-| [Set-AzSchedulerStorageQueueJob](/powershell/module/azurerm.scheduler/set-azurermschedulerstoragequeuejob) |Modifies a Storage queue job. |
-|||
-
-For more details about any of these cmdlets, run the following commands:
-
-```text
-Get-Help <cmdlet name> -Detailed
-Get-Help <cmdlet name> -Examples
-Get-Help <cmdlet name> -Full
-```
-
-## Next steps
-
-* [Azure Scheduler concepts, terminology, and entity hierarchy](scheduler-concepts-terms.md)
-* [Azure Scheduler limits, defaults, and error codes](scheduler-limits-defaults-errors.md)
-* [Azure Scheduler REST API reference](/rest/api/scheduler)
search Cognitive Search Concept Annotations Syntax https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-annotations-syntax.md
Notice that the cardinality of `"/document/people/*/lastname"` is larger than th
## See also

++ [Skill context and input annotation language](cognitive-search-skill-annotation-language.md)
+ [How to integrate a custom skill into an enrichment pipeline](cognitive-search-custom-skill-interface.md)
+ [How to define a skillset](cognitive-search-defining-skillset.md)
+ [Create Skillset (REST)](/rest/api/searchservice/create-skillset)
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-defining-skillset.md
Inside the skillset definition, the skills array specifies which skills to execu
```

> [!NOTE]
-> You can build complex skillsets with looping and branching using the [Conditional skill](cognitive-search-skill-conditional.md) to create the expressions. The syntax is based on the [JSON Pointer](https://tools.ietf.org/html/rfc6901) path notation, with a few modifications to identify nodes in the enrichment tree. A `"/"` traverses a level lower in the tree and `"*"` acts as a for-each operator in the context. Numerous examples in this article illustrate the syntax.
+> You can build complex skillsets with looping and branching using the [Conditional skill](cognitive-search-skill-conditional.md) to create the expressions. The syntax is based on the [JSON Pointer](https://tools.ietf.org/html/rfc6901) path notation, with a few modifications to identify nodes in the enrichment tree. A `"/"` traverses a level lower in the tree and `"*"` acts as a for-each operator in the context. Numerous examples in this article illustrate [the syntax](cognitive-search-skill-annotation-language.md).
### How built-in skills are structured
search Cognitive Search Skill Annotation Language https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-annotation-language.md
+
+ Title: Skill context and input annotation reference language
+
+description: Annotation syntax reference for annotation in the context, inputs and outputs of a skillset in an AI enrichment pipeline in Azure Cognitive Search.
+Last updated : 01/27/2022
+# Skill context and input annotation language
+
+This article is the reference documentation for skill context and input syntax. It's a full description of the expression language used to construct paths to nodes in an enriched document.
+
+Azure Cognitive Search skills can use and [enrich the data coming from the data source and from the output of other skills](cognitive-search-defining-skillset.md).
+The working set of data that represents the current state of the indexer's work for the current document starts with the raw data coming from the data source and is
+progressively enriched with each skill iteration's output data.
+That data is internally organized in a tree-like structure that can be queried to be used as skill inputs or to be added to the index.
+The nodes in the tree can be simple values such as strings and numbers, arrays, or complex objects and even binary files.
+Even simple values can be enriched with additional structured information.
+For example, a string can be annotated with additional information that is stored beneath it in the enrichment tree.
+The expressions used to query that internal structure use a rich syntax that is detailed in this article.
+The enriched data structure can be [inspected from debug sessions](cognitive-search-debug-session.md#ai-enrichments-tab--enriched-data-structure).
+Expressions querying the structure can also be [tested from debug sessions](cognitive-search-debug-session.md#expression-evaluator).
+
+Throughout the article, we'll use the following enriched data as an example.
+This data is typical of the kind of structure you would get when enriching a document using a skillset with [OCR](cognitive-search-skill-ocr.md), [key phrase extraction](cognitive-search-skill-keyphrases.md), [text translation](cognitive-search-skill-text-translation.md), [language detection](cognitive-search-skill-language-detection.md), [entity recognition](cognitive-search-skill-entity-recognition-v3.md) skills and a custom tokenizer skill.
+
+|Path|Value|
+|||
+|`document`||
+|&emsp;`merged_content`|"Study of BMN 110 in Pediatric Patients"...|
+|&emsp;&emsp;`keyphrases`||
+|&emsp;&emsp;&emsp;`[0]`|"Study of BMN"|
+|&emsp;&emsp;&emsp;`[1]`|"Syndrome"|
+|&emsp;&emsp;&emsp;`[2]`|"Pediatric Patients"|
+|&emsp;&emsp;&emsp;...||
+|&emsp;&emsp;`locations`||
+|&emsp;&emsp;&emsp;`[0]`|"IVA"|
+|&emsp;&emsp;`translated_text`|"Étude de BMN 110 chez les patients pédiatriques"...|
+|&emsp;&emsp;`entities`||
+|&emsp;&emsp;&emsp;`[0]`||
+|&emsp;&emsp;&emsp;&emsp;`category`|"Organization"|
+|&emsp;&emsp;&emsp;&emsp;`subcategory`|`null`|
+|&emsp;&emsp;&emsp;&emsp;`confidenceScore`|0.72|
+|&emsp;&emsp;&emsp;&emsp;`length`|3|
+|&emsp;&emsp;&emsp;&emsp;`offset`|9|
+|&emsp;&emsp;&emsp;&emsp;`text`|"BMN"|
+|&emsp;&emsp;&emsp;...||
+|&emsp;&emsp;`organizations`||
+|&emsp;&emsp;&emsp;`[0]`|"BMN"|
+|&emsp;&emsp;`language`|"en"|
+|&emsp;`normalized_images`||
+|&emsp;&emsp;`[0]`||
+|&emsp;&emsp;&emsp;`layoutText`|...|
+|&emsp;&emsp;&emsp;`text`||
+|&emsp;&emsp;&emsp;&emsp;`words`||
+|&emsp;&emsp;&emsp;&emsp;&emsp;`[0]`|"Study"|
+|&emsp;&emsp;&emsp;&emsp;&emsp;`[1]`|"of"|
+|&emsp;&emsp;&emsp;&emsp;&emsp;`[2]`|"BMN"|
+|&emsp;&emsp;&emsp;&emsp;&emsp;`[3]`|"110"|
+|&emsp;&emsp;&emsp;&emsp;&emsp;...||
+|&emsp;&emsp;`[1]`||
+|&emsp;&emsp;&emsp;`layoutText`|...|
+|&emsp;&emsp;&emsp;`text`||
+|&emsp;&emsp;&emsp;&emsp;`words`||
+|&emsp;&emsp;&emsp;&emsp;&emsp;`[0]`|"it"|
+|&emsp;&emsp;&emsp;&emsp;&emsp;`[1]`|"is"|
+|&emsp;&emsp;&emsp;&emsp;&emsp;`[2]`|"certainly"|
+|&emsp;&emsp;&emsp;&emsp;&emsp;...||
+|&emsp;&emsp;&emsp;&emsp;...||
+|&emsp;&emsp;...||
+
+## Document root
+
+All the data is under one root element, for which the path is `"/document"`. The root element is the default context for skills.
+
+## Simple paths
+
+Simple paths through the internal enriched document can be expressed with simple tokens separated by slashes.
+This syntax is similar to [the JSON Pointer specification](https://datatracker.ietf.org/doc/html/rfc6901).
+
+### Object properties
+
+The properties of nodes that represent objects add their values to the tree under the property's name.
+Those values can be obtained by appending the property name as a token separated by a slash:
+
+|Expression|Value|
+|||
+|`/document/merged_content/language`|`"en"`|
+
+Property name tokens are case-sensitive.
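+
+As a minimal sketch of how such a path is consumed, a skillset's custom [WebApiSkill](cognitive-search-custom-skill-interface.md) could read this value as an input. The URI and the input and output names here are hypothetical placeholders:
+
+```json
+{
+  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
+  "uri": "https://contoso.example.com/process",
+  "context": "/document",
+  "inputs": [
+    { "name": "detectedLanguage", "source": "/document/merged_content/language" }
+  ],
+  "outputs": [
+    { "name": "result", "targetName": "customResult" }
+  ]
+}
+```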
+
+### Array item index
+
+Specific elements of an array can be referenced by using their numeric index like a property name:
+
+|Expression|Value|
+|||
+|`/document/merged_content/keyphrases/1`|`"Syndrome"`|
+|`/document/merged_content/entities/0/text`|`"BMN"`|
+
+### Escape sequences
+
+Two characters have special meaning in an expression and need to be escaped when they should be interpreted literally instead: `'/'` and `'~'`.
+Those characters must be escaped respectively as `'~1'` and `'~0'`, following [JSON Pointer](https://datatracker.ietf.org/doc/html/rfc6901) conventions. For example, a property named `a/b` is addressed as `/document/a~1b`.
+
+## Array enumeration
+
+An array of values can be obtained using the `'*'` token:
+
+|Expression|Value|
+|||
+|`/document/normalized_images/0/text/words/*`|`["Study", "of", "BMN", "110" ...]`|
+
+The `'*'` token doesn't have to be at the end of the path. It's possible to enumerate all nodes matching a path with a star in the middle or with multiple stars:
+
+|Expression|Value|
+|||
+|`/document/normalized_images/*/text/words/*`|`["Study", "of", "BMN", "110" ... "it", "is", "certainly" ...]`|
+
+This example returns a flat list of all matching nodes.
+
+It's possible to maintain more structure and get a separate array for the words of each page by using a `'#'` token instead of the second `'*'` token:
+
+|Expression|Value|
+|||
+|`/document/normalized_images/*/text/words/#`|`[["Study", "of", "BMN", "110" ...], ["it", "is", "certainly" ...] ...]`|
+
+The `'#'` token expresses that the array should be treated as a single value instead of being enumerated.
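+
+For instance, a skill that needs the per-page arrays of words as one input could use the following fragment (the input name is a hypothetical placeholder):
+
+```json
+"inputs": [
+  { "name": "wordsPerPage", "source": "/document/normalized_images/*/text/words/#" }
+]
+```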
+
+### Enumerating arrays in context
+
+It is often useful to process each element of an array in isolation and have a different set of skill inputs and outputs for each.
+This can be done by setting the context of the skill to an enumeration instead of the default `"/document"`.
+
+In the following example, we use one of the input expressions we used before, but with a different context that changes the resulting value.
+
+|Context|Expression|Values|
+||||
+|`/document/normalized_images/*`|`/document/normalized_images/*/text/words/*`|`["Study", "of", "BMN", "110" ...]`<br/>`["it", "is", "certainly" ...]`<br>...|
+
+For this combination of context and input, the skill will get executed once for each normalized image: once for `"/document/normalized_images/0"` and once for `"/document/normalized_images/1"`. The two input values corresponding to each skill execution are detailed in the values column.
+
+When enumerating an array in context, any outputs the skill produces will also be added to the document as enrichments of the context.
+In the above example, an output named `"out"` will have its values for each execution added to the document respectively under `"/document/normalized_images/0/out"` and `"/document/normalized_images/1/out"`.
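+
+As a sketch, here's how that combination of context and input could appear in a skillset, again using a hypothetical custom WebApiSkill (the URI and names are placeholders). Each execution's `"out"` output is added under the corresponding image node:
+
+```json
+{
+  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
+  "uri": "https://contoso.example.com/analyze-page",
+  "context": "/document/normalized_images/*",
+  "inputs": [
+    { "name": "pageWords", "source": "/document/normalized_images/*/text/words/*" }
+  ],
+  "outputs": [
+    { "name": "out", "targetName": "out" }
+  ]
+}
+```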
+
+## Literal values
+
+Skill inputs can take literal values instead of dynamic values queried from the existing document. This is achieved by prefixing the value with an equal sign. Values can be numbers, strings, or Booleans.
+String values can be enclosed in single `'` or double `"` quotes.
+
+|Expression|Value|
+|||
+|`=42`|`42`|
+|`=2.45E-4`|`0.000245`|
+|`="some string"`|`"some string"`|
+|`='some other string'`|`"some other string"`|
+|`="unicod\u0065"`|`"unicode"`|
+|`=false`|`false`|
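+
+For example, a skill input can be pinned to a constant instead of a path; this fragment assumes a custom skill with a hypothetical `maxResults` input:
+
+```json
+"inputs": [
+  { "name": "maxResults", "source": "=42" }
+]
+```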
+
+## Composite expressions
+
+It's possible to combine values together using unary, binary and ternary operators.
+Operators can combine literal values and values resulting from path evaluation.
+When used inside an expression, paths should be enclosed between `"$("` and `")"`.
+
+### Boolean not `'!'`
+
+|Expression|Value|
+|||
+|`=!false`|`true`|
+
+### Negative `'-'`
+
+|Expression|Value|
+|||
+|`=-42`|`-42`|
+|`=-$(/document/merged_content/entities/0/offset)`|`-9`|
+
+### Addition `'+'`
+
+|Expression|Value|
+|||
+|`=2+2`|`4`|
+|`=2+$(/document/merged_content/entities/0/offset)`|`11`|
+
+### Subtraction `'-'`
+
+|Expression|Value|
+|||
+|`=2-1`|`1`|
+|`=$(/document/merged_content/entities/0/offset)-2`|`7`|
+
+### Multiplication `'*'`
+
+|Expression|Value|
+|||
+|`=2*3`|`6`|
+|`=$(/document/merged_content/entities/0/offset)*2`|`18`|
+
+### Division `'/'`
+
+|Expression|Value|
+|||
+|`=3/2`|`1.5`|
+|`=$(/document/merged_content/entities/0/offset)/3`|`3`|
+
+### Modulo `'%'`
+
+|Expression|Value|
+|||
+|`=15%4`|`3`|
+|`=$(/document/merged_content/entities/0/offset)%2`|`1`|
+
+### Less than, less than or equal, greater than and greater than or equal `'<'` `'<='` `'>'` `'>='`
+
+|Expression|Value|
+|||
+|`=15<4`|`false`|
+|`=4<=4`|`true`|
+|`=15>4`|`true`|
+|`=1>=2`|`false`|
+
+### Equality and non-equality `'=='` `'!='`
+
+|Expression|Value|
+|||
+|`=15==4`|`false`|
+|`=4==4`|`true`|
+|`=15!=4`|`true`|
+|`=1!=1`|`false`|
+
+### Logical operations and, or and exclusive or `'&&'` `'||'` `'^'`
+
+|Expression|Value|
+|||
+|`=true&&true`|`true`|
+|`=true&&false`|`false`|
+|`=true||true`|`true`|
+|`=true||false`|`true`|
+|`=false||false`|`false`|
+|`=true^false`|`true`|
+|`=true^true`|`false`|
+
+### Ternary operator `'?:'`
+
+It is possible to give an input different values based on the evaluation of a Boolean expression using the ternary operator.
+
+|Expression|Value|
+|||
+|`=true?"true":"false"`|`"true"`|
+|`=$(/document/merged_content/entities/0/offset)==9?"nine":"not nine"`|`"nine"`|
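+
+The built-in [Conditional skill](cognitive-search-skill-conditional.md) is a natural consumer of such expressions. As a sketch (the output target name is a placeholder), this fragment selects the original or translated text based on the detected language:
+
+```json
+{
+  "@odata.type": "#Microsoft.Skills.Util.ConditionalSkill",
+  "context": "/document",
+  "inputs": [
+    { "name": "condition", "source": "= $(/document/merged_content/language) == 'en'" },
+    { "name": "whenTrue", "source": "/document/merged_content" },
+    { "name": "whenFalse", "source": "/document/merged_content/translated_text" }
+  ],
+  "outputs": [
+    { "name": "output", "targetName": "best_text" }
+  ]
+}
+```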
+
+### Parentheses and operator priority
+
+Operators are evaluated with priorities that match usual conventions: unary operators, then multiplication, division and modulo, then addition and subtraction, then comparison, then equality, and then logical operators.
+Usual associativity rules also apply.
+
+Parentheses can be used to change or disambiguate evaluation order.
+
+|Expression|Value|
+|||
+|`=3*2+5`|`11`|
+|`=3*(2+5)`|`21`|
+
+## See also
++ [Create a skillset in Azure Cognitive Search](cognitive-search-defining-skillset.md)
++ [Reference annotations in an Azure Cognitive Search skillset](cognitive-search-concept-annotations-syntax.md)
search Search Howto Managed Identities Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-managed-identities-data-sources.md
A debug session targets a container. Be sure to include the name of an existing
+ [Security overview](search-security-overview.md)
+ [AI enrichment overview](cognitive-search-concept-intro.md)
+ [Indexers overview](search-indexer-overview.md)
-+ [Authenticate with Azure Active Directory](/azure/architecture/framework/security/design-identity-authentication.md)
-+ [About managed identities (Azure Active Directory)](../active-directory/managed-identities-azure-resources/overview.md)
++ [Authenticate with Azure Active Directory](/azure/architecture/framework/security/design-identity-authentication)
++ [About managed identities (Azure Active Directory)](../active-directory/managed-identities-azure-resources/overview.md)
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
Previously updated : 01/24/2022 Last updated : 02/16/2022
This article describes the security features in Azure Cognitive Search that prot
## Data flow (network traffic patterns)
-A search service is hosted on Azure and is typically accessed by client applications using public network connections. Understanding the search service's points of entry and network traffic patterns is useful background for setting up development and production environments.
+A search service is hosted on Azure and is typically accessed by client applications using public network connections. While that pattern is predominant, it's not the only traffic pattern that you need to consider. Understanding all points of entry and outbound traffic is necessary background for protecting your development and production environments.
Cognitive Search has three basic network traffic patterns:
### Inbound traffic
-Inbound requests that target a search service endpoint consist of creating objects, processing data, and querying an index.
+Inbound requests that target a search service endpoint consist of:
-For inbound access to data and operations on your search service, you can implement a progression of security measures, starting with API keys on the request. You can also use Azure Active Directory and role-based access control for data plane operations (currently in preview). You can then supplement with [network security features](#service-access-and-authentication), either inbound rules in an IP firewall, or private endpoints that fully shield your service from the public internet.
++ Creating and managing objects
++ Sending requests for indexing, running indexer jobs, executing skills
++ Querying an index
+
+For inbound access to data and operations on your search service, you can implement a progression of security measures, starting with [network security features](#service-access-and-authentication). You can create either inbound rules in an IP firewall, or private endpoints that fully shield your search service from the public internet.
+
+Independent of network security, all inbound requests must be authenticated. Key-based authentication is the default. Alternatively, you can use Azure Active Directory and role-based access control for data plane operations (currently in preview).
### Outbound traffic
-Outbound requests from a search service to other applications are typically made by indexers. Outbound requests include both read and write operations:
+Outbound requests from a search service to other applications are typically made by indexers for text-based indexing and some aspects of AI enrichment. Outbound requests include both read and write operations:
-+ Indexers connect to external data sources to read data for indexing.
++ Indexers connect to external data sources to pull in data for indexing.
+ Indexers can also write to Azure Storage when creating knowledge stores, persisting cached enrichments, and persisting debug sessions.
+ A custom skill runs external code that's hosted off-service. An indexer sends the request for external processing during skillset execution.
++ Search connects to Azure Key Vault for a customer-managed key used to encrypt and decrypt sensitive data.
+
+Outbound connections can be made using a full access connection string that includes a shared access key or a database login, or a managed identity if you're using Azure Active Directory.
-Indexer connections can be made under a managed identity if you're using Azure Active Directory, or a connection string that includes shared access keys or a database login.
+If your Azure resources are behind a firewall, you'll need to create rules that admit indexer or service requests. For resources protected by Azure Private Link, you can create a shared private link that an indexer uses to make its connection.
### Internal traffic
-Internal requests are secured and managed by Microsoft. Internal traffic consists of service-to-service calls for tasks like authentication and authorization through Azure Active Directory, diagnostic logging in Azure Monitor, encryption, private endpoint connections, and requests made to Cognitive Services for built-in skills.
+Internal requests are secured and managed by Microsoft. Internal traffic consists of service-to-service calls for tasks like authentication and authorization through Azure Active Directory, diagnostic logging in Azure Monitor, private endpoint connections, and requests made to Cognitive Services for built-in skills.
<a name="service-access-and-authentication"></a> ## Network security
-Inbound security features protect the search service endpoint through increasing levels of security and complexity. Cognitive Search uses [key-based authentication](search-security-api-keys.md), where all requests require an API key for authenticated access.
-
-Optionally, you can implement additional layers of control by setting firewall rules that limit access to specific IP addresses. For advanced protection, you can enable Azure Private Link to shield your service endpoint from all internet traffic.
-
-### Inbound connection over the public internet
-
-By default, a search service endpoint is accessed through the public cloud, using key-based authentication for admin or query access to the search service endpoint. Keys are required. Submission of a valid key is considered proof the request originates from a trusted entity. Key-based authentication is covered in the next section. Without API keys, you'll get 401 and 404 responses on the request.
+[Network security](../security/fundamentals/network-overview.md) protects resources from unauthorized access or attack by applying controls to network traffic. Azure Cognitive Search supports networking features that can be your first line of defense against unauthorized access.
### Inbound connection through IP firewalls
-To further control access to your search service, you can create inbound firewall rules that allow access to specific IP address or a range of IP addresses. All client connections must be made through an allowed IP address, or the connection is denied.
+A search service is provisioned with a public endpoint that allows access using a public IP address. To restrict which traffic comes through the public endpoint, create an inbound firewall rule that admits requests from a specific IP address or a range of IP addresses. All client connections must be made through an allowed IP address, or the connection is denied.
:::image type="content" source="media/search-security-overview/inbound-firewall-ip-restrictions.png" alt-text="sample architecture diagram for ip restricted access":::
-You can use the portal to [configure inbound access](service-configure-firewall.md).
+You can use the portal to [configure firewall access](service-configure-firewall.md).
Alternatively, you can use the management REST APIs. Starting with API version 2020-03-13, with the [IpRule](/rest/api/searchmanagement/2020-08-01/services/create-or-update#iprule) parameter, you can restrict access to your service by identifying IP addresses, individually or in a range, that you want to grant access to your search service.

### Inbound connection to a private endpoint (network isolation, no Internet traffic)
-You can establish a [private endpoint](../private-link/private-endpoint-overview.md) for Azure Cognitive Search allows a client on a [virtual network](../virtual-network/virtual-networks-overview.md) to securely access data in a search index over a [Private Link](../private-link/private-link-overview.md).
+For more stringent security, you can establish a [private endpoint](../private-link/private-endpoint-overview.md) for Azure Cognitive Search that allows a client on a [virtual network](../virtual-network/virtual-networks-overview.md) to securely access data in a search index over a [Private Link](../private-link/private-link-overview.md).
The private endpoint uses an IP address from the virtual network address space for connections to your search service. Network traffic between the client and the search service traverses over the virtual network and a private link on the Microsoft backbone network, eliminating exposure from the public internet. A VNET allows for secure communication among resources, with your on-premises network as well as the Internet.
While this solution is the most secure, using additional services is an added cost, so be sure you have a clear understanding of the benefits before diving in. For more information about costs, see the [pricing page](https://azure.microsoft.com/pricing/details/private-link/). For more information about how these components work together, [watch this video](#watch-this-video). Coverage of the private endpoint option starts at 5:48 into the video. For instructions on how to set up the endpoint, see [Create a Private Endpoint for Azure Cognitive Search](service-create-private-endpoint.md).
-### Outbound connections to external services
-
-Indexers and skillsets are both objects that can make external connections. You'll provide connection information as part of the object definition, using one of these mechanisms.
-
-+ Credentials in the connection string
-
-+ Managed identity in the connection string
-
- You can [set up a managed identity](search-howto-managed-identities-data-sources.md) to make search a trusted service when accessing data from Azure Storage, Azure SQL, Cosmos DB, or other Azure data sources. A managed identity is a substitute for credentials or access keys on the connection.
- ## Authentication
-For inbound requests to the search service, authentication is on the request (not the calling app or user) through an [API key](search-security-api-keys.md), where the key is a string composed of randomly generated numbers and letters)that proves the request is from a trustworthy source.
+Once a request is admitted, it must still undergo authentication and authorization that determines whether the request is permitted.
+
+For inbound requests to the search service, authentication is on the request (not the calling app or user) through an [API key](search-security-api-keys.md), where the key is a string composed of randomly generated numbers and letters that proves the request is from a trustworthy source. Keys are required on every request. Submission of a valid key is considered proof the request originates from a trusted entity.
-Alternatively, there is new support for Azure Active Directory authentication and role-based authorization, [currently in preview](search-security-rbac.md), that establishes the caller (and not the request) as the authenticated identity.
+Alternatively, there's new support for Azure Active Directory authentication and role-based authorization, [currently in preview](search-security-rbac.md), that establishes the caller (and not the request) as the authenticated identity.
Outbound requests made by an indexer are subject to the authentication protocols supported by the external service. The indexer subservice in Cognitive Search can be made a trusted service on Azure, connecting to other services using a managed identity. For more information, see [Set up an indexer connection to a data source using a managed identity](search-howto-managed-identities-data-sources.md).
At the storage layer, data encryption is built in for all service-managed conten
### Data in transit
-In Azure Cognitive Search, encryption starts with connections and transmissions, and extends to content stored on disk. For search services on the public internet, Azure Cognitive Search listens on HTTPS port 443. All client-to-service connections use TLS 1.2 encryption. Earlier versions (1.0 or 1.1) are not supported.
+In Azure Cognitive Search, encryption starts with connections and transmissions, and extends to content stored on disk. For search services on the public internet, Azure Cognitive Search listens on HTTPS port 443. All client-to-service connections use TLS 1.2 encryption. Earlier versions (1.0 or 1.1) aren't supported.
### Data at rest
For data handled internally by the search service, the following table describes
#### Service-managed keys
-Service-managed encryption is a Microsoft-internal operation, based on [Azure Storage Service Encryption](../storage/common/storage-service-encryption.md), using 256-bit [AES encryption](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard). It occurs automatically on all indexing, including on incremental updates to indexes that are not fully encrypted (created before January 2018).
+Service-managed encryption is a Microsoft-internal operation, based on [Azure Storage Service Encryption](../storage/common/storage-service-encryption.md), using 256-bit [AES encryption](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard). It occurs automatically on all indexing, including on incremental updates to indexes that aren't fully encrypted (created before January 2018).
#### Customer-managed keys (CMK)
Customer-managed keys require an additional billable service, Azure Key Vault, w
#### Double encryption
-In Azure Cognitive Search, double encryption is an extension of CMK. It is understood to be two-fold encryption (once by CMK, and again by service-managed keys), and comprehensive in scope, encompassing long-term storage that is written to a data disk, and short-term storage written to temporary disks. Double encryption is implemented in services created after specific dates. For more information, see [Double encryption](search-security-manage-encryption-keys.md#double-encryption).
+In Azure Cognitive Search, double encryption is an extension of CMK. It's understood to be two-fold encryption (once by CMK, and again by service-managed keys), and comprehensive in scope, encompassing long-term storage that is written to a data disk, and short-term storage written to temporary disks. Double encryption is implemented in services created after specific dates. For more information, see [Double encryption](search-security-manage-encryption-keys.md#double-encryption).
-## Security management
+## Security administration
-### API keys
+### Manage API keys
Reliance on API key-based authentication means that you should have a plan for regenerating the admin key at regular intervals, per Azure security best practices. Each search service has a maximum of two admin keys. For more information about securing and managing API keys, see [Create and manage api-keys](search-security-api-keys.md).
-#### Activity and diagnostic logs
+### Activity and diagnostic logs
-Cognitive Search does not log user identities so you cannot refer to logs for information about a specific user. However, the service does log create-read-update-delete operations, which you might be able to correlate with other logs to understand the agency of specific actions.
+Cognitive Search does not log user identities so you can't refer to logs for information about a specific user. However, the service does log create-read-update-delete operations, which you might be able to correlate with other logs to understand the agency of specific actions.
Using alerts and the logging infrastructure in Azure, you can pick up on query volume spikes or other actions that deviate from expected workloads. For more information about setting up logs, see [Collect and analyze log data](monitor-azure-cognitive-search.md) and [Monitor query requests](search-monitor-queries.md).
For compliance, you can use [Azure Policy](../governance/policy/overview.md) to
Azure Policy is a capability built into Azure that helps you manage compliance for multiple standards, including those of Azure Security Benchmark. For well-known benchmarks, Azure Policy provides built-in definitions that supply both the criteria and an actionable response that addresses non-compliance.
-For Azure Cognitive Search, there is currently one built-in definition. It is for diagnostic logging. With this built-in, you can assign a policy that identifies any search service that is missing diagnostic logging, and then turns it on. For more information, see [Azure Policy Regulatory Compliance controls for Azure Cognitive Search](security-controls-policy.md).
+For Azure Cognitive Search, there's currently one built-in definition. It's for diagnostic logging. With this built-in, you can assign a policy that identifies any search service that is missing diagnostic logging, and then turns it on. For more information, see [Azure Policy Regulatory Compliance controls for Azure Cognitive Search](security-controls-policy.md).
## Watch this video
sentinel Network Normalization Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/network-normalization-schema.md
Microsoft Sentinel provides the following out-of-the-box, product-specific Netwo
| **Source** | **Built-in parsers** | **Workspace deployed parsers** |
| | | |
| **AWS VPC** collected using the AWS S3 connector |`_ASim_NetworkSession_AWSVPC` (regular)<br> `_Im_NetworkSession_AWSVPC` (filtering) | `ASimNetworkSessionAWSVPC` (regular)<br> `vimNetworkSessionAWSVPC` (filtering) |
-| **Azure Monitor VMConnection** collected as part of the Azure Monitor [VM Insights solution](/azure/azure-monitor/vm/vminsights-overview.md) |`_ASim_NetworkSession_VMConnection` (regular)<br> `_Im_NetworkSession_VMConnection` (filtering) | `ASimNetworkSessionVMConnection` (regular)<br> `vimNetworkSessionVMConnection` (filtering) |
+| **Azure Monitor VMConnection** collected as part of the Azure Monitor [VM Insights solution](/azure/azure-monitor/vm/vminsights-overview) |`_ASim_NetworkSession_VMConnection` (regular)<br> `_Im_NetworkSession_VMConnection` (filtering) | `ASimNetworkSessionVMConnection` (regular)<br> `vimNetworkSessionVMConnection` (filtering) |
| **Microsoft 365 Defender for Endpoint** | `_ASim_NetworkSession_Microsoft365Defender` (regular)<br><br>`_Im_NetworkSession_Microsoft365Defender` (filtering) | `ASimNetworkSessionMicrosoft365Defender` (regular)<br><br> `vimNetworkSessionMicrosoft365Defender` (filtering) |
| **Microsoft Defender for IoT - Endpoint** |`_ASim_NetworkSession_MD4IoT` (regular)<br><br>`_Im_NetworkSession_MD4IoT` (filtering) | `ASimNetworkSessionMD4IoT` (regular)<br><br> `vimNetworkSessionMD4IoT` (filtering) |
| **Palo Alto PanOS** collected using CEF |`_ASim_NetworkSession_PaloAltoCEF` (regular)<br> `_Im_NetworkSession_PaloAltoCEF` (filtering) | `ASimNetworkSessionPaloAltoCEF` (regular)<br> `vimNetworkSessionPaloAltoCEF` (filtering) |
For more information, see:
- [Advanced Security Information Model (ASIM) schemas](normalization-about-schemas.md)
- [Advanced Security Information Model (ASIM) parsers](normalization-parsers-overview.md)
- [Advanced Security Information Model (ASIM) content](normalization-content.md)
sentinel Normalization About Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization-about-schemas.md
The following fields are defined by ASIM for all schemas:
| <a name="eventoriginalresultdetails"></a>**EventOriginalResultDetails** | Optional | String | The original result details provided by the source. This value is used to derive [EventResultDetails](#eventresultdetails), which should have only one of the values documented for each schema. | | <a name="eventseverity"></a>**EventSeverity** | Enumerated | String | The severity of the event. Valid values are: `Informational`, `Low`, `Medium`, or `High`. | | <a name="eventoriginalseverity"></a>**EventOriginalSeverity** | Optional | String | The original severity as provided by the source. This value is used to derive [EventSeverity](#eventseverity). |
-| <a name="eventproduct"></a>**EventProduct** | Mandatory | String | The product generating the event. <br><br>Example: `Sysmon`<br><br>**Note**: This field might not be available in the source record. In such cases, this field must be set by the parser. |
+| <a name="eventproduct"></a>**EventProduct** | Mandatory | String | The product generating the event. The value should be one of the values listed in [Vendors and Products](#vendors-and-products).<br><br>Example: `Sysmon` |
| **EventProductVersion** | Optional | String | The version of the product generating the event. <br><br>Example: `12.1` |
-| <a name="eventvendor"></a>**EventVendor** | Mandatory | String | The vendor of the product generating the event. <br><br>Example: `Microsoft` <br><br>**Note**: This field might not be available in the source record. In such cases, this field must be set by the parser. |
+| <a name="eventvendor"></a>**EventVendor** | Mandatory | String | The vendor of the product generating the event. The value should be one of the values listed in [Vendors and Products](#vendors-and-products).<br><br>Example: `Microsoft` <br><br> |
| **EventSchema** | Mandatory | String | The schema the event is normalized to. Each schema documents its schema name. |
| **EventSchemaVersion** | Mandatory | String | The version of the schema. Each schema documents its current version. |
| **EventReportUrl** | Optional | String | A URL provided in the event for a resource that provides more information about the event.|
Based on these entities, [Windows event 4624](/windows/security/threat-protectio
|**Hostname** | Computer | Alias | | | | | | |
+## Vendors and products
+
+To maintain consistency, the list of allowed vendors and products is set as part of ASIM, and the designators may not directly correspond to the values sent by the source, when those are available.
+
+The currently supported list of vendors and products used in the [EventVendor](#eventvendor) and [EventProduct](#eventproduct) fields respectively is:
+
+| Vendor | Products |
+| | -- |
+| Apache | Squid Proxy |
+| AWS | - CloudTrail<br> - VPC |
+| Cisco | - ASA<br> - Umbrella |
+| Corelight | Zeek |
+| GCP | Cloud DNS |
+| Infoblox | NIOS |
+| Microsoft | - AAD<br> - Azure Defender for IoT<br> - Azure Firewall<br> - Azure File Storage<br> - DNS Server<br> - M365 Defender for Endpoint<br> - NSGFlow <br> - Security Events<br> - Sharepoint 365<br>- Sysmon<br> - Sysmon for Linux<br> - VMConnection<br> - Windows Firewall<br> - WireData <br>
+| Okta | Okta |
+| Palo Alto | - PanOS<br> - CDL<br> |
+| Zscaler | - ZIA DNS<br> - ZIA Firewall<br> - ZIA Proxy |
+|||
+
+If you're developing a parser for a vendor or a product that isn't listed here, contact the [Microsoft Sentinel](mailto:azuresentinel@microsoft.com) team to have new allowed vendor and product designators allocated.
+
## Next steps

This article provides an overview of normalization in Microsoft Sentinel and ASIM.
sentinel Normalization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/normalization.md
Microsoft Sentinel ingests data from many sources. Working with various data typ
Sometimes, you'll need separate rules, workbooks, and queries, even when data types share common elements, such as firewall devices. Correlating between different types of data during an investigation and hunting can also be challenging.
-This article provides an overview of the Advanced Security Information Model (ASIM), which provides a solution for the challenges of handling multiple types of data.
+The Advanced Security Information Model (ASIM) is a layer that sits between these diverse sources and the user. ASIM follows the [robustness principle](https://en.wikipedia.org/wiki/Robustness_principle): **"Be strict in what you send, be flexible in what you accept"**. Using the robustness principle as a design pattern, ASIM transforms Microsoft Sentinel's inconsistent and hard-to-use source telemetry into user-friendly data.
+
+This article provides an overview of the Advanced Security Information Model (ASIM), its use cases, and major components. Refer to the [next steps](#next-steps) section for more details.
> [!TIP]
-> Also watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [webinar slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG). For more information, see [Next steps](#next-steps).
+> Also watch the [ASIM Webinar](https://www.youtube.com/watch?v=WoGD-JeC7ng) or review the [webinar slides](https://1drv.ms/b/s!AnEPjr8tHcNmjDY1cro08Fk3KUj-?e=murYHG).
>

> [!IMPORTANT]
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-deploy-solution.md
This procedure describes how to ensure that your SAP system has the correct prer
1. Download and install one of the following SAP change requests from the [Microsoft Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR):
- - **SAP version 750 or later**: Install the SAP change request *NPLK900180*
- - **SAP version 740**: Install the SAP change request *NPLK900179*
+ - **SAP version 750 or later**: Install the SAP change request *NPLK900198*
+ - **SAP version 740**: Install the SAP change request *NPLK900200*
When you're performing this step, be sure to use binary mode to transfer the files to the SAP system, and use the **STMS_IMPORT** SAP transaction code.
sentinel Sap Solution Detailed Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap-solution-detailed-requirements.md
For example, in Ubuntu, you can mount a disk to the `/var/lib/docker` directory
The following SAP log change requests are required for the SAP solution, depending on your SAP Basis version:

-- **SAP Basis versions 7.50 and higher**, install NPLK900180
-- **For lower versions**, install NPLK900179
+- **SAP Basis versions 7.50 and higher**, install NPLK900198
+- **For lower versions**, install NPLK900200
- **To create an SAP role with the required authorizations**, for any supported SAP Basis version, install NPLK900163. For more information, see [Configure your SAP system](sap-deploy-solution.md#configure-your-sap-system) and [Required ABAP authorizations](#required-abap-authorizations).

> [!NOTE]
service-bus-messaging Service Bus Nodejs How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-nodejs-how-to-use-queues.md
Title: Get started with Azure Service Bus queues (JavaScript)
description: This tutorial shows you how to send messages to and receive messages from Azure Service Bus queues using the JavaScript programming language. Previously updated : 11/09/2020 Last updated : 02/16/2022 ms.devlang: javascript
service-bus-messaging Service Bus Nodejs How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-nodejs-how-to-use-topics-subscriptions.md
Title: Get started with Azure Service Bus topics (JavaScript)
description: This tutorial shows you how to send messages to Azure Service Bus topics and receive messages from topics' subscriptions using the JavaScript programming language. Previously updated : 11/09/2020 Last updated : 02/16/2022 ms.devlang: javascript
The following sample code shows you how to send a batch of messages to a Service
Received message: Nikolaus Kopernikus
```
-In the Azure portal, navigate to your Service Bus namespace, and select the topic in the bottom pane to see the **Service Bus Topic** page for your topic. On this page, you should see three incoming and three outgoing messages in the **Messages** chart.
+In the Azure portal, navigate to your Service Bus namespace, switch to **Topics** in the bottom pane, and select your topic to see the **Service Bus Topic** page for your topic. On this page, you should see 10 incoming and 10 outgoing messages in the **Messages** chart.
-If you run the only the send app next time, on the **Service Bus Topic** page, you see six incoming messages (3 new) but three outgoing messages.
+If you run only the send app next time, on the **Service Bus Topic** page, you see 20 incoming messages (10 new) but 10 outgoing messages.
-On this page, if you select a subscription, you get to the **Service Bus Subscription** page. You can see the active message count, dead-letter message count, and more on this page. In this example, there are three active messages that haven't been received by a receiver yet.
+On this page, if you select a subscription in the bottom pane, you get to the **Service Bus Subscription** page. You can see the active message count, dead-letter message count, and more on this page. In this example, there are 10 active messages that haven't been received by a receiver yet.
## Next steps

See the following documentation and samples:
service-bus-messaging Service Bus Python How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-python-how-to-use-queues.md
description: This tutorial shows you how to send messages to and receive message
documentationcenter: python Previously updated : 11/18/2020 Last updated : 02/16/2022 ms.devlang: python
service-bus-messaging Service Bus Python How To Use Topics Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-python-how-to-use-topics-subscriptions.md
description: This tutorial shows you how to send messages to Azure Service Bus t
documentationcenter: python Previously updated : 11/18/2020 Last updated : 02/16/2022 ms.devlang: python
service-connector Tutorial Django Webapp Postgres Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-django-webapp-postgres-cli.md
Title: 'Tutorial: Using Service Connector to build a Django app with Postgres on Azure App Service' description: Create a Python web app with a PostgreSQL database and deploy it to Azure. The tutorial uses the Django framework, the app is hosted on Azure App Service on Linux, and the App Service and database are connected with Service Connector.
+ms.devlang: python
service-connector Tutorial Java Spring Confluent Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-confluent-kafka.md
Title: 'Tutorial: Deploy a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Cloud' description: Create a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Cloud.
+ms.devlang: java
service-fabric How To Managed Cluster Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-autoscale.md
Title: Configure autoscaling for Service Fabric managed cluster nodes description: Learn how to configure autoscaling policies on Service Fabric managed cluster. Previously updated : 10/25/2021 Last updated : 2/14/2022
-# Introduction to Autoscaling on Service Fabric managed clusters (preview)
+# Introduction to Autoscaling on Service Fabric managed clusters
[Autoscaling](../azure-monitor/autoscale/autoscale-overview.md) gives great elasticity and enables addition or reduction of nodes on demand on a secondary node type. This automated and elastic behavior reduces the management overhead and potential business impact by monitoring and optimizing the number of nodes servicing your workload. You configure rules for your workload and let autoscaling handle the rest. When those defined thresholds are met, autoscale rules take action to adjust the capacity of your node type. Autoscaling can be enabled, disabled, or configured at any time. This article provides an example deployment, how to enable or disable autoscaling, and how to configure an example autoscale policy.

**Requirements and supported metrics:**
-* In order to use autoscaling on managed clusters, you need to be using API version `2021-07-01-preview` or later.
+* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later.
* The cluster SKU must be Standard.
* Autoscaling can only be configured on a secondary node type in your cluster.
* After enabling autoscale for a node type, set the `vmInstanceCount` property to `-1` when redeploying the resource.
The following example will set a policy for `nodeType2Name` to be at least 3 nod
"metricTrigger": { "metricName": "Percentage CPU", "metricNamespace": "",
- "metricResourceUri": "[concat('/subscriptions/',subscription().subscriptionId,'/resourceGroups/SFC_', reference(resourceId('Microsoft.ServiceFabric/managedClusters', parameters('clusterName')), '2021-07-01-preview').clusterId,'/providers/Microsoft.Compute/virtualMachineScaleSets/',parameters('nodeType2Name'))]",
+ "metricResourceUri": "[concat('/subscriptions/',subscription().subscriptionId,'/resourceGroups/SFC_', reference(resourceId('Microsoft.ServiceFabric/managedClusters', parameters('clusterName')), '2022-01-01').clusterId,'/providers/Microsoft.Compute/virtualMachineScaleSets/',parameters('nodeType2Name'))]",
"timeGrain": "PT1M", "statistic": "Average", "timeWindow": "PT30M",
"metricTrigger": { "metricName": "Percentage CPU", "metricNamespace": "",
- "metricResourceUri": "[concat('/subscriptions/',subscription().subscriptionId,'/resourceGroups/SFC_', reference(resourceId('Microsoft.ServiceFabric/managedClusters', parameters('clusterName')), '2021-07-01-preview').clusterId,'/providers/Microsoft.Compute/virtualMachineScaleSets/',parameters('nodeType2Name'))]",
+ "metricResourceUri": "[concat('/subscriptions/',subscription().subscriptionId,'/resourceGroups/SFC_', reference(resourceId('Microsoft.ServiceFabric/managedClusters', parameters('clusterName')), '2022-01-01').clusterId,'/providers/Microsoft.Compute/virtualMachineScaleSets/',parameters('nodeType2Name'))]",
"timeGrain": "PT1M", "statistic": "Average", "timeWindow": "PT30M",
service-fabric How To Managed Cluster Enable Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-enable-disk-encryption.md
Title: Enable Disk Encryption for Service Fabric managed cluster nodes description: Learn how to enable disk encryption for Azure Service Fabric managed cluster nodes in Windows using an ARM template. Previously updated : 11/8/2021 Last updated : 2/14/2022

# Enable disk encryption for Service Fabric managed cluster nodes
Service Fabric managed clusters support two disk encryption options to help safeguard your data and meet your organizational security and compliance commitments. The recommended option is encryption at host, but Azure Disk Encryption is also supported. Review the [disk encryption options](../virtual-machines/disk-encryption-overview.md) and make sure the selected option meets your needs.
-## Enable encryption at host (preview)
+## Enable encryption at host
This encryption method improves on [Azure Disk Encryption](how-to-managed-cluster-enable-disk-encryption.md) by supporting all OS types and images, including custom images, for your VMs by encrypting data in the Azure Storage service. This method doesn't use your VM's CPU and doesn't impact your VM's performance, enabling workloads to use all of the available VM SKU resources.
service-fabric How To Managed Cluster Managed Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-managed-disk.md
Title: Select managed disk types for Service Fabric managed cluster nodes description: Learn how to select managed disk types for Service Fabric managed cluster nodes and configure in an ARM template. Previously updated : 11/19/2021 Last updated : 2/14/2022

# Select managed disk types for Service Fabric managed cluster nodes
Azure Service Fabric managed clusters support the following managed disk types:
* Premium SSD locally redundant storage. Best for production and performance-sensitive workloads.

>[!NOTE]
-> Any temp disk associated with VM Size will *not* be used for storing any Service Fabric or application related data
+> Any temp disk associated with the VM size will *not* be used for storing any Service Fabric or application-related data by default. [Stateless node types](how-to-managed-cluster-stateless-node-type.md) do support temp disks if required.
## Specifying a Service Fabric managed cluster disk type
service-fabric How To Managed Cluster Modify Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-modify-node-type.md
Title: Configure or modify a Service Fabric managed cluster node type description: This article walks through how to modify a managed cluster node type Previously updated : 12/10/2021 Last updated : 2/14/2022

# Service Fabric managed cluster node types
You can choose to enable automatic OS image upgrades to the virtual machines run
To enable automatic OS upgrades:
-* Use the `2021-05-01` (or later) version of *Microsoft.ServiceFabric/managedclusters* and *Microsoft.ServiceFabric/managedclusters/nodetypes* resources
+* Use apiVersion `2021-05-01` or later for the *Microsoft.ServiceFabric/managedclusters* and *Microsoft.ServiceFabric/managedclusters/nodetypes* resources
* Set the cluster's property `enableAutoOSUpgrade` to *true*
* Set the cluster nodeTypes' resource property `vmImageVersion` to *latest*
For example:
```json
{
- "apiVersion": "2021-05-01",
+ "apiVersion": "[variables('sfApiVersion')]",
"type": "Microsoft.ServiceFabric/managedclusters", ... "properties": {
  },
},
{
- "apiVersion": "2021-05-01",
+ "apiVersion": "[variables('sfApiVersion')]",
"type": "Microsoft.ServiceFabric/managedclusters/nodetypes", ... "properties": {
Service Fabric managed cluster does not support in-place modification of the VM
* [Delete old node type via portal or PowerShell](how-to-managed-cluster-modify-node-type.md#remove-a-node-type). To remove a primary node type, you'll have to use PowerShell.
-## Configure multiple managed disks (preview)
+## Configure multiple managed disks
Service Fabric managed clusters by default configure one managed disk. By configuring the following optional property and values, you can add more managed disks to node types within a cluster. You can specify the drive letter, disk type, and size per disk. Configure more managed disks by declaring the `additionalDataDisks` property and required parameters in your Resource Manager template as follows:
Configure more managed disks by declaring `additionalDataDisks` property and req
* Lun must be unique per disk and cannot use reserved lun 0
* Disk letter cannot use reserved letters C or D and cannot be modified once created. S will be used as default if not specified.
* Must specify a [supported disk type](how-to-managed-cluster-managed-disk.md)
-* The Service Fabric managed cluster resource apiVersion must be **2021-11-01-preview** or later.
+* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later.
```json
{
See the [full list of available parameters](/azure/templates/microsoft.servicefabric/2021-11-01-preview/managedclusters)
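
As an illustrative sketch only (the parameter names and values below are assumptions to verify against the parameter list linked above), a node type with one additional managed disk might look like:

```json
{
  "apiVersion": "2022-01-01",
  "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
  "properties": {
    "additionalDataDisks": [
      {
        "lun": 1,
        "diskLetter": "F",
        "diskType": "StandardSSD_LRS",
        "diskSizeGB": 256
      }
    ]
  }
}
```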
-## Configure the Service Fabric data disk drive letter (preview)
+## Configure the Service Fabric data disk drive letter
Service Fabric managed clusters by default configure a Service Fabric data disk and automatically configure the drive letter on all nodes of a node type. By configuring this optional property and value, you can specify and retrieve the Service Fabric data disk letter if you have specific requirements for drive letter mapping.

**Feature Requirements**

* Disk letter cannot use reserved letters C or D and cannot be modified once created. S will be used as default if not specified.
-* The Service Fabric managed cluster resource apiVersion must be **2021-11-01-preview** or later.
+* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later.
```json
{
service-fabric How To Managed Cluster Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-networking.md
Title: Configure network settings for Service Fabric managed clusters description: Learn how to configure your Service Fabric managed cluster for NSG rules, RDP port access, load-balancing rules, and more. Previously updated : 11/10/2021 Last updated : 2/14/2022

# Configure network settings for Service Fabric managed clusters
Service Fabric managed clusters automatically creates load balancer probes for f
```

<a id="ipv6"></a>
-## Enable IPv6 (preview)
+## Enable IPv6
Managed clusters do not enable IPv6 by default. This feature will enable full dual stack IPv4/IPv6 capability from the Load Balancer frontend to the backend resources. Any changes you make to the managed cluster load balancer config or NSG rules will affect both the IPv4 and IPv6 routing.

> [!NOTE]
> This setting is not available in portal and cannot be changed once the cluster is created
+* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later.
+ 1. Set the following property on a Service Fabric managed cluster resource.
+ ```json
- "apiVersion": "2021-07-01-preview",
+ "resources": [
+ {
+ "apiVersion": "[variables('sfApiVersion')]",
"type": "Microsoft.ServiceFabric/managedclusters", ... "properties": { "enableIpv6": true }, }
+ ]
```

2. Deploy your IPv6 enabled managed cluster. Customize the [sample template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-IPv6) as needed or build your own.
Managed clusters do not enable IPv6 by default. This feature will enable full du
<a id="byovnet"></a>
-## Bring your own virtual network (preview)
+## Bring your own virtual network
This feature allows customers to use an existing virtual network by specifying a dedicated subnet that the managed cluster will deploy its resources into. This can be useful if you already have a configured VNet and subnet with related security policies and traffic routing that you want to use. After you deploy to an existing virtual network, it's easy to use or incorporate other networking features, like Azure ExpressRoute, Azure VPN Gateway, a network security group, and virtual network peering. Additionally, you can [bring your own Azure Load Balancer](#byolb) if needed.

> [!NOTE]
This feature allows customers to use an existing virtual network by specifying a
3. Configure the `subnetId` property for the cluster deployment after the role is set up as shown below:
+* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later.
+ ```JSON
+ "resources": [
+   {
- "apiVersion": "2021-07-01-preview",
+ "apiVersion": "[variables('sfApiVersion')]",
"type": "Microsoft.ServiceFabric/managedclusters", ... },
This feature allows customers to use an existing virtual network by specifying a
"subnetId": "subnetId", ... }
+ ]
```

See the [bring your own VNet cluster sample template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-BYOVNET) or customize your own.
This feature allows customers to use an existing virtual network by specifying a
When you bring your own VNet subnet, the public endpoint is still created and managed by the resource provider, but in the configured subnet. The feature does not allow you to specify the public IP or reuse a static IP on the Azure Load Balancer. You can [bring your own Azure Load Balancer](#byolb) in concert with this feature, or by itself, if you require those or other load balancer scenarios that aren't natively supported.

<a id="byolb"></a>
-## Bring your own Azure Load Balancer (preview)
+## Bring your own Azure Load Balancer
Managed clusters create an Azure public Standard Load Balancer and fully qualified domain name with a static public IP for both the primary and secondary node types. Bring your own load balancer allows you to use an existing Azure Load Balancer for secondary node types for both inbound and outbound traffic. When you bring your own Azure Load Balancer, you can:

* Use a pre-configured Load Balancer static IP address for either private or public traffic
To configure bring your own load balancer:
In the following steps, we start with an existing load balancer named Existing-LoadBalancer1, in the Existing-RG resource group.
- Obtain the required `Id` property info from the existing Azure Load Balancer. We'll
+ Obtain the required `Id` property info from the existing Azure Load Balancer.
```powershell
Login-AzAccount
...
```
To configure bring your own load balancer:
To configure the node type to use the default load balancer set the following in your template:
- * The Service Fabric managed cluster resource apiVersion should be **2021-11-01-preview** or later.
+ * The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later.
```json
+ "resources": [
{ "apiVersion": "[variables('sfApiVersion')]", "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
- ...
"properties": { "isPrimary": false, "useDefaultPublicLoadBalancer": true
- ...
+ }
}
+ ]
```

4. Optionally configure an inbound application port and related probe on your existing Azure Load Balancer.
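To point a secondary node type at your existing load balancer instead, reference its pools by resource ID in the node type's `frontendConfigurations` property. The following is a minimal sketch; the `loadBalancerBackendAddressPoolId` and `loadBalancerInboundNatPoolId` names follow the node type schema, and the placeholder segments must be replaced with the `Id` values obtained from Existing-LoadBalancer1 above:

```json
{
  "apiVersion": "[variables('sfApiVersion')]",
  "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
  "properties": {
    "isPrimary": false,
    "frontendConfigurations": [
      {
        "loadBalancerBackendAddressPoolId": "/subscriptions/<subscriptionId>/resourceGroups/Existing-RG/providers/Microsoft.Network/loadBalancers/Existing-LoadBalancer1/backendAddressPools/<poolName>",
        "loadBalancerInboundNatPoolId": "/subscriptions/<subscriptionId>/resourceGroups/Existing-RG/providers/Microsoft.Network/loadBalancers/Existing-LoadBalancer1/inboundNatPools/<natPoolName>"
      }
    ]
  }
}
```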
To configure bring your own load balancer:
<a id="accelnet"></a>
-## Enable Accelerated Networking (preview)
-Accelerated networking enables single root I/O virtualization (SR-IOV) to a virtual machine scale set VM that is the underlying resource for node types. This high-performance path bypasses the host from the data path, which reduces latency, jitter, and CPU utilization for the most demanding network workloads. Service Fabric managed cluster node types can be provisioned with Accelerated Networking on [supported VM SKUs](../virtual-machines/sizes.md). Reference this [limitations and constraints](../virtual-network/create-vm-accelerated-networking-powershell.md#limitations-and-constraints) for additional considerations.
+## Enable Accelerated Networking
+Accelerated networking enables single root I/O virtualization (SR-IOV) to a virtual machine scale set VM that is the underlying resource for node types. This high-performance path bypasses the host from the data path, which reduces latency, jitter, and CPU utilization for the most demanding network workloads. Service Fabric managed cluster node types can be provisioned with Accelerated Networking on [supported VM SKUs](../virtual-machines/sizes.md). Reference this [limitations and constraints](../virtual-network/accelerated-networking-overview.md#limitations-and-constraints) for additional considerations.
* Note that Accelerated Networking is supported on most general purpose and compute-optimized instance sizes with 2 or more vCPUs. On instances that support hyperthreading, Accelerated Networking is supported on VM instances with 4 or more vCPUs.

Enable accelerated networking by declaring the `enableAcceleratedNetworking` property in your Resource Manager template as follows:
-* The Service Fabric managed cluster resource apiVersion should be **2021-11-01-preview** or later.
+* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later.
```json
{
  ...
}
```
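For illustration, a minimal node type sketch with the property set (all other fields elided):

```json
{
  "apiVersion": "[variables('sfApiVersion')]",
  "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
  "properties": {
    "enableAcceleratedNetworking": true
  }
}
```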
Scaling out infrastructure is required to enable Accelerated Networking on an ex
<a id="auxsubnet"></a>
-## Configure Auxiliary Subnets (preview)
+## Configure Auxiliary Subnets
Auxiliary subnets provide the ability to create additional managed subnets without a node type, for supporting scenarios such as [Private Link Service](../private-link/private-link-service-overview.md) and [Bastion Hosts](../bastion/bastion-overview.md). Configure auxiliary subnets by declaring the `auxiliarySubnets` property and required parameters in your Resource Manager template as follows:
-* The Service Fabric managed cluster resource apiVersion should be **2021-11-01-preview** or later.
+* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later.
```JSON "resources": [ { "apiVersion": "[variables('sfApiVersion')]", "type": "Microsoft.ServiceFabric/managedclusters",
- ...
- "properties": {
+ "properties": {
"auxiliarySubnets": [ { "name" : "mysubnet", "enableIpv6" : "true" }
- ]
+ ]
+ }
+ }
+ ]
```
-See [full list of parameters available](/azure/templates/microsoft.servicefabric/2021-11-01-preview/managedclusters)
+See [full list of parameters available](/azure/templates/microsoft.servicefabric/2022-01-01/managedclusters)
## Next steps

[Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
service-fabric How To Managed Cluster Stateless Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-stateless-node-type.md
Title: Deploy a Service Fabric managed cluster with stateless node types description: Learn how to create and deploy stateless node types in Service Fabric managed clusters Previously updated : 8/23/2021 Last updated : 2/14/2022 # Deploy a Service Fabric managed cluster with stateless node types Service Fabric node types come with an inherent assumption that at some point in time, stateful services might be placed on the nodes. Stateless node types relax this assumption for a node type. Relaxing this assumption enables stateless node types to benefit from faster scale-out operations by removing some of the restrictions on repair and maintenance operations.
-* Primary node types cannot be configured to be stateless
-* Stateless node types require an API version of **2021-05-01** or later
-* This will automatically set the **multipleplacementgroup** property to **true** which you can [learn more here](how-to-managed-cluster-large-virtual-machine-scale-sets.md)
-* This enables support for up to 1000 nodes for the given node type
+* Primary node types cannot be configured to be stateless.
+* Stateless node types require an API version of **2021-05-01** or later.
+* This will automatically set the **multipleplacementgroup** property to **true**; you can [learn more here](how-to-managed-cluster-large-virtual-machine-scale-sets.md).
+* This enables support for up to 1000 nodes for the given node type.
+* Stateless node types can utilize a VM SKU temporary disk.
Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
To configure a Stateless node type spanning across multiple availability zones f
>[!NOTE]
> The zonal resiliency property must be set at the cluster level, and this property cannot be changed in place.
+## Temporary disk support
+Stateless node types can be configured to use a temporary disk as the data disk instead of a managed disk. Using a temporary disk can reduce costs for stateless workloads. To configure a stateless node type to use the temporary disk, set the **useTempDataDisk** property to **true**.
+
+* The temporary disk size must be 32 GB or more. The size of the temporary disk depends on the VM size.
+* The temporary disk is not encrypted by server-side encryption unless you enable encryption at host.
+* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later.
+
+```json
+{
+ "apiVersion": "[variables('sfApiVersion')]",
+ "type": "Microsoft.ServiceFabric/managedclusters/nodetypes",
+ "name": "[concat(parameters('clusterName'), '/', parameters('nodeTypeName'))]",
+ "location": "[resourcegroup().location]",
+ "dependsOn": [
+ "[concat('Microsoft.ServiceFabric/managedclusters/', parameters('clusterName'))]"
+ ],
+ "properties": {
+ "isStateless": true,
+ "isPrimary": false,
+ "vmImagePublisher": "[parameters('vmImagePublisher')]",
+ "vmImageOffer": "[parameters('vmImageOffer')]",
+ "vmImageSku": "[parameters('vmImageSku')]",
+ "vmImageVersion": "[parameters('vmImageVersion')]",
+ "vmSize": "[parameters('nodeTypeSize')]",
+ "vmInstanceCount": "[parameters('nodeTypeVmInstanceCount')]",
+ "useTempDataDisk": true
+ }
+}
+```
+
## Migrate to using stateless node types in a cluster

For all migration scenarios, a new stateless node type needs to be added. An existing node type cannot be migrated to be stateless. You can add a new stateless node type to an existing Service Fabric managed cluster, and remove any original node types from the cluster.
service-fabric How To Managed Cluster Vmss Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-vmss-extension.md
Title: Add a virtual machine scale set extension to a Service Fabric managed cluster node type description: Here's how to add a virtual machine scale set extension to a Service Fabric managed cluster node type Previously updated : 8/02/2021 Last updated : 2/14/2022 # Virtual machine scale set extension support on Service Fabric managed cluster node type(s)
Alternately, you can add a virtual machine scale set extension on a Service Fabr
}
```
-For more information on configuring Service Fabric managed cluster node types, see [managed cluster node type](/azure/templates/microsoft.servicefabric/2020-01-01-preview/managedclusters/nodetypes).
+For more information on configuring Service Fabric managed cluster node types, see [managed cluster node type](/azure/templates/microsoft.servicefabric/2022-01-01/managedclusters/nodetypes).
## Next steps
service-fabric Service Fabric Best Practices Replica Set Size Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-best-practices-replica-set-size-configuration.md
+
+ Title: Stateful service replica set size configuration
+description: Best practices for TargetReplicaSetSize and MinReplicaSetSize configuration
+
+ Last updated : 02/04/2022
+
+# Stateful service replica set size configuration
+
+Replica set size for stateful services is configured using two parameters.
+
+* TargetReplicaSetSize - number of replicas that the system creates and maintains for each replica set of a service
+* MinReplicaSetSize - minimum allowed number of replicas for each replica set of a service
+
+The basic idea behind these two parameters is to allow a configuration that tolerates at least two concurrent failures without the partition going into quorum loss. That situation can happen when there's one planned failover (an upgrade bringing a node/replica down) and one unplanned failover (a node crash).
+
+For example, if TargetReplicaSetSize = 5 and MinReplicaSetSize = 3, then normally (without failures) there will be five replicas in Service Fabric's view of the replica set. When failures happen, Service Fabric's view of the replica set will decrease until it reaches MinReplicaSetSize.
+
+Service Fabric uses the majority quorum of the number of replicas maintained in this view, so the majority quorum of MinReplicaSetSize is the minimum level of reliability of any operation. If the total number of replicas drops below the majority quorum of MinReplicaSetSize, further writes will be disallowed. It's important to note that when a service is in quorum loss, it can require replicas to come back in a specific order to get out of quorum loss.
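+As a worked example of the majority-quorum arithmetic (standard majority voting, stated here for reference):
+
+$$\text{majority}(n) = \left\lfloor \frac{n}{2} \right\rfloor + 1, \qquad \text{majority}(3) = 2, \qquad \text{majority}(5) = 3$$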
+
+>[!IMPORTANT]
+>In the example where TargetReplicaSetSize = 5 and MinReplicaSetSize = 3, the majority quorum of MinReplicaSetSize is 2. That means that even if three concurrent failures leave only two replicas running, Service Fabric will still have three replicas in its view of the replica set (two up and one down), and the two remaining running replicas will be enough to satisfy the majority quorum.
+
+## Examples of suboptimal configurations
+
+### TargetReplicaSetSize = 3; MinReplicaSetSize = 2
+This kind of configuration will often go into [quorum loss](service-fabric-disaster-recovery.md#stateful-services) (whenever a planned and an unplanned failover happen at the same time). To recover from quorum loss, it's not enough for just any replica to come back up - the exact replica that was part of the replica set must come back up.
+
![Image shows nodes in the cluster during each failover phase of the sequence below when TargetReplicaSetSize = 3 and MinReplicaSetSize = 2](media/service-fabric-best-practices/service-fabric-best-practices-target-3-minimum-2-replica-set-size.png)
+1. Partition has three replicas: A, B, C
+2. Replica A goes down, Service Fabric downshifts replica set to 2 (B, C)
+3. Unplanned failover happens, replica B also goes down - partition is now in quorum loss state
+4. If replica A comes back, partition will remain in quorum loss state as A isn't part of the current replica set (B, C). Quorum loss will be fixed only when replica B comes back.
+
+### TargetReplicaSetSize = 3, MinReplicaSetSize = 3
+This kind of configuration will also often go into [quorum loss](service-fabric-disaster-recovery.md#stateful-services) (whenever a planned and an unplanned failover happen at the same time). However, as soon as any of these replicas comes back up, the partition will recover from quorum loss.
+> [!WARNING]
+>This kind of configuration is still not optimal; it is only slightly better than TargetReplicaSetSize = 3, MinReplicaSetSize = 2.
+
![Image shows nodes in the cluster during each failover phase of the sequence below when TargetReplicaSetSize = 3 and MinReplicaSetSize = 3](media/service-fabric-best-practices/service-fabric-best-practices-target-3-minimum-3-replica-set-size.png)
+1. Partition has three replicas: A, B, C
+2. Replica A goes down, replica set remains the same (A, B, C)
+3. Unplanned failover happens, replica B also goes down - partition is now in quorum loss state
+4. As soon as either replica A or replica B comes back up, the partition will restore quorum, as both A and B are part of the current replica set.
+
+## Next steps
+
+* Learn about quorum loss and [Disaster recovery in Azure Service Fabric](service-fabric-disaster-recovery.md#stateful-services)
+* Learn about [Service Fabric support options](service-fabric-support.md).
site-recovery Azure Vm Disaster Recovery With Accelerated Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking.md
If you have enabled Accelerated Networking on the source virtual machine after e
The above process should also be followed for existing replicated virtual machines that did not previously have Accelerated Networking enabled automatically by Site Recovery.

## Next steps
-- Learn more about [benefits of Accelerated Networking](../virtual-network/create-vm-accelerated-networking-powershell.md#benefits).
-- Learn more about limitations and constraints of Accelerated Networking for [Windows virtual machines](../virtual-network/create-vm-accelerated-networking-powershell.md#limitations-and-constraints) and [Linux virtual machines](../virtual-network/create-vm-accelerated-networking-cli.md#limitations-and-constraints).
-- Learn more about [recovery plans](site-recovery-create-recovery-plans.md) to automate application failover.
+- Learn more about [benefits of Accelerated Networking](../virtual-network/accelerated-networking-overview.md#benefits).
+- Learn more about limitations and constraints of Accelerated Networking for [Windows virtual machines](../virtual-network/accelerated-networking-overview.md#limitations-and-constraints) and [Linux virtual machines](../virtual-network/accelerated-networking-overview.md#limitations-and-constraints).
+- Learn more about [recovery plans](site-recovery-create-recovery-plans.md) to automate application failover.
spring-cloud How To Enterprise Marketplace Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-cloud/how-to-enterprise-marketplace-offer.md
To purchase in the Azure Marketplace, you must meet the following prerequisites:
- Your organization allows [Azure Marketplace purchases](../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases).
- Your organization allows acquiring any Azure Marketplace software application listed in [Purchase policy management](/marketplace/azure-purchasing-invoicing.md#purchase-policy-management).
-## View Azure Spring Cloud Enterprise Tier with VMware Tanzu offering from Azure Marketplace
+## View Azure Spring Cloud Enterprise Tier offering from Azure Marketplace
To see the offering and read a detailed description, see [Azure Spring Cloud Enterprise Tier](https://aka.ms/ascmpoffer).
To see the supported plans in your market, select **Plans + Pricing**.
> [!NOTE]
> If you see "No plans are available for market '\<Location>'", that means none of your Azure subscriptions can purchase the SaaS offering. For more information, see [No plans are available for market '\<Location>'](./troubleshoot.md#no-plans-are-available-for-market-location) in [Troubleshooting](./troubleshoot.md).
-To see the Enterprise Tier creation page, select **Set up + subscribe**
+To see the Enterprise Tier creation page, select **Subscribe**.
## Next steps
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
The following example configuration blocks anonymous access and redirects all un
```

> [!NOTE]
-> By default, all managed identity providers are enabled. To block an authentication provider, see [Authentication and authorization](authentication-authorization.md#block-an-authentication-provider).
+> By default, all pre-configured identity providers are enabled. To block an authentication provider, see [Authentication and authorization](authentication-authorization.md#block-an-authentication-provider).
## Fallback routes
For details on how to restrict routes to authenticated users, see [Securing rout
### Disable cache for authenticated paths
-If you have enabled [enterprise-grade edge](enterprise-edge.md), or set up [manual integration with Azure Front Door](front-door-manual.md), you may want to disable caching for your secured routes.
+If you set up [manual integration with Azure Front Door](front-door-manual.md), you may want to disable caching for your secured routes. If you have enabled [enterprise-grade edge](enterprise-edge.md), this is already configured for you.
To disable Azure Front Door caching for secured routes, add `"Cache-Control": "no-store"` to the route header definition, as in the sketch below.
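For example, a minimal sketch of such a route entry in `staticwebapp.config.json` (the `/profile` route and the `authenticated` role are illustrative):

```json
{
  "routes": [
    {
      "route": "/profile",
      "allowedRoles": ["authenticated"],
      "headers": {
        "Cache-Control": "no-store"
      }
    }
  ]
}
```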
storage Anonymous Read Access Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-client.md
Previously updated : 08/02/2020 Last updated : 02/16/2022
This article shows how to access a public container or blob from .NET. For infor
A client that accesses containers and blobs anonymously can use constructors that do not require credentials. The following examples show a few different ways to reference containers and blobs anonymously.
+> [!IMPORTANT]
+> Any firewall rules that are in effect for the storage account apply even when public access is enabled for a container.
+
## Create an anonymous client object

You can create a new service client object for anonymous access by providing the Blob storage endpoint for the account. However, you must also know the name of a container in that account that's available for anonymous access.
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
Previously updated : 04/29/2021 Last updated : 02/16/2022
storage Monitor Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/monitor-blob-storage.md
Last updated 11/10/2021
+ms.devlang: csharp
storage Secure File Transfer Protocol Host Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-host-keys.md
When you connect to Blob Storage by using an SFTP client, you might be prompted
> [!div class="mx-tdBreakAll"]
> | Region | Host key type | SHA 256 fingerprint <sup>1</sup> | Public key |
> |-|-|-|-|
-> |eastus2euap | rsa-sha2-256| `dkP64W5LSbRoRlv2MV02TwH5wFPbV6D3R3nyTGivVfk`| `AAAAB3NzaC1yc2EAAAADAQABAAABAQC3PqLDKkkqUXrZSAbiEZsI6T1jYRh5cp+F5ktPCw7aXq6E9Vn2e6Ngu+vr+nNrwwtHqPzcZhxuu9ej2vAKTfp2FcExvy3fKKEhJKq0fJX8dc/aBNAGihKqxTKUI7AX5XsjhtIf0uuhig506g9ZssyaDWXuQ/3gvTDn923R9Hz5BdqQEH9RSHKW+intO8H4CgbhgwfuVZ0mD4ioJKCwfdhakJ2cKMDfgi/FS6QQqeh1wI+uPoS7DjW8Zurd7fhXEfJQFyuy5yZ7CZc7qV381kyo/hV1az6u3W4mrFlGPlNHhp9TmGFBij5QISC6yfmyFS4ZKMbt6n8xFZTJODiU2mT1` |
-> |eastus2euap | rsa-sha2-512 | `M39Ofv6366yGPdeFZ0/2B7Ui6JZeBUoTpxmFPkwIo4c` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+1NvYoRon15Tr2wwSNGmL+uRi7GoVKwBsKFVhbRHI/w8oa3kndnXWI4rRRyfOS7KVlwFgtb/WolWzBdKOGVe6IaUHBU8TjOx2nKUhUvL605O0aNuaGylACJpponYxy7Kazftm2rV/WfxCcV7TmOGV1159mbbILCXdEWbHXZkA3qWe4JPGCT+XoEzrsXdPUDsXuUkSGVp0wWFI2Sr13KvygrwFdv4jxH1IkzJ5uk6Sxn0iVE+efqUOmBftQdVetleVdgR9qszQxxye0P2/FuXr0S+LUrwX4+lsWo3TSxXAUHxDd8jZoyYZFjAsVYGdp0NDQ+Y6yOx5L9bR6whSvKE1` |
-> |eastus2euap | ecdsa-sha2-nistp256 | `X+c1NIpAJGvWU31UJ3Vd2Os4J7bCfgvyZGh35b2oSBQ` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+U6CE6con74cCntkFAm6gxbzGxm9RgjboKuLcwBiFanNs/uYywMCpj+1PMYXVx/nMM4vFbAjEOA20fJeoQtN8=` |
-> |eastus2euap | ecdsa-sha2-nistp384 | `Q3zIFfOI1UfCrMq6Eh7nP1/VIvgPn3QluTBkyZ2lfCw` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWRjO+e8kZpalcdg7HblZ4I3q9yzURY5VXGjvs5+XFuvxyq4CoAIPskCsgtDLjB5u6NqYeFMPzlvo406XeugO4qAui+zUMoQDY8prNjTGk5t7JVc4wYeAWbBJ2WUFyMrQ==` |
-> |francec | ecdsa-sha2-nistp256 | `N61PH8SVCAXOq7Z7eIV4mRnotafmNoPrpc+TaLxtPX4` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3UBFa/Ke9y3aLs1q1b8gh/tXiS7lpOTzUiDFpXdbq00/V9Ag+v2z5MIaicFdum9Ih4fls1Mg07Ert16bi5M8E=` |
-> |francec | ecdsa-sha2-nistp384 | `/CkQnHA57ehNeC9ZHkTyvVr8yVyl/P1dau2AwCg579k` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/x6qX+DRtmxOoMZwe7d7ZckHyeLkBWxB7SNH6Wnw2tXvtNekI9d9LGl1DaSmiZLJnawtX+MPj64S31v8AhZcVle9OPVIvH5im3IcoPSKQ6TIfZ26e2WegwJxuc1CjZZg==` |
-> |francec | rsa-sha2-256 | `zYLnY1rtM2sgP5vwYCtaU8v2isldoWWcR8eMmQSQ9KQ` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCmdsufvzqydsoecjXzxxL9AqnnRNCjlIRPRGohdspT9AfApKA9ZmoJUPY8461hD9qzsd7ps8RSIOkbGzgNfDUU9+ekEZLnhvrc7sSS9bikWyKmGtjDdr3PrPSZ/4zePAlYwDzRqtlWa/GKzXQrnP/h9SU4/3pj21gyUssOu2Mpr6zdPk59lO/n/w2JRTVVmkRghCmEVaWV25qmIEslWmbgI3WB5ysKfXZp79YRuByVZHZpuoQSBbU0s7Kjh3VRX8+ZoUnBuq7HKnIPwt+YzSxHx7ePHR+Ny4EEwU7NFzyfVYiUZflBK+Sf8e1cHnwADjv/qu/nhSinf3JcyQDG1lN` |
-> |francec | rsa-sha2-512 | `ixum/Dragma5DAMBzA/c5/MY02FjUBD/gI8+XQDzJvc` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDjTJ9EvMFWicBCcmYF0zO2GaWZJXLc7F5QrvFv6Nm/6pV72YrRmSdiY9znZowNK0NvwnucKjjQj0RkJOlwVEnsq7OVS+RqGA35vN6u6c0iGl4q2Jp+XLRm8nazC1B5uLVurVzYCH0SOl1vkkeXTqMOAZQlhj9e7RiFibDdv8toxU3Fl87KtexFYeSm3kHBVBJHoo5sD2CdeCv5/+nw9/vRQVhFKy2DyLaxtS+l2b0QXUqh6Op7KzjaMr3hd168yCaqRjtm8Jtth/Nzp+519H7tT0c0M+pdAeB7CQ9PAUqieXZJK+IvycM5gfi0TnmSoGRG8TPMGHMFQlcmr3K1eZ8h` |
-> |canadaeast | rsa-sha2-256 | `SRhd9gnvJS630A8VtCYMqc4djz5R8EiG7spwAUCYSJk` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD2nSByIh/NC3ZHsjK3zt7mspcUUXcq9Y/jc9QQsfHXQetOH/fBalf17d5odCwQSyNY5Mm+RWTt+Aae5t8kGm0f+sKVO/4HcBIihNlAnXkf1ah5NoeJ+R0eFxRs6Uz/cJILD4wuJnyDptRk1GFhpAphvBi0fLEnvn6lGJbrfOxuHJSXhjJcxDCbmcTlcWoU1l+1SaYfOzkVBcqelYIimspCmIznMdE2D9vNar77FVaNlx4J9Ew+HQRPSLG1zAh5ae1806B6CHG1+4puuTUFxJR1AO+BuT6fqy1p0V77CrhkBTHs8DNqw9ZYI27fjyTrSW4SixyfcH16DAegeHO+d2YZ` |
-> |canadaeast | rsa-sha2-512 | `60yzcSSOHlubdGkuNPWMXB9j21HqIkIzGdJUv0J57iY` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDmA4meGZwkdDzrgA9jAgcrlglZro0+IVzkLDCo791vsjJ29bTM6UbXVYFoKEkYliXSueL0q92W91IaFH/NhlOdW81Dbjs3jE+CuE4OX5pMisIMKx45QDcYCx3MJxqZrIOkDdS+m8JLs6XwM07LxiTX+6bH5vSwuGwvqg5gpnYfUpN0U5o7Wq7H7UplyUN8vsiDvTux3glXBLAI3ugjn6FC/YVPwMOq7Luwry3kxwEMx4Fnewe6hAlz47lbBHW6l/qmzzu4wfhJC20GqPzMJHD3kjHEGFBHpcmRbyijUUIyd7QBrnfS4J0xPVLftGJsrOOUP7Oq8AAru66/00We501` |
-> |canadaeast | ecdsa-sha2-nistp256 | `YPqDobCavdQ/zGV7FuR/gzYqgUIzWePgERDTQjYEE0M` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlfnJ9/rlO5/YGeGle1K6I6Ctan4Z3cKpGE3W9BPe1ZcSfkXq47u/f6F/nR7WgrC6+NwJHaMkhiBGadEWbuA3Q=` |
-> |canadaeast | ecdsa-sha2-nistp384 | `Y6FK9rWscBkyKN7mgPAEj0jKFXrv4mGNzoaZ9ttc4io` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDS8gYaqmJ8eEjmDF2ET7d2d6WAO7SgBQdTvqt6cUEjp7I11AYATKVN4Isz1hx8qBCWGIjA42X1/jNzk3YR7Bv/hgXO7PgAfDZ41AcT4+cJd0WrAWnxv0xgOvgLKL/8GYQ==` |
-> |usnorth | rsa-sha2-256 | `9AV5CnZNkf9nd6WO6WGNu7x6c4FdlxyC0k6w6wRO0cs` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJTv+aoDs1ngYi5OPrRl1R6hz+ko4D35hS0pgPTAjx/VbktVC9WGIlZMRjIyerfalN6niJkyUqYMzE4OoR9Z2NZCtHN+mJ7rc88WKg7RlXmQJUYtuAVV3BhNEFniufXC7rB/hPfAJSl+ogfZoPW4MeP/2V2g+jAKvGyjaixqMczjC2IVAA1WHB5zr/JqP2p2B6JiNNqNrsFWwrTScbQg0OzR4zcLcaICJWqLo3fWPo5ErNIPsWlLLY6peO0lgzOPrIZe4lRRdNc1D//63EajPgHzvWeT30fkl8fT/gd7WTyGjnDe4TK3MEEBl3CW8GB71I4NYlH4QBx13Ra20IxMlN` |
-> |usnorth | rsa-sha2-512 | `R3HlMn2cnNblX4qnHxdReba31GMPphUl9+BQYSeR6+E` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDeM6MOS9Av7a5PGhYLyLmT09xETbcvdt9jgNE1rFnZho5ikzjzRH4nz60cJsUbbOxZ38+DDyZdR84EfTOYR2Fyvv08mg98AYXdKVWMyFlx08w1xI4vghjN2QQWa8cfWI02RgkxBHMlxxvkBYEyfXcV1wrKHSggqBtzpxPO94mbrqqO+2nZrPrPFkBg4xbiN8J2j+8c7d6mXJjAbSddVfwEbRs4mH8GwK8yd/PXPd1U0+f62bJRIbheWbB+NTfOnjND5XFGL9vziCTXO8AbFEz0vEZ9NmxfFTuVVxGtJBePVdCAYbifQbxe/gRTEGiaJnwDRnQHn/zzK+RUNesJuuFJ` |
-> |usnorth | ecdsa-sha2-nistp256 | `6xMRs7dmIdi3vUOgNnOf6xOTbF9RlGk6Pj7lLk6z/bM` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJw1dXTy1YqYLJhAo1tB+F5NNaimQwDI+vfEDG4KXIFfS83mUFqr9VO9o+zgL3+0vTrlWQQTsP/hLHrjhHd9If8=` |
-> |usnorth | ecdsa-sha2-nistp384 | `0cJkHHeTNQpl7ewPTZwug5+/hfebiH6Yxl2rOTtYZQo` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8aqja46A9Q5PmhPzhxcklcJGp+CiC3MCjVR6Qdl9oQGMywOHfe+kCD72YBKnA6KNudZdx7pUUB/ZahvI5vwt4bi593adUMTY1/RlTRjplz6c2fSfwSO/0Ia4+0mxQyjw==` |
-> |canadacentral | ecdsa-sha2-nistp256 | `HhbpllbdxrinWvNsk/OvkowI9nWd9ZRVXXkQmwn2cq4` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuyYEUpBjzEnYljSwksmHMxl5uoErbC30R8wstMIDLexpjSpdUxty1u2nDE3WY7m4W/doyXVSBYiHUUYhdNFjg=` |
-> |canadacentral | ecdsa-sha2-nistp384 | `EjEadkKaEgaNfdwXtzlqanUbDigzsdzcZJeTzJfQXP0` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBmGFDJBLNDi3UWwk8IMuJQXK/927uHoYVK/wLH7zI7pvtmgb9/FdXa7rix8QVTsfk8uK8wxxqyIYYApUslOtUzkpkXwW9gx7d37wiZmTjEbsvVeHq+gD7PHmXTpLS8VPQ==` |
-> |canadacentral | rsa-sha2-256 | `KOYkeGvx4egH9DTGgxiONDMvSlkEkoU8cXWnynOEQRE` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7jhZvp5GMrYyA2gYjbQXTC/QoSeDeluBUpji6ndy52KuqBNXelmHIsaEBc69MbixqfoopaFyJshdu7X8maxcRSsdDcnhbCgUO/MnJ+am6yb33v/25qtLToqzJRXb5y86o9/WtyA9DXbJMwwzQFqxIsa1gB` |
-> |canadacentral | rsa-sha2-512 | `tdixmLr++BVpFMpiWyVkr5iAXM4TDmj3jp5EC0x8mrw` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNMZwL0AuF2Uyn4NIK+XdaHuon2jEBwUFSNAXo4JP7WDfmewISzMWqqi1btip/7VwZbxiz98C4NUEcsPNweaw3VdpYiXXXc7NN45cC32uM8yFeV6TqizoerHf+8Hm8avWQOfBv17kvGihob2vx8wZo4HkZg9KacQGvyuUyfUKa9LJI9BnpI2Wo3RPue4kbaV3JKmzxl8sF9i6OTT8Adj6+H7SkluITm105NX32uKBMjipEeMwDSQvkWGwlh2oZwJpL+Tvi2G0hQ/Q/FCQS5MAW9MCwnp0SSPWZaLiA9EDnzFrugFoundyBa0vRjNGZoj+X4+8MVG2fYgOzDED1JSPB` |
-> |europewest | ecdsa-sha2-nistp256 | `7Lrxb5z3CnAWI8pr2LK5eFHwDCl/Gtm/fhgGwB3zscw` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE/ewktdeHJc4bH41ytmxvMR3ch9IOR+CQ2i2Pejbavmgy6XmkOnhpIPKVNytXRCToDysIjWt7DLVsQ1EHv/xtg=` |
-> |europewest | ecdsa-sha2-nistp384 | `UpzudqPZw1MrBiBoK/HHtLLppAZF8bFD75dK7huZQnI` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEDYr3fSaCAcTygFUp7MKpND4RghNd6UBjnoMB6EveRWVAiBxLTsRHNHaZ+jk3Q8kCHSEJrWKAOY4aZl78WtWcrmlWLH8gfLtcfG/sXmXka8klstLhmkCvzUXzhBclBy7w==` |
-> |europewest | rsa-sha2-256 | `IeHrQ+N6WAdLMKSMsJiML4XqMrkF1kyOiTeTjh1PFyc` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDZL63ZKHrWlwN8gkPvq43uTh88n0V6GwlTH2/sEpIyPxN56/gpgWW6aDyzyv6PIRI/zlLjZNdOBhqmEO+MhnBPkAI8edlvFoVOA6c/ft5RljQOhv+nFzgELyP8qAlZOi1iQHx7UeB1NGkQ5AIwNIkRDImeft9Iga+bDF6yWu60gY43QdGQCTNhjglNuZ6lkGnrTxQtPSC01AyU51V1yXKHzgaTByrA4tK6cGtwjFjMBsnXtX2+yoyyuQz/xNnIN63awqpQxZameGOtjAYhLhtEgl39XEIgvpAs1hXDWcSEBSMWP4z04U/tw2R5mtorL3QU1CmokWmuAQZNQcLSLLlt` |
-> |europewest | rsa-sha2-512 | `7+VdJ21y+HcaNRZZeaaBtk1AjkCNK4weG5mkkoyabi0` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDYAmiv6Tk/o02McJi79dlIdPLu1I5HfhsdPlUycW+t1zQwZL+WaI182G6SY728hJOGzAz51XqD4e5yueAZYjOJwcGhHVq6MfabbhvT1sxWQplnk3QKrUMRXnyuuSua1j+AwXsm957RlbW9bi1aQKdJgKq3y2yz+hqBS76SX9d8BxOHWJl5KwCIFaaJWb0u32W2HGb9eLDMQNipzHyANEQXI9Uq2qRL7Z20GiRGyy7VPP6AbPYTprrivo3QpYXSXe9VUuuXA9g3Bz3itxmOw6RV9aGQhCSp22BdJKDl70FMxTm1d87LEwOQmAViqelEeY+DEowPHwVLQs3rIJrZHxYV` |
-> |switzerlandn | ecdsa-sha2-nistp256 | `DfyPsw04f2rU6PXeLx8iVRu+hrtSLushETT3zs5Dq7U` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJICveabT6GPfbyaCSeU7D553Q4Rr/IgGjTMC8vMCIUJKUzazeCeS3q46mXL2kwnBLIge9wTzzvP7JSWf+I2Fis=` |
-> |switzerlandn | ecdsa-sha2-nistp384 | `Rw0TLDVU4PqsXbOunR2BZcn2/wqFty6rCgWN4cCD/1Y` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLhGaEyHYvfVU05lmKV4Rnrl9YiuSSOCXjUaJjJJRhe5ZXbDMHeiC67CAWW3mm/+c5i1hoob/8pHg7vmeC+ve+Ztu/ww12JsC4qy/CG8qIIQvlnDDqnfmOgr0Svw3/Izw==` |
-> |switzerlandn | rsa-sha2-256 | `4cXg5pca9HCvAxDMrE7GdwvUZl5RlaivApaqz8gl7vs` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCqqSS6hVSmykLqNCqZntOao0QSS1xG89BiwNaR7uQvz7Y2H+gJiXhgot6wtc4/A5743t7svXZqsCBGPvkpK05JMNZDUy0UTwQ1eI9WAcgFAHqzmazKT1B5/aK0P5IMcK00dVap4jTwxaoQbtc973E5XAiUW1ZRt6YComeoZB6cFVX28MaE6auWOPdEaSg8SlcmWyw73Q9X5SsJkDTW5543tzjJI5hnH03LAvPIs8pIvqxntsKPEeWnyIMHWtc5Vpg8LB7CnAr4C86++hxt3mws7+AOtcjfUu2LmLzG1A34B1yEa/wLqJCz7jWV/Wm21KlTp1VdBk+4qFoVfy2IFeX9` |
-> |switzerlandn | rsa-sha2-512 | `E63lmwPWd5a6K3wJLj4ksx0wPab1lqle2a4kwjXuR4c` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCtSlbkDdzwqHy2C/pAteV2mrkZFpJHAlL05iOrJSFk0dhq8iwsmOmQiF9Xwth6T1n3NVVncAodIN2MyHR7pQTUJu1dmHcikG/JU6wGPVN8law0+3f9aClbqWRV5tdOx1vWQP3uPrppYlT90bWbD0IBmmHnxPJXsXm+7tI1n+P1/bKewG7FvU1yF+gqOXyTXrdb3sEZOD6IYW/PusR44mDl/rV5dFilBvmluHY5155hk1O2HBOWlCiDGBdEIOmB73waUQabqBCicAWfyloGZqB1n8Eay6FksLtRSAUcCSyBSnA81phYdLiLBd9UmiVKPC7gvdBWPztWB+2MeLsXtim9` |
-> |australiaeast | ecdsa-sha2-nistp256 | `s8NdoxI0mdWchKMMt/oYtnlFNAD8RUDa1a4lO8aPMpQ` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKG2nz5SnoR5KVYAnBMdt8be1HNIOkiZ5UrHxm4pZpLG3LCuzLEXyWlhTm8rynuM/8rATVB5FZqrDCIrnn8pkw=` |
-> |australiaeast | ecdsa-sha2-nistp384 | `YmeF1kX0R0W/ssqzKCkjoSLh3CciQvtV7iacYpRU2xc` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJi5nieNPCIxkYS7HKMH2fQgONSy2kGkViQucVhWrTJCEQMVz5peL2JZJFjf2a6zaB2olPaBNEkeuJRHxGyW0luTII9ZXXUoiGQH9l05B41mweVtG6pljHfuKQ4HzoUJA==` |
-> |australiaeast | rsa-sha2-256 | `MrPZLU8llsG+SzgBN8eH702H4zuynyYgqqQLQmWGDEs` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsRwHZ+DKINZZNP0GO6l7mFIIgTRnJy7ikg07h54iuk+KStoB2Cwppj+ulPs0NiR2RgAsP5nchWRZQysjsfYDui8wha6JEXKvWPlQ90rEYEs96gpUcbVQesgfH8ILXK06Kn1xY/4CWAHEc5U++66e+pHQulkkFyDXTsRYHsjTk574OiUI1` |
-> |australiaeast | rsa-sha2-512 | `jkDaVBMh+d9CUJq0QtH5LNwCIpc9DuWxetgJsE5vgNc` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFHirQKaYqkcecqdutyMQr1eFwIaIM/h302KjROiocgb4iywMAJkBLXmhJn+sSbagM5tzIk4K4k5LRjAizEIhC26sc2aa7spyvDu7+HMqDmNQ+nRgBgvO7kpxVRcK45ZjFsgZ6+mq9jK/eRnA8wgG3LnM+3zWaNLhWlrcCM0Pdy87Cswev/CEFZu6o6E6PgpBGw0MiPVY8CbdhFoTkT8Nt6tx9VhMTpcA2yzkd3LT7JGdC2I6MvRpuyZH1q+VhW9bC4eUVoVuIHJ81hH0vzzhIci2DKsikz2P4pJT0osg5YE/o9hVJs+4CG5n1MZN/l11K8lVb9Ns7oXYsvVdtR2Jp` |
-> |asiaeast | ecdsa-sha2-nistp256 | `/iq1i88fRFHFBw4DBtZUX7GRbT5dQq4g7KfUi5346co` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCvI7Dc7W3K919GK2VHZZkzJhTM+n2tX3mxq4EAI7l8p0HO0UHSmucHdQhpKApTIBR0j9O/idZ/Ew6Yr4nusBwE=` |
-> |asiaeast | ecdsa-sha2-nistp384 | `KssXSE1WC6Oca0dS2CNySgObkbVshqRGE2JcaNsUvpA` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEYGYGolx8LNs5TVJRF/yoxOCal3a4C0fw1Wlj1BxzUsDtxaQAxSfzQhZG+lFCF7RVQyiUwKjCxmWoZbSb19aE7AnRx9UOVmrbTt2PMD3dx8VmPj1K8rsPOSq+XX4KGdQ==` |
-> |asiaeast | rsa-sha2-256 | `XYuEB+zABdpXRklca8RCoWy4hZp9wAxjk50MD9w6UjE` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNKlaGhRiHdomU5uGvkcEjFQ0mhmNnWXsHAUNoGUhH6BU8LmsgWS61QOKHf1d3qQ0C9bPaRWMpemAa3DKGGqbgIdRrI2Yd9op+tqM+3hrZ8cBvBCgqKgaj4ZitoFnYm+iwwuReOz+x0I2/NmWUxuQlbiHTzcu8TVIln/5sj+n9PbwXC8Zk6vsCt6aon/P7hESHBJ4yf2E+Io30m+vaPNzxQGmwHjmBrZXzX8gAjGi6p823v5zdL4mq3tT5aPPsFQcfjkSMRDGq6yFSMMEA7i2dfczBQmTIJkYihdS8LBE0Ir5islJbaoPQxeXIrF+EgYgla505kJEogrLprcTGCY/t` |
-> |asiaeast | rsa-sha2-512 | `FUYhL0FaN8Zkj/M0/VJnm8jPL+2WxMsHrrc/G+xo5QM` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7x8s74EH+il+k99G2aLl1ip5cfGfO/WUd3foiwwq+qT/95xdtstPYmOP77VBQ4G6EnauP2dY6RHKzSM2qUdmBWiYaK0aaI/2lCAaPxco12Te5Htf7igWyAHYz7W99I2CfJCEUm1Soa0v/57gLtvUg/HOqTgFX44W+PEOstMhqGoU9bSpw2IKlos9ZP87C6IQB5xPQQ1HlnIQRIzluJoFCuT7YHXFWU+F4ZOwq5+uofNH3tLlCy7D4JlxLQ0hkqq3IhF4y5xXJyuWaBYF2H8OGjOL4QN+r9osrP7iithf1Q0EZwuPYqcT1QeIhgqI7OIYEKwqMfMIMNxZwnzKgnDC1` |
-> |germanywc | ecdsa-sha2-nistp256 | `Ce+h+7thT5tt75ypIkWZ6+JnmQMZEl1N7Tt3Ldalb64` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBmVDE0INhtrKI83oB4r8eU1tXq7bRbzrtIhZdkgiy3lrsvNTEzsEExtWae2uy8zFHdkpyTbBlcUYCZEtNr9w3U=` |
-> |germanywc | ecdsa-sha2-nistp384 | `hhQQi2iRjSX5d9c+4714hAFvTA3c63+TGknhuQi7Tss` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlFF3ceA17ZFERfvijHkPI2Na1wuti9/AOY5E/bDvZfP08kkmYTb9Ma6omhB0dHR6e1CmRJfKmFXfTd81iVWPa7yXCxbS8yG+uNKCuHxuNv8hFhNM84h2727BSBHBBHBA==` |
-> |germanywc | rsa-sha2-256 | `0SKtGye+E9pp4QLtWNLLiPSx+qKvDLNjrqHwYcDjyZ8` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsbkjxK9IJP8K98j+4/wGJQVdkO/x0Msf89wpjd/3O4VIbmZuQ/Ilfo6OClSMVxah6biDdt3ErqeyszSaDH9n3qnaLxSd5f+317oVpBlgr2FRoxBEgzLvR/a2ZracH14zWLiEmCePp/5dgseeN7TqPtFGalvGewHEol6y0C6rkiSBzuWwFK+FzXgjOFvme7M6RYbUS9/MF7cbQbq696jyetw2G5lzEdPpXuOxJdf0GqYWpgU7XNVm+XsMXn66lp87cijNBYkX7FnXyn4XhlG4Q6KlsJ/BcM3BMk+WxT+equ7R7sU/oMQ0ti/QNahd5E/5S/hDWxg6ZI1zN8WTzypid` |
-> |germanywc | rsa-sha2-512 | `9OYO7Hn5p+JJeGGVsTSanmHK3rm+iC6KKhLEWRPD9ro` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCwrSTqa0GD36iymT4ZxSMz3mf5iMIHk6APQ2snhR5FUvacnqTOHt3xhMF+UwYmGLbQtmr4HdXIKd7Dgn5EzHcfaYFbaLJs2aDngfv7Pd6TyLW3TtSgJ6K+mC1MDI/vHzGvRxizuxwdN0uMXv5kflQvnEtWlsKAHW/H7Ypk4R8s+Kl2AIVEKdy+PYwzRd2ojqqNs+4T2tPP5Y6pnJpzTlcHkIIIf7V0Bk/bFG2B7r73DG2cSUlnJz8QW9pLXIn7268YDOR/5nozSXj7DinVDBlE5oeZh4qkdMHO1FSWynj/isUCm5qBn76WNa6sAeMBS3dYiJHUgmKMc+ZHgpu6sqgd` |
-> |europenorth | ecdsa-sha2-nistp256 | `wUF5N8VjGTnA/PYBVzQrhcrMgHuCfAYL1tu+p6s28Ms` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCh4oFTmr3wzccXcayCwvcx+EyvZ7yANMYfc3epZqEzAcDeoPV+6v58gGhYLaEVDh69fGdhiwIvMcB7yWXtqHxE=` |
-> |europenorth | ecdsa-sha2-nistp384 | `w7dzF6HD42eE2dgf/G1O73dh+QaZ7OPPZqzeKIT1H68` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgyasQj6FYeRa1jiQE4TzOGY/BcQwrWFxXNEmbyoG89ruJcmXD01hS2RzsOPaVLHfr/l71fslVrB8MQzlj3MFwgfeJdiPn7k/4owFoQolaZO7mr/vY/bqOienHN4uxLEA==` |
-> |europenorth | rsa-sha2-256 | `vTEOsEjvg/jHYH1xIWf2rKrtENlIScpBx450ROw52UI` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQChnfrsd1M0nb7mOYhWqgjpA+ChNf7Ch6Eul6wnGbs7ZLxXtXIPyEkFKlEUw4bnozSRDCfrGFY78pjx4FXrPe5/m1sCCojZX8iaxCOyj00ETj+oIgw/87Mke1pQPjyPCL29TeId16e7Wmv5XlRhop8IN6Z9baeLYxg6phTH9ilA5xwc9a1AQVoQslG0k/eTyL4gVNVOgjhz94dlPYjwcsmMFif6nq2YgQgJlIjFJ+OwMqFIzCEZIIME1Mc04tRtPlClnZN/I+Hgnxl8ysroLBJrNXGYhuRMJjJm0J1AZyFIugp/z3X1SmBIjupu1RFn/M/iB6AxziebQcsaaFEkee0l` |
-> |europenorth | rsa-sha2-512 | `c4FqTQY/IjTcovY/g7RRxOVS5oObxpiu3B0ZFvC0y+A` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCanDNwi72RmI2j6ZEhRs4/tWoeDE4HTHgKs5DRgRfkH/opK6KHM64WnVADFxAvwNws1DYT1cln3eUs6VvxUDq5mVb6SGNSz4BWGuLQ4onRxOUS/L90qUgBp4JNgQvjxBI1LX2VNmFSed34jUkkdZnLfY+lCIA/svxwzMFDw5YTp+zR0pyPhTsdHB6dST7qou+gJvyRwbrcV4BxdBnZZ7gpJxnAPIYV0oLECb9GiNOlLiDZkdsG+SpL7TPduCsOrKb/J0gtjjWHrAejXoyfxP5R054nDk+NfhIeOVhervauxZPWeLPvqdskRNiEbFBhBzi9PZSTsV4Cvh5S5bkGCfV5` |
-> |uscentraleuap | ecdsa-sha2-nistp256 | `J9oxrXZ6jDR01CcDWu6xhoRAY60R1SpqbeKA4S9EjNc` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPNv9UEan8fLKmcI/RK53nX+TD9Pm/RfOKVa1b/leKSByIzMBWQFwa6wxwtr/shl6zvjwT4E9uRu6TsRTYnk+AI=` |
-> |uscentraleuap | ecdsa-sha2-nistp384 | `SeX6s483/LpSdx8kIy+KWm5Bb6zy6wr3icyhq1DQydU` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGcFnKk6dlO6bG0YSPZruEp6bXZP3h6QvCA+jmxScUz7MIgufRT3lrxkrZs0RM9vp44i2HXOSowsgvVPDQMBJRF5gXsEU1Z9SrpqOUrlcyhzfy0SkaewuNM6VoAYjUn44g==` |
-> |uscentraleuap | rsa-sha2-256 | `tYuwFgj2b/FNC4MQm949sCucp1Atfq0z7NsF7pQU25c` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCmeyxAgrNZ+q5e1RSPL2RBtjVLTKim3lviSwdIRhddDzSQTqbl2oRAxegP5shXuBF6A5zkByGMWWfSE7sSU/zyYHH3+J8lN5NFOflPgILOcPNvQOS88i3vHdF2yguSETkWSxyBcBC36Fv5YAyRfJqEq97He1nbvIS30/1XEuOZOgk9qzaq+f18PsJjs+m24y9oqr3WgiVT/3DnD/5XW7JjESZy0YGDWRcivYZDasTQzFJTOIeMRqTXsqhYkaPkigPC/rWjUzgp9fXlknQeFrSgT/f3NvMZ+bG2WMSn28bzyOs9DZAU1LmYNkAcjABQLniQUqjoM+RRt439et9ZEOEJ` |
-> |uscentraleuap | rsa-sha2-512 | `6gy1BGZMfD37oV7ApF+SvUMcfhZumyftkNYGs5PN34I` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCzdsoT/Ark4c4jqgx0s/7fcSNLCfj/tWvdNKVFkl3B87npb26g+bJkV35/iqSdsE5T82+OILDxXGBDcZbfbZyvfYx/EEWKId7r/WRvrQDYkAcS/z1MJbpUFwxmcuqaRMYjWmzwcc/nde6Awelte0Rc9wueTq58ZUdL7VUvtPCI88SdrB5Nn5x9DoPcuGAn+8AC1UsRT4VJB2DgMRmxqUe0fUq1bMSDanAmL7ICc2s6GFvWA4JJ2g5D74MKMfvw/mBy02FJvFyJivQ1NPnQ+6CJ6CmfE0mRVCrrBZC3qBXST5NEVf4sVvhAacoR7Qn2vfRaS2tJXrFbLC5/omYNUy1J` |
-> |useast2 | ecdsa-sha2-nistp256 | `bouiC5HdtURUU19RJbym8R94fbMOTw/bUxFUkoAByoI` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJshJI18IECu6neLrash/Q622MAXO07C+hbIOiVPC6M/ZIJM8HyYvQEh4DKI1CMEaeAIs/HA905QKeU/syvt7QI=` |
-> |useast2 | ecdsa-sha2-nistp384 | `vWnPlGaQOY4LFj9XSQ2qN/NMF92+UOfKPjGNSPA2bOg` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByJNAblwxCNVqedg5FcdbdwiuzTMVEWj/uF3uzI8wp890Xv2M4H+aMTpeItxgQsuiQCptgITsO+XCf2dBTHOGWpd90QtvcznzHyy/FEWVAKWs9brvyaNVe82c4TOFqYRg==` |
-> |useast2 | rsa-sha2-256 | `K+QQglmdpev3bvEKUgBTiOGMxwTlbD7gaYnLZhPfe1c` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOA2Aj1tIG/TUXoVFHOltyW8qnCOtm2gq2NFTdfyDFw3/C4jk2HHQf2smDX54g8ixcLuSX3bRDtKRJItUhrKFY6A0AfN1+r46kkJJdFjzdcgi7C3M0BehH7HlHZ7Fv2u01VdROiXocHpNOVeLFKyt516ooe6b6bxrdc480RrgopPYpf6etJUm8d4WrDtGXB3ctip8p6Z2Z/ORfK77jTeKO4uzaHLM0W7G5X+nZJWn3axaf4H092rDAIH1tjEuWIhEivhkG9stUSeI3h6zw7q9FsJTGo0mIMZ9BwgE+Q2WLZtE2uMpwQ0mOqEPDnm0uJ5GiSmQLVyaV6E5SqhTfvVZ1` |
-> |useast2 | rsa-sha2-512 | `UKT1qPRfpm+yzpRMukKpBCRFnOd257uSxGizI7fPLTw` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/HCjYc4tMVNKmbEDT0HXVhyYkyzufrad8pvGb3bW1qGnpM1ZT3qauJrKizJFIuT3cPu43slhwR/Ryy79x6fLTKXNNucHHEpwT/yzf5H6L14N+i0rB/KWvila2enB2lTDVkUW50Fo+k5U/JPTn8vdLPkYJbtx9s0s3RMwaRrRBkW6+36Xrh0h7rxV5LfY/EI1331f+1bgNM7xD59D3U76OafZMh5VfSbCisvDWyIPebXkOMF/eL8ATlaOfab0TAC8lriCkLQolR+El9ARZ69CJtKg4gBB3IY766Ag3+rry1/J97kr4X3aVrDxMps1Pq+Q8TCOf4zFDPf2JwZhUpDPp` |
+> | europewest | rsa-sha2-256 | `IeHrQ+N6WAdLMKSMsJiML4XqMrkF1kyOiTeTjh1PFyc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDZL63ZKHrWlwN8gkPvq43uTh88n0V6GwlTH2/sEpIyPxN56/gpgWW6aDyzyv6PIRI/zlLjZNdOBhqmEO+MhnBPkAI8edlvFoVOA6c/ft5RljQOhv+nFzgELyP8qAlZOi1iQHx7UeB1NGkQ5AIwNIkRDImeft9Iga+bDF6yWu60gY43QdGQCTNhjglNuZ6lkGnrTxQtPSC01AyU51V1yXKHzgaTByrA4tK6cGtwjFjMBsnXtX2+yoyyuQz/xNnIN63awqpQxZameGOtjAYhLhtEgl39XEIgvpAs1hXDWcSEBSMWP4z04U/tw2R5mtorL3QU1CmokWmuAQZNQcLSLLlt` |
+> | europewest | rsa-sha2-512 | `7+VdJ21y+HcaNRZZeaaBtk1AjkCNK4weG5mkkoyabi0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDYAmiv6Tk/o02McJi79dlIdPLu1I5HfhsdPlUycW+t1zQwZL+WaI182G6SY728hJOGzAz51XqD4e5yueAZYjOJwcGhHVq6MfabbhvT1sxWQplnk3QKrUMRXnyuuSua1j+AwXsm957RlbW9bi1aQKdJgKq3y2yz+hqBS76SX9d8BxOHWJl5KwCIFaaJWb0u32W2HGb9eLDMQNipzHyANEQXI9Uq2qRL7Z20GiRGyy7VPP6AbPYTprrivo3QpYXSXe9VUuuXA9g3Bz3itxmOw6RV9aGQhCSp22BdJKDl70FMxTm1d87LEwOQmAViqelEeY+DEowPHwVLQs3rIJrZHxYV` |
+> | europewest | ecdsa-sha2-nistp256 | `0WNMHmCNJE1YFBpHNeADuT5h+PfJ/jJPtUDHCxCSrO0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBANx85rJLXM8QZi33y8fzvUbH+O5Cujn0oJFDGQrwhGJQTHsjIhd5bhFFgDvJ64/4SGrtP1LHDKLwr9+ltzgxIE=` |
+> | europewest | ecdsa-sha2-nistp384 | `90g+JfQChjbb3OOV0YIGSVTkNotnefCV2NcSuMdPrzY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNJgtrLFy2zsyhNvXlwHUmDBw1De++05pr1ZTxOIVnB17XZix0Euwq/wZTs0cE01c5/kYdAp+gQHEz594e7AQXBTCTqUiIS1a4+IXzfiCcShVfMsLFBvzjm9Yn8qgW9Ofg==` |
+> | useast | rsa-sha2-256 | `F6pNN5Py68/1hVRGEoCwpY5H7vWhXZM/4L442lY4ydE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAiUB94zwLf0e/++OeiAjE0X7Od2nuqyLyAqpOb7nfQUAOWyqgRL04yaan6R2Ir2YtI0FRwA6yRETUBf2+NuVhIONgLNsgPw3RakL1BUqAEzZAyF4sOjWnYE5/s/1KmYOE052SefzMciqjgkBV2+YrPW1CLivNhL4d1vuQh05kADLgHJiAVD6BqSM7Z6VoLhW+hfP4JklyQAojCF6ejXW7ZGWdqQGKLCUhdaOPSRAxjOmr9gZxJ69OvdJT2Cy6KO1YQt2gY2GbPs+4uAeNrz40swffjut4zn1NILImpHi8PTM+wcGYzbW4Nn7t5lhvT9kmX9BkSYXLVTlI9p1neT9t` |
+> | useast | rsa-sha2-512 | `MIpoRIiCtEKI23MN+S2bLqm5GKClzgmRpMnh90DaHx8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC8Ut7Rq7Vak26F29czig5auq334N9mNaJmdtWoT32uDzWjZNw/N8uxXQS51oSeD7c0oXWIMBklH0AS8JR1xvMUGVnv5aRXwubicQ6z4poG5RSudYDA3BjMs61LZUKZH/DRj7qR/KUBMNieT1X+0DbopZkO9etxXdKx+VqJaK3fRC5Zflxj5Z9Stfx/XlaBXptDdqnInHZAUbZxnNziPYrBOuXYl5/Cd6W4lR7dBsMCbjINSIShvrhPpVfd3qOv/xPpU172nqkOx2VsV4mrfqqg62ZdcenLJDYsiXd/AVNUAL+dvzmj1/3/yVtFwadA2l83Em6CgGpqUmvK6brY3bPh` |
+> | useast | ecdsa-sha2-nistp256 | `ixDeCdmQOB9ROqdJiVdXyFVsRqLmJJUb2M4shrWj8gI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNrdcVT12fftnDcjYL8K3zLX3bdMwYLjNu2ZJ1kUZwpVHSjNc+1KWB2CGHca+cMLuPSao4TxjIX0drn9/o+GHMQ=` |
+> | useast | ecdsa-sha2-nistp384 | `DPTC6EIORrsxzpGt6IZzAN67nlZUXzg5ANQ3QGz987Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEP3CUvPVWNVnFuojR43KRxTQt1xiClbgDzqN/s9F5luivP+Gh0QrK5UHf6diEju4ZQ9k2O10MEDs6c46g4fT56rY8CQkeBsaaBq8WYLRhSQsFZ6SZuw14oFNodniAO33g==` |
+> | indiawest | rsa-sha2-256 | `Fkh7r/tOJy1cZC6nI75VsO1sS3ugMvJ56U02uGGJHFo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDHCzLI51bbBLWK7TcXvXvEHaLQMzuYKEwyoS1/oC5EN3NsLZl4BV5d2zbLETFDjsky/btWiAkCvHuzxealxGgzw69ll90aWSOEY/epaYJvueOTvGy4+rJY8Xyc64VdHml8n3EEZTQmBEi3Tn6bViLwvC0iT2/noLeYGXh0/NL0T3BeblwSm3cNXyemkBQO/zyYcchqRtKJu8w8brYVZYFINlTeBu4LyDP1k9DMtuewGoeH8SmvDxUmiIGh2VDlPmXe3IkMR0nSgz10jMl3F0fei7ZJ+8zdCVbBuIqsJf+koJa/q9npstWGMFddMX3nR0A3HnG4v5aCAGVmfl11iC0J` |
+> | indiawest | rsa-sha2-512 | `xDtcgfElRGUUgWlU9tRmSQ58WEOKoUSKrHFDruhgDIM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCXehufp18nKehU4/GOWMkXJ87t22TyG5bNdVCPO2AgLJ88FBwZJvDurLgdPRDRuJImysbD7ucwk2WoDNC39q0TWtCRyIKTXfwvPmyG+JZKkT+/QfslMqiAXAPIQtVr2iXTeuHmn3tk+PksGXnTwb3oFV4wv40Wi1CbwvtCkUsBSujq4AR7BqksPnAqPrAyw+fFR3w4iD3EdtHBdIVULez3lkpMH/d04rf2bjh6lpI9YUdcdAmTGYeMtsf/ef8z0G2xpN2aniLCoCPQP85cooKq7YEhBDR8Lzem3vWnqS3gPc4rUrCJoDkGm0iL/4GCWRyG+RPi70WSdVysJ+HIm0Ct` |
+> | indiawest | ecdsa-sha2-nistp256 | `t+PVPMSVEgQ3FPNploXz7mO25PFiEwzxutMjypoA2DM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCzR5dhW3wfN5bRqLfeZ2hlj7iRerE4lF5jk+iQl6HJHKXIsH6lQ63Wyg7wOzF65jNnvubAJoEmzyyYig+D3A+w=` |
+> | indiawest | ecdsa-sha2-nistp384 | `pLODd+3JNeLVcPYYnI0rSWoemhMWws0jLc3J8cV6+GU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL2PEknfZpPAT4ejqBJW8InHPELP1G7hGvroW5J3evJr8Qrr//voa6aH8ZF7Ak0HcVVOOCSzfjcEpZYjjrXrzuCOekU48DkSF8i1kKqV4iXejNNQ1ohDCbsiAyoxQMY9cA==` |
+> | useast2 | rsa-sha2-256 | `K+QQglmdpev3bvEKUgBTiOGMxwTlbD7gaYnLZhPfe1c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOA2Aj1tIG/TUXoVFHOltyW8qnCOtm2gq2NFTdfyDFw3/C4jk2HHQf2smDX54g8ixcLuSX3bRDtKRJItUhrKFY6A0AfN1+r46kkJJdFjzdcgi7C3M0BehH7HlHZ7Fv2u01VdROiXocHpNOVeLFKyt516ooe6b6bxrdc480RrgopPYpf6etJUm8d4WrDtGXB3ctip8p6Z2Z/ORfK77jTeKO4uzaHLM0W7G5X+nZJWn3axaf4H092rDAIH1tjEuWIhEivhkG9stUSeI3h6zw7q9FsJTGo0mIMZ9BwgE+Q2WLZtE2uMpwQ0mOqEPDnm0uJ5GiSmQLVyaV6E5SqhTfvVZ1` |
+> | useast2 | rsa-sha2-512 | `UKT1qPRfpm+yzpRMukKpBCRFnOd257uSxGizI7fPLTw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/HCjYc4tMVNKmbEDT0HXVhyYkyzufrad8pvGb3bW1qGnpM1ZT3qauJrKizJFIuT3cPu43slhwR/Ryy79x6fLTKXNNucHHEpwT/yzf5H6L14N+i0rB/KWvila2enB2lTDVkUW50Fo+k5U/JPTn8vdLPkYJbtx9s0s3RMwaRrRBkW6+36Xrh0h7rxV5LfY/EI1331f+1bgNM7xD59D3U76OafZMh5VfSbCisvDWyIPebXkOMF/eL8ATlaOfab0TAC8lriCkLQolR+El9ARZ69CJtKg4gBB3IY766Ag3+rry1/J97kr4X3aVrDxMps1Pq+Q8TCOf4zFDPf2JwZhUpDPp` |
+> | useast2 | ecdsa-sha2-nistp256 | `bouiC5HdtURUU19RJbym8R94fbMOTw/bUxFUkoAByoI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJshJI18IECu6neLrash/Q622MAXO07C+hbIOiVPC6M/ZIJM8HyYvQEh4DKI1CMEaeAIs/HA905QKeU/syvt7QI=` |
+> | useast2 | ecdsa-sha2-nistp384 | `vWnPlGaQOY4LFj9XSQ2qN/NMF92+UOfKPjGNSPA2bOg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByJNAblwxCNVqedg5FcdbdwiuzTMVEWj/uF3uzI8wp890Xv2M4H+aMTpeItxgQsuiQCptgITsO+XCf2dBTHOGWpd90QtvcznzHyy/FEWVAKWs9brvyaNVe82c4TOFqYRg==` |
+> | uswest | rsa-sha2-256 | `kqxoK1j6vHU8o8XyaiV87tZFEX9nE6o/yU0lOR5S6lE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAd7gh0mFw3iRAKusX3ai6OE0KO5O2CMlezvZOAJEH88fzWQ/zp0RZ1j7zJ8sbwslA6v3oRQ7Cx9ptAMTrL8SW4CZYcwETlfL3ZP39Llh+t7rZovIgvCDU0tijYvsa1W0T9XZgcwWEm6cWQzdm+i9U0KUdh7KgsubPAhGQ7xrOVEqgB9MYMofSSdIfKMt8K7xOSam6mhWiTSSIEGgeMTIZ9TgXkgAEJ8TNl3QHRoM8HxMnRFjtkXbT3EeSg6VOqi69Cei3hrmS64qvdzt2WwoTQwTFjxHocWGgA+Ow53wqWt8iYgOudpoB1neXiIcF4p0CN8zjvXNiRbZPg9lXFM9R` |
+> | uswest | rsa-sha2-512 | `/PP9B/9KEa+QUyump1Yt05Lfk0LY/eyQhHyojh5zMEg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC8R8bFe8QSTYKK+4evMpnlB8y0rQCqikTyviqD4rva7i4f1f/JxmptJQ/wkipHPXk6E7Du6oK/iJaZ+wjZ03tNIWwAGn0SdlTvWuwQwigK9k3JRlLYO+Uj/SSnBQWf8Dmp+cA6RDalteHpM2KwaUK65BHYC75bWKHaNntadTIU4kQ0BvFzmNRcJWL6otd5RkdYXjJWHu21zcv4EpRHGmVCD0na+UWce6UGDbLDtsZVJd2Q7IyeTrXpWxEO0fFN2Gu9gINfWC1FpuffGaqWSa4nK69n39lUKz4PUdu6Owmd9aNbLXknvtnW4+xGbX6oQa8wYulINHjdNz8Ez6nOiNZ9` |
+> | uswest | ecdsa-sha2-nistp256 | `peqBbfcWZRW4QzLi69HicUUTwdtfW7/E9WGkgRMheAo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBcTos/zmSn15kzn1Lk8N8QQh9hzOwqOSOf/bCpu6AQbWJtvjf1pHMuZlS2PpIV7G+/ImxXGpqpHqQlcD+Lg8Ro=` |
+> | uswest | ecdsa-sha2-nistp384 | `sg63Cc3Mvnn9hoapGaEuZByscUEMa+xgw/3ruz49szk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGzX2t9ALjFwpcBLm4B0+/D47PMrDya0KTva5w4E5GZNb5OwQujQvtUS2owd8BcKdMBeXx2S7qbcw6cQFffRxE+ZTr4J+3GoCmDM0PqraXxJHBRxyeK6vlrSR8ojRzIApA==` |
+> | useast2euap | rsa-sha2-256 | `dkP64W5LSbRoRlv2MV02TwH5wFPbV6D3R3nyTGivVfk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC3PqLDKkkqUXrZSAbiEZsI6T1jYRh5cp+F5ktPCw7aXq6E9Vn2e6Ngu+vr+nNrwwtHqPzcZhxuu9ej2vAKTfp2FcExvy3fKKEhJKq0fJX8dc/aBNAGihKqxTKUI7AX5XsjhtIf0uuhig506g9ZssyaDWXuQ/3gvTDn923R9Hz5BdqQEH9RSHKW+intO8H4CgbhgwfuVZ0mD4ioJKCwfdhakJ2cKMDfgi/FS6QQqeh1wI+uPoS7DjW8Zurd7fhXEfJQFyuy5yZ7CZc7qV381kyo/hV1az6u3W4mrFlGPlNHhp9TmGFBij5QISC6yfmyFS4ZKMbt6n8xFZTJODiU2mT1` |
+> | useast2euap | rsa-sha2-512 | `M39Ofv6366yGPdeFZ0/2B7Ui6JZeBUoTpxmFPkwIo4c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+1NvYoRon15Tr2wwSNGmL+uRi7GoVKwBsKFVhbRHI/w8oa3kndnXWI4rRRyfOS7KVlwFgtb/WolWzBdKOGVe6IaUHBU8TjOx2nKUhUvL605O0aNuaGylACJpponYxy7Kazftm2rV/WfxCcV7TmOGV1159mbbILCXdEWbHXZkA3qWe4JPGCT+XoEzrsXdPUDsXuUkSGVp0wWFI2Sr13KvygrwFdv4jxH1IkzJ5uk6Sxn0iVE+efqUOmBftQdVetleVdgR9qszQxxye0P2/FuXr0S+LUrwX4+lsWo3TSxXAUHxDd8jZoyYZFjAsVYGdp0NDQ+Y6yOx5L9bR6whSvKE1` |
+> | useast2euap | ecdsa-sha2-nistp256 | `X+c1NIpAJGvWU31UJ3Vd2Os4J7bCfgvyZGh35b2oSBQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK+U6CE6con74cCntkFAm6gxbzGxm9RgjboKuLcwBiFanNs/uYywMCpj+1PMYXVx/nMM4vFbAjEOA20fJeoQtN8=` |
+> | useast2euap | ecdsa-sha2-nistp384 | `Q3zIFfOI1UfCrMq6Eh7nP1/VIvgPn3QluTBkyZ2lfCw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDWRjO+e8kZpalcdg7HblZ4I3q9yzURY5VXGjvs5+XFuvxyq4CoAIPskCsgtDLjB5u6NqYeFMPzlvo406XeugO4qAui+zUMoQDY8prNjTGk5t7JVc4wYeAWbBJ2WUFyMrQ==` |
+> | australiac | rsa-sha2-256 | `q2pDjwwgUuAMU3irDl2D+sbH8wQpPB5LHBOFFzwM9Sk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDnqOrNklxmyreRYe7N72ylBCensxxPTBWX/CfbdbGfEbcGRtMGHReeojkvf4iJ7mDMZRzecgYxZ9o2bwTH9UImNkuZTsFNH6APuJ075WyxoDgdBX1UAQ3eE6BrCNI0BcwLakU9lq0rNhmxMpt/quBXxxWbRieKR9liTOg5CGSqoUPo7TpwaZQBltJCEf7rN5wGUlHV49iuiJIasSldYT6F1c3vS4bJb2sdIvVnKVLq+yTMzaPzWn34BD+KHx/pkB+s7/vQtdMfBBEdgEdPVvMPsyXtIKhx4Q79LnfZT19RDY8KW1mJrbPo67oEcjJYTXSZTKysjCUNmNNrnXvp6sHd` |
+> | australiac | rsa-sha2-512 | `+tdLVAC4I+7DhQn9JguFBPu0/Hdt5Ru2zjuOOat+Opw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCnd0ETMwpU8w7uwWu1AWDv6COIwLKMmxXu+/1rR429cNXuPrBkMldUfI7NKVaiwnu1kLPuJsgUDkvs/fc7lxx2l5i6mYBWJOXcWzAfXSBfB1a+1SK+2tDPYT3j4/W/KRW74DFPokWTINre22UVc+8sbzkmdtX/FzZdVcqI4+xJSjwdsp2hbzcsVWkxWhrFzKmBU40m5E/YwKQwAcbkzmX6AN5O8s66TQs2uPkRuTItDWI3ShW7QzW05jb6W8TeYdkouZ5PY0Yz/h3/oysFzo4VaUc0y3JP98KRWNXPiBrmvphpKnU1TQrjvVkYEsiCBHMOUnNVHdR1oIHd2zPRneK5` |
+> | australiac | ecdsa-sha2-nistp256 | `m2HCt3ESvMLlVBMwuo9jsQd9hJzPc/fe0WOJcoqO3RA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBElXRuNJbnDPWZF84vNtTjt4I/842dWBPvPi2fkgOV//2e/Y9gh0koVVAYp6MotNodg4L9MS7IfV9nnFSKaJW3o=` |
+> | australiac | ecdsa-sha2-nistp384 | `uoYLwsgkLp4D5diAulDKlLb7C5nT4gMCyf9MFvjr7qg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBARO/VI5TyirrsNZZkI2IBS0TelywsJKj71zjBGB8+mmki+mmdtooSTPgH0zmmyWb/z3iJG+BnEEv/58zIvJ+cXsVoRChzN+ewvsqdfzfCqVrzwyro52x5ymB08yBwDYig==` |
+> | usnorth | rsa-sha2-256 | `9AV5CnZNkf9nd6WO6WGNu7x6c4FdlxyC0k6w6wRO0cs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJTv+aoDs1ngYi5OPrRl1R6hz+ko4D35hS0pgPTAjx/VbktVC9WGIlZMRjIyerfalN6niJkyUqYMzE4OoR9Z2NZCtHN+mJ7rc88WKg7RlXmQJUYtuAVV3BhNEFniufXC7rB/hPfAJSl+ogfZoPW4MeP/2V2g+jAKvGyjaixqMczjC2IVAA1WHB5zr/JqP2p2B6JiNNqNrsFWwrTScbQg0OzR4zcLcaICJWqLo3fWPo5ErNIPsWlLLY6peO0lgzOPrIZe4lRRdNc1D//63EajPgHzvWeT30fkl8fT/gd7WTyGjnDe4TK3MEEBl3CW8GB71I4NYlH4QBx13Ra20IxMlN` |
+> | usnorth | rsa-sha2-512 | `R3HlMn2cnNblX4qnHxdReba31GMPphUl9+BQYSeR6+E=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDeM6MOS9Av7a5PGhYLyLmT09xETbcvdt9jgNE1rFnZho5ikzjzRH4nz60cJsUbbOxZ38+DDyZdR84EfTOYR2Fyvv08mg98AYXdKVWMyFlx08w1xI4vghjN2QQWa8cfWI02RgkxBHMlxxvkBYEyfXcV1wrKHSggqBtzpxPO94mbrqqO+2nZrPrPFkBg4xbiN8J2j+8c7d6mXJjAbSddVfwEbRs4mH8GwK8yd/PXPd1U0+f62bJRIbheWbB+NTfOnjND5XFGL9vziCTXO8AbFEz0vEZ9NmxfFTuVVxGtJBePVdCAYbifQbxe/gRTEGiaJnwDRnQHn/zzK+RUNesJuuFJ` |
+> | usnorth | ecdsa-sha2-nistp256 | `6xMRs7dmIdi3vUOgNnOf6xOTbF9RlGk6Pj7lLk6z/bM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJw1dXTy1YqYLJhAo1tB+F5NNaimQwDI+vfEDG4KXIFfS83mUFqr9VO9o+zgL3+0vTrlWQQTsP/hLHrjhHd9If8=` |
+> | usnorth | ecdsa-sha2-nistp384 | `0cJkHHeTNQpl7ewPTZwug5+/hfebiH6Yxl2rOTtYZQo=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG8aqja46A9Q5PmhPzhxcklcJGp+CiC3MCjVR6Qdl9oQGMywOHfe+kCD72YBKnA6KNudZdx7pUUB/ZahvI5vwt4bi593adUMTY1/RlTRjplz6c2fSfwSO/0Ia4+0mxQyjw==` |
+> | brazilsouth | rsa-sha2-256 | `qNzxx1kid41tZGcmbbyZrzlCIPJ9TFa20pUqvRbcjro=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC04g5K8emsS4NpL6jCT3wlpi6Msb5ax6QGlefO3IKp3wDKWAEqN+PvqBdrNp1PsitTKeyRSCLofq9k2wzeAMzV2n3UVqmUpNf9Q0Yd8SuXPhKG6VhqG2hL5+ztrlVTMI2Ak18SLaAEA1x7y9Z1lkEYGvCzJQaAw5EG8kd7XHGaI9nSCJ7RFOdJQF/40gq8z6E+bWW9Xs55JpWQ0i44i/ZvQUEiv5nyAa7D86y23wk1pTIFkRT99Kwdua0GtyUlcgCRDDTOzsCTn4qTo/MAF1Uq/ol4G0ZxwKnAEkazSZ1c+zEmh6GJNwT64nWBZ+pt5Rp3ugW+iDc/mIlXtxEV2k7V` |
+> | brazilsouth | rsa-sha2-512 | `KAmGT8A7nRdxxQD7gulgmGTJvRhRdWPVDdagGCDmJug=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC6W0FiaS21Dze6Sr6CTB8rLBu1T1Zej+11m7Kt283PSkQNYmjDDPUx0wSgylHoElTnFcXG+eFMznnjxDqkH+GnYBmlpW3nxxdTYD/MrdP4dX9ibPCFjDupIDJ4thv+9xWCw/0RpGc1NlUx2YmenDVMFJtYqjB1IDa2UUEeUHeQa1qmiBs1tbBQdlws1MCFnfldliB5H+cO4xnbAUjOlaa01k7GKqPf0H75+R83VcIcFw8hSuCvgMT+86H6jRRfqiIzE7WGbQBTPQs0rGcvxcGR3oGOmtB2UmOD232XTEk+sG3q2RxtPKWTz8wz1Tt2c1BOxmtuXTtzXnigZjB2t8y5` |
+> | brazilsouth | ecdsa-sha2-nistp256 | `rbOdmodk5Aq+nxHt04TN7g6WyuwbW5o+sDbj86l6jp8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNFqueXBigofrM5Kp2eA4wd4XxHcwcNgNFWGgEd0EoNdKWt9NroU47bN43f79Y5vPiSa4prKW1ccMBl40nNN4S4=` |
+> | brazilsouth | ecdsa-sha2-nistp384 | `cenQeg58JZ+Dvu3AC7P7lC/Jq7V3+YPaS37/BBn3OlQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHBhfnlfXV9/m6ZgOoqmSgX3VPnRdTOraBhMv8v7lEN1lWwyBpiWepu52KS0jR1RhttfXB+n+p6i2+9djJ1zT7fHq4sNn/d/3k2J6IjJlymZ32GwPvDk+fGefupUtabvRQ==` |
+> | ukwest | rsa-sha2-256 | `2NQ5z6fQjt4SZKdViPS+I2kX7GoXOx3fVE81t8/BCVE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNq0xtA0tdZmkSDTNgA05YLH5ZuLFKD7RbruzuL4KVU2In0DQUtJkVqRXIaB3f+cEBTs9QrMUqolOdCCunhzosr5FvCO3I6HZ8BLnVNshtUBf2C1aT9yonlkdiIyc2pCHonds8vHKC4SBNu3Jr584bhyan8NuzJqzPCnKTdHwyWjf8m5mB4liK/ka4QGiaLLYTAjCCXmaXXOVZI2u0yDcJQXAjAP5niCOQaPHgdGk6oSjs0YKB29V+lIdB8twUnBaJA9jgECM2brywksmXrAyUPnIFD6AVEiFZsUH3iwgFAH7O6PLZTOSgJuu994CNwigrOXTbABfpH2YMjvUF///5` |
+> | ukwest | rsa-sha2-512 | `MrfRlQmVjukl5Q5KvQ6YDYulC3EWnGH9StlLnR2JY7Q=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQClZODHJMnlU29q0Dk1iwWFO0Sa0whbWIvUlJJFLgOKF5hGBz9t9L6JhKFd1mKzDJYnP9FXaK1x9kk7l0Rl+u1A4BJMsIIhESuUBYq62atL5po18YOQX5zv8mt0ou2aFlUDJiZQ4yuWyKd44jJCD5xUaeG8QVV4A8IgxKIUu2erV5hvfVDCmSK07OCuDudZGlYcRDOFfhu8ewu/qNd7M0LCU5KvTwAvAq55HiymifqrMJdXDhnjzojNs4gfudiwjeTFTXCYg02uV/ubR1iaSAKeLV649qxJekwsCmusjsEGQF5qMUkezl2WbOQcRsAVrajjqMoW/w1GEFiN6c70kYil` |
+> | ukwest | ecdsa-sha2-nistp256 | `bNYdYVgicvl1yaOR/1xLqocxT8bamjezGFqFdO6Od0I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKWKoJuxB3EO5bKjxnviF+QTv3PBSViD1SNKbfj0qYfAjObQKZuiqcFYeDoPrkhk9jfan2jU6oCEN4+KDwivz3k=` |
+> | ukwest | ecdsa-sha2-nistp384 | `6V8vLtRf6I5BjuLgToJ1cROM72UqPD+SC0N9L9WG6PA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA+7R/5qSfsXACmseiErhfwhiE7Rref/TNHONqiFlAZq2KCW3w3u8+O4gpJEflibMFP/Mj5YeoygdPUwflFNcST9K+vnkEL3/lqzoGOarGBYIKtEZwixv3qlBR+KyoRUkw==` |
+> | uswestcentral | rsa-sha2-256 | `aSNxepEhr3CEijxbPB4D5I+vj8Um7OO6UtpzJ/iVeRg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDWmd8Zd7dCfamYd/c1i4wYhhRnaIgUmK7z/o8ehr4bzJgWRbjrxMtbkD2y7ImjE2NIBG5xglz6v9z4CFNjCKUmoUl7+Le3Rsc5sJ/JmHAmEXb0uiDMzhq9f6Qztp+Pb9uqLfsPmm6pt1WOcpu+KNpiGtPWTL21sJApv6JPKU+msUrrCIekutsHtW6044YPXNVOnvUXv08BaPFhbpeGZ4zkrji0mCdGfz2RNcgLw0y3ZzgUuv0Lw+xV0/xwanJu4IOFI1X9Ab7NnoGMkqN/upBLJ4lRhjYVTNEv01IX2/r5WZzTn4c38Nfw4Ma3hR0BiLMTFfklFVGg2R64Z7IILoB` |
+> | uswestcentral | rsa-sha2-512 | `vVHVYoH1kU1IZk+uZnStj3Qv2UCyOR9qVxJfmTc20jQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9Q8Tvvnea8hdaqt+SZr4XN1JIeR43nX6vrdhcS6yyfRgaTcEdKbAKQbwj9Fu3kq80c4F+SNzh1KQWlqLu3MJHSaSdQLN9RaHO1Dd+iVK1WgZtsPM9+6U7wupMZq8Hdmao5sqaMT5lj7g+win2J+Wibz7t8YwS7g2Xi+ode8tFPFKduZ5WvKLjI0EiAS4mvcyWEWca142E8fxV9TobUjAICfgtL4vCpmLYKnSL/kUgplD0ow86k/MHp9zghDLVSVDj8MGMra+IJEpgHOUrFNnuyua2WSJVuXR2ITfaecRKrGg7Z4IJzExPoQzDIWdCHptiGLAqvtKT0NE2rPj9U4Rp` |
+> | uswestcentral | ecdsa-sha2-nistp256 | `rkHjcTK2BvryQAFvjugTHdbpYBGfOdbBUNOuzctzJqM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKMjEAUTIttG+f5eocMzRIhRx5GjHH7yYPzh/h9rp9Yb3c9q2Yxw/j35JNWxpGwpkb9W1QG86Hjt4xbB+7q/D8c=` |
+> | uswestcentral | ecdsa-sha2-nistp384 | `gS9SYvaH6dCqyugArvFb13cwi8q90glNaK+fyfPie+Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD0HqM8ubcDBRMwuruX5zqCWSp1DaLcS9cA9ndXbQHzb2gJ5bJkjzxZEeIOM+AHPJB8UUZoD12It4tCRCCOkFnKgruT61hXbn0GSg4zjpTslLRYsbJzJ/q6F2DjlsOnvQQ==` |
+> | uscentral | rsa-sha2-256 | `GOPn34T1cCkLHO0xjLwmkEphxKKBQIjIf9QE1OAk3lU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9oA4N2MxbHjcdSrOlJOdIPjTB2LpQUMwJJj+aI2KEALYDGWWJnv0E14XjY1/M35jk8z0hX4MHGE/MEocSsTVdFRdWdW9CKTWT6eICpg9frTj6wfkB/Dxz/BAYb/YXq5OMrFlkFJUG8FMp9N80W6UWichcltmSrCpRi5N3ZGpVXEYhJF+I0mCH7Yheoq2KzIG2iWU/EJT5fui4t51wD8CQ1NWG8/THnNr0gjCr3AtB+ZPAl/6N7i2vO3FlZEHUj6BHoQ4dhIGjGCkgFDNU6RpdifqMJRihP9fSMOq4qksch1TE5sOnp0sOaP/RQvChb4oXB8Pru+j45RxPzIvzzOZZ` |
+> | uscentral | rsa-sha2-512 | `VLhZbZjHzoNRMyRSa3GYvk2rgacjmldxQ2YNzvsMpZA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPnuJixcZd6gAIifABQ377Mn0ZootRmdJs1J3R8/u7mbdfmpX2ItI0VfgMh4BzGEdgCqewx4BjADhfXRurfimuP8P9PLRq89AHX2V+IfeizLZkrnrxKiijjGEz640gORzzwIp2X+bmnBABWzEZjSNOeE3CKVr4ONvH80bYGFFqR4+arOelDqWEgxktM1QBlId7xR7efmtEGAuAhFbZVaqjBNsbqyiR/hlkMQfmWn1bjGSoenUoPojc7UAp9+Xf6ujkhCihRV/O4A69tVvp5E0Qv5MJ1Qj3kzAYbHQcIQ2l47MQq1wdZYxkYBHmH5leAjHgQbbccPalOLSbLRYjF169` |
+> | uscentral | ecdsa-sha2-nistp256 | `qN1Fm+zcCQ4xEkNTarKiQduCd9S+Aq3vH8BlfCaqL74=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN6KpNy9XBIlV6jsqyRDSxPO2niTAEesFjIScsq8q36bZpKTXOLV4MjML0rOTD4VLm0mPGCwhY5riLZ743fowWA=` |
+> | uscentral | ecdsa-sha2-nistp384 | `9no3/m09BEugyFxhaChilKiwyOaykwghTlV+dWfPT6c=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCiEYrlw9pskKzDp/6HsA2g0uMXNxJKrO5n16cHwXS1lVlgYMK3mmQdA+CjzMdJflvVw7sZO2caApr+sxI3rMmGWCt5gNvBaU6E9oUN8kdcNDvsfFavCb3vguOgtgbvHTg==` |
+> | europenorth | rsa-sha2-256 | `vTEOsEjvg/jHYH1xIWf2rKrtENlIScpBx450ROw52UI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQChnfrsd1M0nb7mOYhWqgjpA+ChNf7Ch6Eul6wnGbs7ZLxXtXIPyEkFKlEUw4bnozSRDCfrGFY78pjx4FXrPe5/m1sCCojZX8iaxCOyj00ETj+oIgw/87Mke1pQPjyPCL29TeId16e7Wmv5XlRhop8IN6Z9baeLYxg6phTH9ilA5xwc9a1AQVoQslG0k/eTyL4gVNVOgjhz94dlPYjwcsmMFif6nq2YgQgJlIjFJ+OwMqFIzCEZIIME1Mc04tRtPlClnZN/I+Hgnxl8ysroLBJrNXGYhuRMJjJm0J1AZyFIugp/z3X1SmBIjupu1RFn/M/iB6AxziebQcsaaFEkee0l` |
+> | europenorth | rsa-sha2-512 | `c4FqTQY/IjTcovY/g7RRxOVS5oObxpiu3B0ZFvC0y+A=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCanDNwi72RmI2j6ZEhRs4/tWoeDE4HTHgKs5DRgRfkH/opK6KHM64WnVADFxAvwNws1DYT1cln3eUs6VvxUDq5mVb6SGNSz4BWGuLQ4onRxOUS/L90qUgBp4JNgQvjxBI1LX2VNmFSed34jUkkdZnLfY+lCIA/svxwzMFDw5YTp+zR0pyPhTsdHB6dST7qou+gJvyRwbrcV4BxdBnZZ7gpJxnAPIYV0oLECb9GiNOlLiDZkdsG+SpL7TPduCsOrKb/J0gtjjWHrAejXoyfxP5R054nDk+NfhIeOVhervauxZPWeLPvqdskRNiEbFBhBzi9PZSTsV4Cvh5S5bkGCfV5` |
+> | europenorth | ecdsa-sha2-nistp256 | `wUF5N8VjGTnA/PYBVzQrhcrMgHuCfAYL1tu+p6s28Ms=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCh4oFTmr3wzccXcayCwvcx+EyvZ7yANMYfc3epZqEzAcDeoPV+6v58gGhYLaEVDh69fGdhiwIvMcB7yWXtqHxE=` |
+> | europenorth | ecdsa-sha2-nistp384 | `w7dzF6HD42eE2dgf/G1O73dh+QaZ7OPPZqzeKIT1H68=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLgyasQj6FYeRa1jiQE4TzOGY/BcQwrWFxXNEmbyoG89ruJcmXD01hS2RzsOPaVLHfr/l71fslVrB8MQzlj3MFwgfeJdiPn7k/4owFoQolaZO7mr/vY/bqOienHN4uxLEA==` |
+> | uaen | rsa-sha2-256 | `Vazz+KIADh85GQHAylrlI1tTY8/ckoRqTe/kbLXPmd0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDRGQHLLR9ruI0GcNF2u3EpS2CbHdZlqcgSR1bkaOXA9ZufHyxuhIpzG2IgYQ8wrjGzIilYds6UIH7CAw9FApKLNpLR6qdm8qjM0tJiyHLm3KloU27FfjCQjE9JhmsbTWCRH3N52A9HXIdiVCE3BBSoXhg/mF+3cvm1JvabKr1twoyfbUgDFuF7fDyhSxJ/MTig8SpgzWqcd5J+wbzjXG0ob2yWVhwtrcB6k97g25p77EKXo3VhSs0jN7VR+SAHupVwWsUgx4fZzi2I5xTUTBdOXW+e3EiXytPL2N5N/MtFKVY/JVhFkKkcTRgeuOds51tkByteSkc32kakcUxw6CjJ` |
+> | uaen | rsa-sha2-512 | `NDeTZPUor2OuTdgSjLLhSaqJiTJUdfwTAzpkjNbNQUY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAx9LfiyVmWwGD/rjQeHiHTMWYaE/mMP6rxmfs9/I4wEFkaTBbc4qewxUlrB1jd7Se2a0kljI3lqQJ9h+gjtH/IaVTZOKCOZD8yV9Dh4ZENRqH/TOVz6LCvZifVbjUtxRtbvOuh1lJIVBSBFciNr0HThFMnTEIwcs5V48EFIT6eS9Krggu+cWAX2RbjM0VQnIgkA5BeM33MjSjNz86zhO+e7e1lhflPKL5RTIswtWbwatgkyvfM33pJql/zJz+3/usSpIA/pgWw23c8WziYXiHPTShJXN+N+9iLKf9YUkpzQUZSaRw8XDPyjJNx327Lot0Bh4YLpe37R0SrOvitBsN` |
+> | uaen | ecdsa-sha2-nistp256 | `vAuGgsr0IQnOLUaWCCOBt+Jg0DV9C6rqHhnoJnwORM8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEYpnxgANJNJ4IIvSwvrRtjgbejCpTc3D+l5iob7dBK4KQ7MB40rq+CtdBDGZ1J7d6oCevW6gb1SIxU/PxCuvMI=` |
+> | uaen | ecdsa-sha2-nistp384 | `A5fa4Pzkdl0H2kVJxlNiEQkOhPzBYkrfQrcviQUUWUA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOz4ENDgFpo0547D5XCRCJLg8brp+iUyId2IdEhZAhuNX9spxlVe6uSkiQbd+8D5hHPVNuLFTFx7v2wXObycM8tr/WGejn/934BvSUhM6lDpU+d5n+ZcxEEhp4gDiy1l+Q==` |
+> | germanywc | rsa-sha2-256 | `0SKtGye+E9pp4QLtWNLLiPSx+qKvDLNjrqHwYcDjyZ8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsbkjxK9IJP8K98j+4/wGJQVdkO/x0Msf89wpjd/3O4VIbmZuQ/Ilfo6OClSMVxah6biDdt3ErqeyszSaDH9n3qnaLxSd5f+317oVpBlgr2FRoxBEgzLvR/a2ZracH14zWLiEmCePp/5dgseeN7TqPtFGalvGewHEol6y0C6rkiSBzuWwFK+FzXgjOFvme7M6RYbUS9/MF7cbQbq696jyetw2G5lzEdPpXuOxJdf0GqYWpgU7XNVm+XsMXn66lp87cijNBYkX7FnXyn4XhlG4Q6KlsJ/BcM3BMk+WxT+equ7R7sU/oMQ0ti/QNahd5E/5S/hDWxg6ZI1zN8WTzypid` |
+> | germanywc | rsa-sha2-512 | `9OYO7Hn5p+JJeGGVsTSanmHK3rm+iC6KKhLEWRPD9ro=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCwrSTqa0GD36iymT4ZxSMz3mf5iMIHk6APQ2snhR5FUvacnqTOHt3xhMF+UwYmGLbQtmr4HdXIKd7Dgn5EzHcfaYFbaLJs2aDngfv7Pd6TyLW3TtSgJ6K+mC1MDI/vHzGvRxizuxwdN0uMXv5kflQvnEtWlsKAHW/H7Ypk4R8s+Kl2AIVEKdy+PYwzRd2ojqqNs+4T2tPP5Y6pnJpzTlcHkIIIf7V0Bk/bFG2B7r73DG2cSUlnJz8QW9pLXIn7268YDOR/5nozSXj7DinVDBlE5oeZh4qkdMHO1FSWynj/isUCm5qBn76WNa6sAeMBS3dYiJHUgmKMc+ZHgpu6sqgd` |
+> | germanywc | ecdsa-sha2-nistp256 | `Ce+h+7thT5tt75ypIkWZ6+JnmQMZEl1N7Tt3Ldalb64=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBmVDE0INhtrKI83oB4r8eU1tXq7bRbzrtIhZdkgiy3lrsvNTEzsEExtWae2uy8zFHdkpyTbBlcUYCZEtNr9w3U=` |
+> | germanywc | ecdsa-sha2-nistp384 | `hhQQi2iRjSX5d9c+4714hAFvTA3c63+TGknhuQi7Tss=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDlFF3ceA17ZFERfvijHkPI2Na1wuti9/AOY5E/bDvZfP08kkmYTb9Ma6omhB0dHR6e1CmRJfKmFXfTd81iVWPa7yXCxbS8yG+uNKCuHxuNv8hFhNM84h2727BSBHBBHBA==` |
+> | switzerlandw | rsa-sha2-256 | `yoVjbjB+U4Cp/ZpMgKKuji9T2pIFtdeXnJudyeNvPs0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFl9NO3CJyKTdYxDIgCjygwIxlT1ppJQm/ykv2zDz6C7mjweiuuwhVM3LRua3WyP5mbgl3qYm+PHlA7UyIMY5jtsg7GaSfhiBSGZAdfgfDgOp3qRkgyep84P69SLb2b0hwgsPVkx8eWLDDVbOEdQLLx7TVndyxtdw+X4bZs6UdEcLMvLUWl7v3SoD5oiuJN6vOJPQl0VBeEaK/uhujjFgnlEu7/31rYEKQ8vQBbx22a4kIyBtUSAGo/VfKGRWF9oXL7Umh2xHAPwNbGwP+DdCKUY27wWG7Qe18O+QS9AOu0yL4+MRIHZg8ODLQsk0Hp3q8Iw2JjohSkk4lcjHYgb69` |
+> | switzerlandw | rsa-sha2-512 | `UgWxFaVY0YYMiNQ82Wt3D1LDg3xta1DfRUUKWjZYllk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC6svukqfg7147raZZrA1bZFOO/EDFgi+WRsxoOfH/EEWGmZ89QQ5m855TpsTPZ5ZARQD9kxrYEtqefcSPuWgth4Ze5PNVwRfAwedsSfnYwHZqHRlRM54bOQ6Img7T292ERl4KNJUI7SLyF+kKB7eXqp5nMBrTZ4rSHXoeibv2yZAph0cyf4V/NnfRj6KZSf6YDs0LW1VuovWAC6S7mpBjwtabBmd1gIiJleWhB7Jj48yiyh0m7L9oIoR4NRiuFC535JwqCYhrgFwujuk6iIR9ScRdayEr6gVcv6tBms3MyR16ytA/MHRxYHfPKb1kHUrpFjDQZZZswoDJDnhQGOm8Z` |
+> | switzerlandw | ecdsa-sha2-nistp256 | `5MyZiuHQIMDh/+QEnbr3Zm6/HnsLpYT2GXetsWD6M8Q=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEj5nXHEjkVlLcf9R9fPQw9k2QGyUUP6NrFRj1gbxKzwHsgG2YKWDdOJiyguiro0xV9+JRdW3VC49/psIYUFDPA=` |
+> | switzerlandw | ecdsa-sha2-nistp384 | `nS9RIUnm5ULmNIG+d7qSeIl/kNzuJxAX9/PcwfCxcB0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB/Ps4Wp15xhNenavSHZijwVXdZcvhzVq8IcfHR3+Gz3tKLed36OdHRTdWpvjrg0mENw4L1mEZnHnDx96WMtA+FfagGWXMVMMfcyM4riIedemHsz45KAR2suqcdkNHfdVA==` |
+> | swedenc | rsa-sha2-256 | `feu0rEf3KhvHGfhxEjcuFcPtIl+f0ZVzOMXyxy+f8f4=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOimUzZHr0DxrjdWEPQqkrBudLW2P2dvPE9DoaXSNbehU13bxzsF6lzO65JBPh9rlNwwyt2yWtrR4XI0Qh/QSXmBntefOeH6BZVrN06aHrsd1dQBr4UFT5chCwy6Keu0ARW3fY8kO9lycTmMIeoiaYahicxyRRC8WLs0cSCH8tO0dA2aoaMxafBWqR6D5dNzu00rIcsCxvyjtN3Y8C4fw3YnNvPB/qWHdZ4aNcu7sQMRhCYVNPqX9UNGeXkbw8gHf9uL9dFu1c+P+VFIEs5bIecgT5HiGvtuXsWRdtEcM1v3mrRnNdmeWWQIqXzLrs5svipMIbnYXekhhLYHIlVo4d` |
+> | swedenc | rsa-sha2-512 | `5fx+Ic5p/MMR6TZvjj2yrb4HMHwc1TgM4x1xQw4aD3Y=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC2nRaxWTg4KGLClTZLQ5QgPZPyQ/XYbH4prjhg1uK7m/JKlmJw5LjmIUVKnlXS38qTKpWpJZyGU/eBCa5FPQODvoAXfNncgtIQxd7j00P8aO2tho+uIxSgiTCte8sgrAyx22uIJlORJn2x1cBFBJrlgQDJOKEAs9IakMNdLvlfjJV405gk7pstF4eeIANRWC3eOTrMs0O1gCTt2rnWR5BNQJu8swj9FEWreNQ3PvUliM6Ig6u8b+4d8ryYGuzh5+E8wy/aNxlowkoCI4D/+dBnH43pSYyjhrVx966JMlrJZjDmbgtygkJI+FoEEfBoFlrpIGfisqIX41Np9ZRre4Ux` |
+> | swedenc | ecdsa-sha2-nistp256 | `6HikgYBMSL9VguDq9bmwRaVXdOIUKEQUf4hlOjfvv6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBErZhZNNmDhMKbSUXLB1VcTmR7pXcXWAqfFpdI81OP1FeCxBtpRNpIeWMyVoP3FeO3yWcODLm/ZkK7BICFpjleo=` |
+> | swedenc | ecdsa-sha2-nistp384 | `apRb96GLQ3LZ3E+rt2dyr9imMFDXYbaZERiireEO6ks=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKA5kwsqDKzZWmQCjIFBBjZun3rjg62pv8BOULwvwImaPvMFuR2OipExQZIyKSbR7wS9HA4/QKVA5rLRrSGpYvOBG438/7fwVZy5rOj3GXq6X7Havr1ExRXwsw5rJ56acA==` |
+> | asiaeast | rsa-sha2-256 | `XYuEB+zABdpXRklca8RCoWy4hZp9wAxjk50MD9w6UjE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNKlaGhRiHdomU5uGvkcEjFQ0mhmNnWXsHAUNoGUhH6BU8LmsgWS61QOKHf1d3qQ0C9bPaRWMpemAa3DKGGqbgIdRrI2Yd9op+tqM+3hrZ8cBvBCgqKgaj4ZitoFnYm+iwwuReOz+x0I2/NmWUxuQlbiHTzcu8TVIln/5sj+n9PbwXC8Zk6vsCt6aon/P7hESHBJ4yf2E+Io30m+vaPNzxQGmwHjmBrZXzX8gAjGi6p823v5zdL4mq3tT5aPPsFQcfjkSMRDGq6yFSMMEA7i2dfczBQmTIJkYihdS8LBE0Ir5islJbaoPQxeXIrF+EgYgla505kJEogrLprcTGCY/t` |
+> | asiaeast | rsa-sha2-512 | `FUYhL0FaN8Zkj/M0/VJnm8jPL+2WxMsHrrc/G+xo5QM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7x8s74EH+il+k99G2aLl1ip5cfGfO/WUd3foiwwq+qT/95xdtstPYmOP77VBQ4G6EnauP2dY6RHKzSM2qUdmBWiYaK0aaI/2lCAaPxco12Te5Htf7igWyAHYz7W99I2CfJCEUm1Soa0v/57gLtvUg/HOqTgFX44W+PEOstMhqGoU9bSpw2IKlos9ZP87C6IQB5xPQQ1HlnIQRIzluJoFCuT7YHXFWU+F4ZOwq5+uofNH3tLlCy7D4JlxLQ0hkqq3IhF4y5xXJyuWaBYF2H8OGjOL4QN+r9osrP7iithf1Q0EZwuPYqcT1QeIhgqI7OIYEKwqMfMIMNxZwnzKgnDC1` |
+> | asiaeast | ecdsa-sha2-nistp256 | `/iq1i88fRFHFBw4DBtZUX7GRbT5dQq4g7KfUi5346co=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCvI7Dc7W3K919GK2VHZZkzJhTM+n2tX3mxq4EAI7l8p0HO0UHSmucHdQhpKApTIBR0j9O/idZ/Ew6Yr4nusBwE=` |
+> | asiaeast | ecdsa-sha2-nistp384 | `KssXSE1WC6Oca0dS2CNySgObkbVshqRGE2JcaNsUvpA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNEYGYGolx8LNs5TVJRF/yoxOCal3a4C0fw1Wlj1BxzUsDtxaQAxSfzQhZG+lFCF7RVQyiUwKjCxmWoZbSb19aE7AnRx9UOVmrbTt2PMD3dx8VmPj1K8rsPOSq+XX4KGdQ==` |
+> | southafrican | rsa-sha2-256 | `qU1qry+E/fBbRtDoO+CdKiLxxKNfGaI9gAplekDpYvk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC2UBC1KeTx8/tQIxVEBUypcu/5n3B/g0zqE7tFmPYMFYngrXqEysIzgAdpiu2+ZX/vY8AF/0UkhYec/X/rwKQL8CCVwYqa2hufbSrX/qSuUHZd/95LFB2Nh+hJ23fn3EK8Gpgo/Xkmx9YVZoaQPGPsWVWVKjU6aVpM54cd6iuDT3y9SAnqbUMqgwwz3mK7bQGFPrbUVOUwVIcYKZD9HMNZhpo8HpjllKYIt1AFy4db8lSrLyuX8Nn/U7XAlPUndUCpKsAfWw8SemyuxSHziFDHF5xo8eLU+QYxdtzirgDAgEYWv9aa0TSx5Q2Mq8XJ7POffQxKj44ocHzmMGq/wPS1` |
+> | southafrican | rsa-sha2-512 | `1/ogzd+xjh3itFg3IpAYA2pwj1o3DprEabjObSpY/DY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDLAkEygbVyp189UwvaslGRgaqcGWXaYJVq+gUB0906xkkjGoJeqSgTW5C/77vOk0zBCZM3yBgtDFZL1d6lze1QJZ6kGGPynJa5SeyydAds9G745yaFFuE53zJUyMy+y5I1ytfx003PKvk8+fHZK3rPYYr+LKm2u+9BmnuDB/0t561oFg1ZiMCPgNnDdUwkya2EtsJAifkUaBlYmzBZAFbIYyGfb898utZHyI+ix2TrMS/RHEDIchG8qSBMpOPmcpa29ADVsmAQDd5ds5D7WjirfMXwBxgJTMyuy+N9rJRgHoqDnt/GsgI2GtoPM7YSET8uYug941hAvFm5TI/dW3YR` |
+> | southafrican | ecdsa-sha2-nistp256 | `e6v7pRdZE0i1U2/VePcQLguy7d+bHXdQf3RZ4jhae+g=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIEQemJxERZKre+A+MAs0T0R7++E6XanZ7uiKXZEFCyDgqjVcjk8Xvtrpk5pqo4+tMWM7DbtE0sgm1XmKhDSWFs=` |
+> | southafrican | ecdsa-sha2-nistp384 | `NmxPlXzK2GpozWY374nvAFnYUBwJ2cCs9v/VEnk0N6Q=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKgEuS9xgExVxicW0HMK4RLO5ZC6S0ZyENe5XVVJY0WKZ5IfIXEhVTkYXMnbtrYIdfrTdDuHstoWY9uu4bS8PtFDheNn3MyNfObqpoBPAh1qJdwfJgzo5e7pEoxVORUMnw==` |
+> | uksouth | rsa-sha2-256 | `3nrDdWUOwG0XgfrFgW27xhWSizttjabHXTRX8AOHmGw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCdLm+9OROp5zrc6nLKBJWNrTnUeCeo8n1v9Y3qWicwYMqmRs/sS9t5V3ABWnus4TxH3bqgnQW3OqWLgOHse/3S+K1wGERmBbEdKOl7A7kQ9QgDkWEZoftwJ9hp+AMVTfCYhcOOsG+gW021difNx+WW2O5TldL31pk+UvdhnQKRHLX31cqx5vuUmiwq4mlbBx+rY8B/xngP2bzx/oYXdy1I9fZbWWAQ6FwJBav1sSWL0l7snRdOsy5ASeMnYollEw1IATwYeUv8g3PzrryZuru+7gu/Ku9w8d5jbFyI6Up4KLwjs/gZNuqQ5dif7utiQYbVe4L0TPWOmuLA25JJRZaF` |
+> | uksouth | rsa-sha2-512 | `Csnl8SFblkdpVVsJC1jNVSyc2eDWdCBVQj9t6J3KHvw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDIwNEfrP6Httmm5GoxwprQ57AyD6b3EOVe5pTGQWIOzxnrIw2KnDPL07KNa33xZOmtXro5PYyhr5eNXUkFiQMEe+RblilZSNAvc4MHbp2TVD0L9N7Pdy2SetoF4m5BCXdC48kZntqgkpzXoDbFiaAVln5zQCHB5fOuBPS1id8+k3zqG0o+K0MHb6qcbYV8gdQeOn/PlJzKE4M0Ie8na3aWHdGvfJjDdK/hNN0J+eUK8qIb9KCJkSMDj/l3rnue9L8XgeKKA2Pkvh3nch4VBXCcCsDVhgSf+aoiJ0Fy8GVOTk2s7QDMzD9y37D9V2OPl66q4pjFGOfK0mJmrgqxWNy5` |
+> | uksouth | ecdsa-sha2-nistp256 | `weMVzOmQnlMdMp5XBoU9SdN5meBbx/8nvA8dB45w8Ck=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEnBllEm4/HsTP+ZMhlc8YnSAYWF23tibZDqGxf0yBRTU/ncuaavuQdIJ5TcJb0NcXG7skEmq3StwHT0FPMWN8Y=` |
+> | uksouth | ecdsa-sha2-nistp384 | `HpsZ8zoOCCsUbpD3nAOtxpuKIvn0L8KGyg1KMLuMUqU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGd/672brwX1kOhH31ZTdBRj+bcEmemcdmTEe0J88cJ3RRQy7nDFs25UrnR+h3P0ov9Uq24EJQS8auxRgNCUJ3i3ZH9QjcwX/MDRFPnUrNosH8NkcPmJ/pezVeMJLqs3Qw==` |
+> | australiasoutheast | rsa-sha2-256 | `YafIMxux7NtoKCrjQ2eDxaoRKHBzpanatwsYbRhrDKQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7omLu37G00gLoGvrPOJXpRcI5GTszUSldKjrARq0WeJIXEdekaSTz5qv2kSN/JaBDJiO9E4AJFI9q5AvchdmMVs4I59EIJ0bsR9lK+9eRP4071EEc8pb3u/EPFaZQ8plKkvINJrdK6p0R2FhlFxa7wrRlKybenF1v7aU3Vw79RQYYXaZifiNrIQFB8XQy3QQj2DcWoEEOjbOgZir9XzPBvmeR8LLEWPTLYangYd3TsQtybDpP6acpOKaGYDEyXiA8Lxv8O276LUBny6katPrNvfGZScxn6vbTEZyog+By8vyXMWKEbC1Qc/ecBBk5obFzbUnq3RP1VXLJspo99cex` |
+> | australiasoutheast | rsa-sha2-512 | `FpFCo9sNUkdnD1kOSkWDIfnasYhExvRr1pJlE631QrA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDmuW2VZAhR6IoIOr32WnLlsr/rt3y4bPFpFcNhXaLifCenkflj9BufX3lk5aEXadcemfKlUJJdwBTvTt1j4+X3P2ecCZX1/GSsRKSTuiivuOgkPxk3UlfggkgN9flE9EdUxHi/jN/OQ9CjGtHxxk72NJSMNAjvIe0Ixs7TfqqyEytYAcirYcSGcc0r70juiiWozflXlt+bS7mXvkxpqMjjIivX+wTAizzzJRaC6WcRbjQAkL2GP6UCFfBI1o9NBfXbz+qvs1KTmNA0ugRQ7g6MdiNOePHrvoF1JgTlCxEjy+/IqPiC8nNQUVCW6/gcATQoDQn0n9Lwm1ekycS35xEh` |
+> | australiasoutheast | ecdsa-sha2-nistp256 | `4xc49pnNg4t/tr91pdtbZLDkqzQVCguwyUc16ACuYTc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCdswzJ+/Bw5ia/wRMaa0llZOjlz67MyZXkq7Ye38XMSHbS4k/GwM0AzdX+qFEwR00lxZCmpHH28SS+RyCdIzO0=` |
+> | australiasoutheast | ecdsa-sha2-nistp384 | `DEyjMyyAYkegwLtMBROR/8STr1kNoQzEV+EZbAMhb1s=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJRZx6caZTXnbTW/zRzKfoKC4LGzvD5fnr2p8yGWxDq27CjUEMxkctWcT6XS3AstF2MLMTOFp/UkkSr8eP0vXI8g99YDeFGXtoBiFSIgYF2Aiu/kpfEu3feiIUl3SVzxKw==` |
+> | frances | rsa-sha2-256 | `aywTR4RYJBQrwWsiALXc1lDDHpJ34jIEnq3DQhYny0g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDELY4UcRAMkJpEBZT40Oh5TIxI6o6Enmlv+KxWkkcyFcNJlFtaF2Hl+afWlysrg+lB5Un4XpveWY64pl7a/dSju7aPfujcXowELIPqFSoWW7xQ+jkfJdyI0daa0l2h2oNCPqWnx8+04Vx5kcb2GktlNG4RMLx7Q6COJgQ3pGHtyfZ5fnmrWNBsuv4mvsXp0u1KGWX6s2LZtO+BpKE6DegSNLMVapAZ0ju8pagqtm6aeWEtqmkAvsI0U31qhL25FQX4DzjIbGzXd6I25AJcSXcpnwQefsaOwO/ztvIKeIf3i/h2rXdigXV1wyhvIdKm1uWwj6ph4XvOiHMZhsRUe02B` |
+> | frances | rsa-sha2-512 | `+y5oZsLMVG6kfdlHltp475WoKuqhFbTZnvY0KvLyOpA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDmsS9WimMMG95CMXFZiStR/peQU1VA6dklMbGmYwLqpxLNxxsaQuQi6NpyU6/TS8C3CX0832v1uutW38IfQGrQfcTGdAz+GjKverzaSXqZGgTMh/JSj06rxreSKvRjYae596aPdxX5P+9YVuTEeTMSdzeklpxaElPfOoZ7Ba5A2iCnB/5l/piHiN8qlXBPmfGLdZrTUFtgRkE4Ie4zaoWo19611XgUDMDX4N4be/qilb95cUBE73ceXwdVKJ3QVQinZgbwWFUq0fMlyd8ZNb9XN6bwXH7K6cLS6HYGgG6uJhkYSAqpAZK2pOFn3MCh8gw2BkM/Rg+1ahqPNAzGPVz9` |
+> | frances | ecdsa-sha2-nistp256 | `LHWlPtDIQAHBlMkOagvMJUO0Wr41aGgM+B/zjsDXCl0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHdj2SxQdvNbizz8DPcRSZHLyw5dOtQbzNgjedSmFwOqiRuZ2Vzu88m2v5achBwIj9gp0Ga14T7YMGyAm04OOA0=` |
+> | frances | ecdsa-sha2-nistp384 | `btqtCD/hJWVahHWz/qftHV3B+ezJPY1I3JEI/WpgOuQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB2rbgGSTtFMciVSpWMvmGGTu8p1vGYfS2nlm+5pAM85A4Em1mYlgHfVZx+SdG5FSYcsX4vTWt4Yw2OnDmxV3W0ycrKBs4Bx3ASX4rx3oZezVafHsUUV0ErM+LmdmKfH8g==` |
+> | uswest2 | rsa-sha2-256 | `ktnBebdoHk7eDo2tvJuv36XnMtfauhvF/r0uSo6DBfk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDoskHzExtM+YSXGK6cBgmnLlXsZLWXkEexPKC7wHdt0kSqkIk9F31wD+2LefZzaTGfAmY5/EWrOsyBJvIgOoksH+ZPMnE9+TOWqy6vsS+Ml/ITvUkWajS1bKDPDSoIrCM1rQ9PlbgMQFg4o0FfyxLVCP7hcgvHO+aycOxkiDqtvwANvIn2Qwt7xwpIv1Mnc4OpcBSxbigb7ISlrvR9XWivE/piWjXS3IEYkGv7VitAlmWEoTt9L7K94bYol2nCXSXJ33X6xVVwVNpdxVtnUQBIRinN+vkOccgG0jvWtWPtrMjyDg/lyvr6lBdO/CQy4VO4VrIBuL6pjsS8KfIfTxKd` |
+> | uswest2 | rsa-sha2-512 | `i8v3Xxh/phaa5EyZEr5NM4nTSC/Rz7Nz0KJLdvAL0Ls=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOOo5f0ACypThvoDEokPfzGJUxbkyMoQKca9AgEb3YkQ/lsYCfLtfGxMr2FTOGQyx5wfhOrN0B2SpI4DBgF3B0YSLK0omZRY7fpVPspWWHrsbTOJm/Fn7bWCM+p63xurZ6RUPCA6J1gXd3xbdW7WQXLGBJZ6fjG7PbqphIOfFtwcs/JvjhjhvleHrXOtfGw9b4Jr8W1ldtgKslGCU1mnUhOWWXUi+AhwGFTI0G/AShlpX8ywulk2R+fxet3SNGNQmjydnNkcsrBI/EMytO1kwoJB3KmLHEeijaQzK7iJxRDZEHlHWos6G7jwaGPI4rV5/S1N+xnG+OhCDYAUbunp5R` |
+> | uswest2 | ecdsa-sha2-nistp256 | `rt5kaA0neIFDIWTP2MjWk9cOSapzEyafirEgPGt+9HM=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKEKP+1QZf3GfEvkNZtzoKr05iAwGq+yPhUsVdyA7uKnwvTwZAi7NBr4hMkGIIdgQlGrMNNXKS0V+rhMNI1sH48=` |
+> | uswest2 | ecdsa-sha2-nistp384 | `g0vDKd4G5MKnxWewbnCYahCv1lZgfnZeEXfPAhv+trs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1+/Qu9Y1BqqV3seN0+0ITYCueuv0TFAnfG9z1Io8VYjmxLvdmaDvTi9kJ0ShjFJRKjbCfYKNekqYZDW4gBfmE9EyvMPI6VXPVLNY3TQ/z+Y7qO/oa28cSirW9fhp7vbA==` |
+> | indiasouth | rsa-sha2-256 | `5gFLJvQvQodZxKBi3DnGywpf9dliWguiMTqcgkTmtu8=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDlxVnaYnmg1cK+g/PI1jB1fgQQJiX39ZmfBss3mSW3kUxP3KWhm7lHBTkrbnfhVHnGpP6GcGFy09YBQa6UiyVpD8p8APtx0j9Jp8m3yhhgqOIjup0C7crl49NqMVryOZmCLOvA7KTyTxxV37GpRI+ffqQ8LOO+anWVWVaJlVCYBMct/OVhA7ePXblcbJg5eu5JjUiWW+cPdVqAqWojNHZzzprCFEBTCvYaZtzBx4kFGiipPmJSN6yvBPEfnA7Lzr/T9iXV/XkmI1txuJRBasoQMt+4jCZG25sCCN8y4iuUJCioUELr//TWaDyTsQAR4MbRW+L/GSIM9VUY4Uc+Impp` |
+> | indiasouth | rsa-sha2-512 | `T4mrHCEHbFNAQSng//m0Viu/hXfi11JMnyA0PqAuTtg=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCz9tQa7D4dyrULCLH75yKwH27AQMRNWFUqgUQQXYHR1MYegLf7JEmFn126bxgEHPRO0bNwBM9S626Gcr1R1uDI/luL6uvG0Q57k+Pmv7HNQtv12J3fAuxuhSPcppE5IE5QR94Qgd1RzGXv954TK1Z+kCXHyLA583XTQ4btOEwqUo/16tSCqaoTSdyNp17q8BrOCPaTWMqT774lSGELIDc6RaGKHRu/Qa+F5FRMswdZt5YJDEKtlKdvbyIiSfIP2GZGhWBiSW2D6xpnzSjstR3LfRfFek/ryGkDPu5c5HNaVJwc1fatP6ggAbhVCcyDgWiCCpEBICV2wnPpGfDUdbRh` |
+> | indiasouth | ecdsa-sha2-nistp256 | `7PQhzR5S6sEFYkn2s3GxK6k8bwHgAy0000zb07YvI44=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLgZw/ouE23XQnzO8bBPSCJp/KR+N/xfuJS5QtWU/PzlNLmSYS20b65GRP6ThwZdaigMhwHOEc8twpJ7aA7LBu0=` |
+> | indiasouth | ecdsa-sha2-nistp384 | `sXR2nhTTNof58ne5K+Xjm9Pu8miEbKJn4Bo9NYoqQs4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLwbzUI8q9f5YTLIs6ddRTPlHdb35xrbsJeOQII/nEXhlNjzpdL9XnDJjQunQL2vg6XND1pfp3TNBJ9LF3mud442LbpwSt9B7EZD8tQ5u0+2NeNjn8JnCu6/tdvS+xoNiA==` |
+> | japanwest | rsa-sha2-256 | `DRVsSje7BbYlVJCfXqLzIzncbVU4/ETFGzdxFwocl8E=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDl/rlTgQpomq4FmJKSR2fjgAklV818RcjR/e/C1VUJVpbntJoWUlBhKYDFPTVQaHXDTK5HyJU5APsdy6CJo8ia32qc2E/573LDNk4dgFFrh+KFRiD+ULt3IH15i1DieVw61MAVOvzh+DmTJHPLaTufXoQ62YACm3yC1st1kXv4bawfXs0ssmeqrBcCOQvMvW/DexnnGXO6QXYTcjUktNrO2h2dd355n5FP4fcsBEdGmfT79HYPM6ZoqkItRZEO6Nel65KxtenAwQub8SK3iJgFyJwd3zIH4OCHp3z4tcGXw5yNAX15dJMSnls0zvzhx0f4ThwfgB4t1g9jVb47Ig7B` |
+> | japanwest | rsa-sha2-512 | `yLl9t2jlkrTVWAxsZ59Wbpq+ZCnwHfdMW8foUmMvGwI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC9zrpnjY7c0dHpE1BMv+sUp+hmvkBl3zPW/uCInYM5SgtViSQqn/DowySeq+2qMEnOpHGZ8DnEjq55PmHEumkYGWUUAs38xVGdvRZk6yU7TxGU42GBz0fT/sdungPHLQ2WvqTZYOFqBeulRaWsSBgovrOnQEa2bNTejK9m353/dmAtKHfu68zVT+XYADrT3PY5KZ1tpKJA0ZO9/ScUvXEAYs20WSYRZBcNDoSC9xz4K8hv9/6w3O3k0LyBKMFM5ZW8WVDfpZx1X0GBCypqS+RNZuVvx81h3nxVAZSx80CygYcV4UHml7wtnWDYEIBSyVRsJWVNGBlQrQ4voNdoTrk5` |
+> | japanwest | ecdsa-sha2-nistp256 | `VYWgC6A4ol1B7MolIeKuF2zhhgdzQAoGBj5WgnQj9XE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFLIuhTo1et/bNvYUj+sanWnQEqiqs9ArKANIWG19F9Db6HVMtX8Y4i7qX6eFxXhZL17YB2cEpReNSEjs+DcEw4=` |
+> | japanwest | ecdsa-sha2-nistp384 | `+gvZrOQRq3lVOUqDqgsSawKvj6v/IWraGInqvqOmC6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD3ZiyS1p7F1xdf6sJ3ebarzA5DbQl1HazzLUJCqnrA84U8yliLvPolUQJw4aYORIb5pMgijsN3v9l0spRYwjGHxbJZY/V6tmcaGbNPekJWzgXA1DY35EbFYJTkxh/Yezw==` |
+> | norwaye | rsa-sha2-256 | `vmcel/XXoNut7YsRo79fP5WAKYhTQUOrUcwnbasj/fQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC4Y1b2Bomv8tc/JwPgW0jR5YQhF031XOk4G0l3FOdZWY31L8fLTW6rOaJdizOnWCvMwYQK39tyHe6deN9TZESobh0kVVuCWaZNI6NUR0PSHi0OfbUkuV0gm/nwtwJkH5G9QbtiJ5miNb4Ys3+467/7JkqFZmqN6vBLhL9RVInO00LPYkUGtGfTv+/hmsPDGzSAujNDCFybti4c+wMgkrIH6/uqenGfA1zW3AjBYN2bBBDZopzysLHNJYQi3nQHQSiD4Mdl7IGZtJQeC/tH9CKH5R4U4jdPN1TmvNMuaBR/Etw4+v0vrDALG1aTmWJ7kJiBXEZKoWq/vWRfLzhxd4oB` |
+> | norwaye | rsa-sha2-512 | `JZPRhXmx44tEnXp+wPvexDys1tSYq9EDkolj9k6nRCM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC11j19LeEqRzJOs8sWeNarue+bknE3vvkvSnsewApVMQH35t9kpqRGMSr6RTU2QCYDiQTCKI2vzLSTLGoizoPBiY/7lvdylDRCbeEpuFUkgvKZrapkJ6JqKOySPpFNhqCs27rdY5dJ2C7/nmTL/kvcyhXFXZT2lJaOIdRSKv/1Q3DAWQ9icNGbDokQDubF5etlkquqTV6r/ioFuh7hdKE+fJooyHa2oYTD+j5cNDKBxrJWBEidOe2HwplR4lYPggUcVtGu9aoSVIMmswztFF6+MNIdOT1kdvHewKLjkVB1hbIHl/E+uexsyMGcCg5fPy7dDIipFi1aED+6R7CnAynJ` |
+> | norwaye | ecdsa-sha2-nistp256 | `mE43kdFMTV2ioIOQxwwHD7z+VvI4lvLCYW8ZRDtWCxI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDWP6vJCbOhnvdmr7gPe8awR/E+Bx+c8fhjeFLRwp6/0xvhcywT9a1AFp7FdAhkVahNKuNMU1dZ0WTbOEhEGvdg=` |
+> | norwaye | ecdsa-sha2-nistp384 | `cKF2asIQufOuV0C/wau4exb9ioVTrGUJjJDWfj+fcxg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDGb8w8jVrPU1n68/hz9lblILow6YA9SPOYh5r9ClAW0VdaVvCIR/9cvQCHljOMJQbWwfQOcBXUQkO5yI4kgAN3oCTwLpFYcCNEK6RVug9Q5ULQh1MRcGCy3IcUcmvnYdg==` |
+> | francec | rsa-sha2-256 | `zYLnY1rtM2sgP5vwYCtaU8v2isldoWWcR8eMmQSQ9KQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCmdsufvzqydsoecjXzxxL9AqnnRNCjlIRPRGohdspT9AfApKA9ZmoJUPY8461hD9qzsd7ps8RSIOkbGzgNfDUU9+ekEZLnhvrc7sSS9bikWyKmGtjDdr3PrPSZ/4zePAlYwDzRqtlWa/GKzXQrnP/h9SU4/3pj21gyUssOu2Mpr6zdPk59lO/n/w2JRTVVmkRghCmEVaWV25qmIEslWmbgI3WB5ysKfXZp79YRuByVZHZpuoQSBbU0s7Kjh3VRX8+ZoUnBuq7HKnIPwt+YzSxHx7ePHR+Ny4EEwU7NFzyfVYiUZflBK+Sf8e1cHnwADjv/qu/nhSinf3JcyQDG1lN` |
+> | francec | rsa-sha2-512 | `ixum/Dragma5DAMBzA/c5/MY02FjUBD/gI8+XQDzJvc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDjTJ9EvMFWicBCcmYF0zO2GaWZJXLc7F5QrvFv6Nm/6pV72YrRmSdiY9znZowNK0NvwnucKjjQj0RkJOlwVEnsq7OVS+RqGA35vN6u6c0iGl4q2Jp+XLRm8nazC1B5uLVurVzYCH0SOl1vkkeXTqMOAZQlhj9e7RiFibDdv8toxU3Fl87KtexFYeSm3kHBVBJHoo5sD2CdeCv5/+nw9/vRQVhFKy2DyLaxtS+l2b0QXUqh6Op7KzjaMr3hd168yCaqRjtm8Jtth/Nzp+519H7tT0c0M+pdAeB7CQ9PAUqieXZJK+IvycM5gfi0TnmSoGRG8TPMGHMFQlcmr3K1eZ8h` |
+> | francec | ecdsa-sha2-nistp256 | `N61PH8SVCAXOq7Z7eIV4mRnotafmNoPrpc+TaLxtPX4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK3UBFa/Ke9y3aLs1q1b8gh/tXiS7lpOTzUiDFpXdbq00/V9Ag+v2z5MIaicFdum9Ih4fls1Mg07Ert16bi5M8E=` |
+> | francec | ecdsa-sha2-nistp384 | `/CkQnHA57ehNeC9ZHkTyvVr8yVyl/P1dau2AwCg579k=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG/x6qX+DRtmxOoMZwe7d7ZckHyeLkBWxB7SNH6Wnw2tXvtNekI9d9LGl1DaSmiZLJnawtX+MPj64S31v8AhZcVle9OPVIvH5im3IcoPSKQ6TIfZ26e2WegwJxuc1CjZZg==` |
+> | uswest3 | rsa-sha2-256 | `pOKzaf3mrTJhfdR/9dbodyNza30TpQrYRFwKAndeaMo=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC0KEDBaFSLsI28jdc854Rq6AL9Ku8g8L+OWQfWvb1ooBChMMd/oqVvFF9hkLzJ8nFPQw7+esVKys5uFwRTpBNuobF/RVtY0zLsNd+jkPxoUhs7Yl0hI2XXAPdp3uCsID56O+OrB7XbOsPCrJ2aXfiaRheRQg84/92c357uQ/epsva8XCMjIIGOAyEL6d4mnCNJ2Y0mXPJT1lfswoC8i2GSUKdJZhTLCe9zVDvTCTWuZJSH3A8nM3RVtnNgMXfNjh2blwW9YFv5BrMOXA205fahuDcPjwvXo9OMfEneDsrODmiEGYzbYLby/5/KPzz5OVn7BDJma6HL0z07i3PmEzXN` |
+> | uswest3 | rsa-sha2-512 | `KKcoWCeuJeepexnJCxoFqKJM88XrpsPKavXOoNFEGuY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNzhiVgDjCIarGEjKgmSxRh4vWjV6PxFbNK3cD0M4jWGlxPx/otJNEXCMee0hW29b7bwo2+aiyv3AEt7JYTeM/G9SHmenU6MTpqD/lC/LABtqTB7EV9FIFkc8MbbOvEkdTnRJw1d09MTqqwbkR9wq297AWggSzCuPDqMq+268UzsthMzODRVqW3yTr3M6vhlBCPfN5ptcvYwqRaa7Yhe4bdRZ+xYB5I2+ZMkalfn7SQiySSgAGjUJxrxK+LnJKSi32CfqTU8KjWNjCc40eAqexLFjg6AN9BtC0+ZYcD2KQmeqJ8oRCWw9r4CsaduSmcjc7XD75RKGdArjYzjeiVSlt` |
+> | uswest3 | ecdsa-sha2-nistp256 | `j4NlZP/wOXKnM3MkNEcTksqwBdF0Z46+rdi2Ic1Oj54=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBETvvRvehAQ2Ol0FfTt649/4Xsd0DQQ7vyZ666B92wRhvyziGIrOhy8klXHcijmRYRz3EjTHyXHZ4W8kcSKB4Lo=` |
+> | uswest3 | ecdsa-sha2-nistp384 | `DkJet/6Pm6EXfpz2Ut6aahJ94OvjG3R7+dlK0H4O1ts=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEu+HpgDp0a02miiJjD5qVcMcjWiZg5iIExECqD/KQVkfyraJ3WZ8P28JwB+IYlEGa2SHQxScDjG2t3iOSuU9BtpA0KK5PGtu3ZxhN1UmZbQgz6ANov7/+WHChg7/lhK0Q==` |
+> | indiacentral | rsa-sha2-256 | `OcX6wPaofbb+UG/lLYr30CPhntKUXyTW7PUAhC6iDN0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDWuKbOJR8ZZqhE3k2HMBWO99rNWoHwUa+PVWPQHyCELyLR19hfrygNL9uugMQKTvPYPWx8VM6PrQBrvioifktc/HMNRsoOxlBifQETfRgBseXcIWorNlslfFhBnSn6ZGn8q4XICGgZ1hWUj9z1PUmcM2LZDjJS33LLzd23uIdLePizAliJAzlPyea8JNpCVjfmwnNwtuxXc48uAUXlmX+e0ZXRwuEGble8c1PbrWWTWU4xhWNJ+MInyvIGv9s6cGN7+fxAFaUAJS0wNEa3poCnhyNxrckvaqiI3WhPQ8Hefy2DpXTY03mdxCz8PZPcLWdQU3H5nmuAc/pypnc0Avax` |
+> | indiacentral | rsa-sha2-512 | `HSgc5u8s+QILdyBq6wGJkxRcK5nxj81gxvpkR5bcH6k=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDSO/R/yw8q33yLkSHOw0Bi2WKDWQPrll8skh3hdRUB6wtw9dvtQFEV3suvFJsTVvAbnGBe2Fjgi69X0zkIygxg74XuQsx7GZO6gyaKDwljyanFoCzer+OzFSpDcVJ0zOfhY99uHeYT6k4leb2ngABqjiqieDHMZ9JQX12KOK3cAks/oytrNUo9krGb1Nyv5BYu4dWXHmuFgtigDd043khaARfdWkg88lKgb6G9k+vQTGKphLnFMqhada/aP8GsaA2Dq5d/LH5P5CTU7MRPA8TuuyLOtbv8FtQ2TyaAXhYCplCQELtto1yXZ79WVjQE/uKuX8xK5M2rfOH+H5ck/Rxl` |
+> | indiacentral | ecdsa-sha2-nistp256 | `zBKGtf770MPVvxgqLl/4pJinGPJDlvh/mM963AwH6rs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBjHx8+PF0VBspl6l9Xa3BGyJwSx2eDX0qTDnhrdayzHMWsHGX3vz0wr7oMeBVdQ26dOckExa6iPrEDSt8foV1M=` |
+> | indiacentral | ecdsa-sha2-nistp384 | `PzKXWvO/DR/KnUElcVWIwSdabp6ZJqce37DJZzNl3Sk=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJwEy1f+GYN4rxhlCAkXGgqAU1S7ssI4JPEJs8z1mcs8dDAAVy1cqdsir9yZ9RSZzOz/BOIubZsG137G2+po0Pz0FfJ0jZVGzlx1UHXu7OMuKQ7d2+1TkPpBYFy6PiCa3w==` |
+> | koreasouth | rsa-sha2-256 | `J1W5chMr9yRceU2fqpywvhEQLG7jC6avayPoqUDQTXHtB2oTlQy2rQB` |
+> | koreasouth | rsa-sha2-512 | `sHzKpDvhndbXaRAfJUskmpCCB3HgPbsDFI/9HFrSi3U=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCfGUmJIogHgbhxjEunkOALMjG77m+jgZqujO3MwTIQxQNd/mDeNDQaWDBVb2FJrw15TD3uvkctztGn2ear3lLOfPFt0NjYAaZ8u5g9JYCtdZUTo5CETQFU/sfbu2P2RJ/vIucMMg8HuuuIMO059+etsDZ5dZHu9cySfwbz/XtGA0jDaTlWG0ZDT+evOE0KmFABjgMFWyPnupzmSEXAjzlD/muGeeUhtXUB8F6HVUCXLz7ffzgYiYj+1OB0eZlG/cF8+aW7MOpnWvfpBxwm16soSE1gmZnXhPrz/KXlqPmEhgIhq7Cwk54r3rgfg/wCqFw+1JcbNOv5d4levu/aA7pt` |
+> | koreasouth | ecdsa-sha2-nistp256 | `XM5xNHAWcYsC5WxEMUMIFCoNJU/g90kjk/rfLdqK7aw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHTHpO85vgsI6/SEJWgEhP5VDTikLrNrSi6myIqoJvRx6x8+doTPkH87L/bOe/pTU/rCgkuPi1kXTC7iUTSzZYk=` |
+> | koreasouth | ecdsa-sha2-nistp384 | `6T8uMI9gcG3HtjYUYqNNxi99ksghHvsDitIYpdQ4BL4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAgPPIDWZqvB/kuIguFnmCws7F4vzb6QG7pqSG/L9E1VfhlJBeKfngQwyUJxzS2tCSwXlto/1/W302g0HQSIzCtsR4vSbx827Hu2pGMGECPJmNrN3g82P8M0zz7y3dSJPA==` |
+> | ussouth | rsa-sha2-256 | `n7P8NrxY8pWNSaNIh8tSZxi9rXi11g3JuzWZF93Ws4g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD4PgB8PxPPpGfvrIUGSiiDFIfkRk2/u1DmhcoGoIfW+/KR8KC2JA0kY4Yj+AceGnDUiBbSPz7lcmy2eGATfCCL6fC5swgJoDoYDJiJoaKACuVA0Nk2y0OeO58kS6qVHGX/XHzx8+IkfGdlhUUttcga7RNeppT5iqSz49q9x6Ly42yrV3DIAkOgh+f9SsMMfR6dQQmvWN3HYDOtiO2DvVN+ZenViQVcsynspF3z4ysk53ZYw5YcLhZu8JFw4u0F6QJAznR6TfNqIlhSjR1ub8DiHvIwrmDNf8TgG5kPVGhIcibYPf+y0B0M8nr9OKCxZzUTlXX4Xcnx+VOQ1e1qGHvV` |
+> | ussouth | rsa-sha2-512 | `B2oOtHpXzwezblrKxGcNBc3QJLQG/TiVgOjnmNorqkA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC+LJA8W3BcwITzJv6CAkx/0HBPdy3LjKPK2NQgV9mxSMw8mhz4Ere59u2vRsVFcdW6iAeGrH66VF6mJSCgUKiYnyZAfTp1O6p6DnUg4tktMQFo4BEwSz1S5SGDuRhpWvoKjzvljESf/vZBqgms7nMRWe3MGuvlUWBqB+2CnJ7bxhvGQCdBTQeoPO9EZKYKi/fPlcxBmLFGcZnRRpB6nu/Cxhhj1aHLJdjqCd+4ahtjBHeFrPxeQv9gTJ1B+EipJZu7WgPZOTI8iZaIcnCbhuGOy0iOFXeuexC9/ptHDW9UEgKVLyZ4UIPJkSLFVgW5NRujWyZ/thc5+EfHY9Db3UAl` |
+> | ussouth | ecdsa-sha2-nistp256 | `Wg9hTlPmrRH9aC9lTSf8hGFqa85AnW3jqvSXjmHAdg4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJnEz4iwyq7aaBNKiABce+CsVIUfiw9Jw3pp6pGbL6cUaJs9mEVg1RMLHgPg2I+7XV0doisYhYb/XtufxzGCe94=` |
+> | ussouth | ecdsa-sha2-nistp384 | `rgRhPelmxAix6TBDahmGqXnKjdImdI3MnDPVc6qhF2o=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKXGKbWfVe18G9gbCxFQiBGkGYM9LktSPKkRI18WRQ50qyuxVRXRDoV+iIEJyCQTpuFTPprQ6glQYeF+ztEb4MZaXpVrcs1/Og191dcEtty3UWuJBCrv/t1kezlwBWKyXg==` |
+> | koreacentral | rsa-sha2-256 | `Ek+yOmuAfsZhTF4w7ToRcWdOevgZPYXCxLiM10q44oA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCyUTae7QtAd3lmH+4lKJNEBNWnPUB+PELE9f4us5GxP8rGYRar1v3ZGXiP2gzPF1km1cGNrPvBChlwFMjW+O5HavIFYugVIe8NzfI7S3t+kgTylXegSo1cWen18MAZe6Q5vxqqFzfs+ZChWEa/P37lTXVkLVOYCe5NJUPm8Zvip7DHB2vk25Fk3HMHG9M50KNj1Hp4etPI7yiLNLNCh5V410mf3xhZChMUrH6PMl/A+sVv68ulcVeIZ68eMuQktxz1ULohBdSExZGmknVrwfF/fLTKWxHlVBjB3yDlLIJO3nTFKaQ4RzPa/0If+FcbY+hIdzSjIAK6W3fRlbVuWHYR` |
+> | koreacentral | rsa-sha2-512 | `KAji7Q8E2lT3+lSe7h74L6rfPnLEfGVzYZ/xyM96//0=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDxZYb5eIWhBmWSwNU6G9FFDRgqlZjYYorMSXJ4swHm4YYHKGZTf4JOE5d87MNtkVgKe2942TQxA1t2TaENlmNejeVG5QZ4to+nVnwsFov2iqAYChoI6GlhpwzyPsO0RkqLB8mvhoKMel1sNGfmxjxYVFt4OSPHDzNIU4XjGfW24YURx/xRkLU1M9zBNADDx+41EMNRT7aBXrKW9MzsxkfCM3bYwjdBbI2Yi2nUqARm+e/sBPLTqVfjuMFvosacYc43MqepFSQoZE5snwYxkLJzltAbxNUysJs277isnGgezh9p5T2MCxtCERU0lvp7M52hd1p75QEtNrdadfDprzT9` |
+> | koreacentral | ecdsa-sha2-nistp256 | `XjVSEyGlBtkONdvdw11tA0X1nKpw5nlCvN/0vXEy1Gc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYiomaLB3ROxZvdfqiilpZ+2XJiPDeIIv4/fnQRZxnCBCFrUm7ATB6bMBSUTd00WfMhnOGj4hKRGFjkE+7SPy4=` |
+> | koreacentral | ecdsa-sha2-nistp384 | `p/jQTk9VsbsKZYk09opQVQi3HzvJ/GOKjALEMYmCRHw=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN3NA7U4ZC576ibkC/ACWl6/jHvQixx+C6i2gUeYxp7Tq6k4YLn7Gr1GNr+XIitT6Jy5lNgTkTqmuLTOp7Bx9rGIw9Or37EGf7keUM42Urtd+9xF1dyVKbBw0pIuUSSy+w==` |
+> | asiasoutheast | rsa-sha2-256 | `f0cyRMVxUgtpsa9J6pwAMysk2MY/sybo5ioPjhy9LZk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDWPK6PAGMTdzNkwKZt+A3Dhbnete6jyLLboOXWdv/QdhvjR2pNCMhGuWUxadaiLUxzZM7IvugSLGexQlZi5aCJ06DpaVYqZk/Q8l+QUydp9TfNg/kP+0OJXCJ6XdsVggboDIfrEN8ku4nfasD4QTo2tnmqZhmbIDUr38SP16PsH2bQAi2lZKg4DfWgnSFyj5sbMSDLljBEY6JQkLGiPcbqlYEN4kjB5mudE9c/ts6Jn1fhizBwJY/pE3kOydq8dCMXYFMZ6NafPacCi7Pe5zcTKfi/daioVlSXQhWK3jNzCVENonF2xWSPH+1T5F2IOV0wb0HL2l8d02x5Bw2Su4aF` |
+> | asiasoutheast | rsa-sha2-512 | `vh8Uh40NCD3iHVh5KEcURUZrT3hictlF9pMDEoK5Rxk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCdL+E/W2RpmJiWMRg5EtMs0AE7BF2Qb5jnXXaIbwqr5/BGuUPLm43eVJJt5R0BmEJe2lYfYLAzinC9MhsxKSTHIt5u8QleyIAxI759M3DWZwFSKngjsHFRe/SvZOzc7gvtR7osdnVaXCTXY5NccLT34gDybEbjlmp+SEvSZZmXyy2wmUR3O022euBifKN0t9Tk1mkLYhbfRySQi0ZADWazjd7loM9ZHArVe8y9oDrs7QYX4eHIVRbgtsBbkR3g9zP3VWVMERFyi6cU0Dyvue8DCx9YzNsdmKjkB2dvYTMVcUkad81pbO81jpLb1wL25WPHIPHqTOLZhdn9JxLn245Z` |
+> | asiasoutheast | ecdsa-sha2-nistp256 | `q7OsE02p9SZ6E63b+Mxri1wbI5WfkdWcIJgAP2+WTg8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEbvjkwSA0RQuT2nQf8ABKc21s/kcC/7I5431oNEwQPZQ8S18RAKktv6ti19Ju8op6NOZZ3Up9lOn3iybxHgy+s=` |
+> | asiasoutheast | ecdsa-sha2-nistp384 | `HpneuSwbRG7eiqHGEAkSXF0HtjvccoT3OIgeQbPDzoE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGAMUN+0oyuXuf6rkS+eopeoISA2US3UrgAovMwoqAeYSPoHKy9n/WKczsHPy/G+FKsXM4VlMHtNhEAxYwjtueF0Sb2GRZFzngeXMfVZPVL5Twph/pT6ZJnUD8iloW0Mw==` |
+> | australiaeast | rsa-sha2-256 | `MrPZLU8llsG+SzgBN8eH702H4zuynyYgqqQLQmWGDEs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDsRwHZ+DKINZZNP0GO6l7mFIIgTRnJy7ikg07h54iuk+KStoB2Cwppj+ulPs0NiR2RgAsP5nchWRZQysjsfYDui8wha6JEXKvWPlQ90rEYEs96gpUcbVQesgfH8ILXK06Kn1xY/4CWAHEc5U++66e+pHQulkkFyDXTsRYHsjTk574OiUI1` |
+> | australiaeast | rsa-sha2-512 | `jkDaVBMh+d9CUJq0QtH5LNwCIpc9DuWxetgJsE5vgNc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDFHirQKaYqkcecqdutyMQr1eFwIaIM/h302KjROiocgb4iywMAJkBLXmhJn+sSbagM5tzIk4K4k5LRjAizEIhC26sc2aa7spyvDu7+HMqDmNQ+nRgBgvO7kpxVRcK45ZjFsgZ6+mq9jK/eRnA8wgG3LnM+3zWaNLhWlrcCM0Pdy87Cswev/CEFZu6o6E6PgpBGw0MiPVY8CbdhFoTkT8Nt6tx9VhMTpcA2yzkd3LT7JGdC2I6MvRpuyZH1q+VhW9bC4eUVoVuIHJ81hH0vzzhIci2DKsikz2P4pJT0osg5YE/o9hVJs+4CG5n1MZN/l11K8lVb9Ns7oXYsvVdtR2Jp` |
+> | australiaeast | ecdsa-sha2-nistp256 | `s8NdoxI0mdWchKMMt/oYtnlFNAD8RUDa1a4lO8aPMpQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKG2nz5SnoR5KVYAnBMdt8be1HNIOkiZ5UrHxm4pZpLG3LCuzLEXyWlhTm8rynuM/8rATVB5FZqrDCIrnn8pkw=` |
+> | australiaeast | ecdsa-sha2-nistp384 | `YmeF1kX0R0W/ssqzKCkjoSLh3CciQvtV7iacYpRU2xc=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFJi5nieNPCIxkYS7HKMH2fQgONSy2kGkViQucVhWrTJCEQMVz5peL2JZJFjf2a6zaB2olPaBNEkeuJRHxGyW0luTII9ZXXUoiGQH9l05B41mweVtG6pljHfuKQ4HzoUJA==` |
+> | japaneast | rsa-sha2-256 | `P3w0fZQMpmRcFBtnIQH2R88eWc+fYudlPy7fT5NaQbY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCZucqkz4UicI20DdIyMMeuFs+xUxMytNp7QaqufmA2SgUOoM387jesl27rwvadT6PlJmzFIBCSnFzjWe5xYy3GE59hv4Q3Fp3HMr5twlvAdYc5Ns5BEBEKiU0m88VPIXgsXfoWbF0wzhChx8duxHgG4Cm+F8SOsEw/yvl+Z/d42U9YzliQ1AafNj4siFVcAkoytxKZZgIqIL4VUI322uc93K5OBi9lgBqciFnvLMiVjxTWS/wXtVEjORFqbuTAu/gM4FuKHqKzD1o39hvBenyZF2BjIAfkiE6iYqROd75KaVfZlBSOOIIgrkdhvyj9IfaZFYs3HkLc7XgawYe6JVPR` |
+> | japaneast | rsa-sha2-512 | `4adNtgbPGYD+r/yLQZfuSpkirI9zD5ase01a+G7ppDw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCjHai98wsFv0iy+RPFPxcSv8fvTs3hN/YnuPxesS21tUtf0j5t8BTZiicFg6MLOQJxT4jv5AfwEwlfTqvSj3db6lZaUf/7qs/X9aN1gSoQNnUvALgnQDYGjNYO8frhR7S0/D/WggQo2YKMAeNLRScT7Pg/MJaOI12UhoUloCXbTAP1c85hYx0TGKlGWpFjfen/2fwYEKR1vuqaQxj+amRatnG+k18KWsqvHKze8I2D19cn5fp2VkqXzh6zQ1s5AMc5B9qIF48NIec9FAemb9pXzOoYBDFna0qNT4dfeWOQK6tM/Ll10jafaw2P32dGBF8MQKXB2sxtcC0nU4EEtS5d` |
+> | japaneast | ecdsa-sha2-nistp256 | `IFt/j4bH2Jc0UvhUUADfcy3TvesQO+vhVdY4KPBeZY8=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKVq+uiJXmIlYS367Ir9AFq/mL3iliLgUNIWqdLSh7XV+R8UJUz1jpcT1F6sJlCdGovM3R5xW/PrTQOr3DmikyI=` |
+> | japaneast | ecdsa-sha2-nistp384 | `9XLsxg1xqDtoZOsvWZ/m74I8HwdOw9dx7rqbYGZokqA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFh7i1cfUoXeyAgXs+LxFGo7NwrO2vjDwCmONLuPMnwPT+Ujt7xelTlAW72G3aPeG2eoLgr6zkE48VguyhzSSQKy7fSpLkJCKt9s0DZg2w0+Bqs44XuB43ao6ZnxbMelJQ==` |
+> | canadaeast | rsa-sha2-256 | `SRhd9gnvJS630A8VtCYMqc4djz5R8EiG7spwAUCYSJk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQD2nSByIh/NC3ZHsjK3zt7mspcUUXcq9Y/jc9QQsfHXQetOH/fBalf17d5odCwQSyNY5Mm+RWTt+Aae5t8kGm0f+sKVO/4HcBIihNlAnXkf1ah5NoeJ+R0eFxRs6Uz/cJILD4wuJnyDptRk1GFhpAphvBi0fLEnvn6lGJbrfOxuHJSXhjJcxDCbmcTlcWoU1l+1SaYfOzkVBcqelYIimspCmIznMdE2D9vNar77FVaNlx4J9Ew+HQRPSLG1zAh5ae1806B6CHG1+4puuTUFxJR1AO+BuT6fqy1p0V77CrhkBTHs8DNqw9ZYI27fjyTrSW4SixyfcH16DAegeHO+d2YZ` |
+> | canadaeast | rsa-sha2-512 | `60yzcSSOHlubdGkuNPWMXB9j21HqIkIzGdJUv0J57iY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDDmA4meGZwkdDzrgA9jAgcrlglZro0+IVzkLDCo791vsjJ29bTM6UbXVYFoKEkYliXSueL0q92W91IaFH/NhlOdW81Dbjs3jE+CuE4OX5pMisIMKx45QDcYCx3MJxqZrIOkDdS+m8JLs6XwM07LxiTX+6bH5vSwuGwvqg5gpnYfUpN0U5o7Wq7H7UplyUN8vsiDvTux3glXBLAI3ugjn6FC/YVPwMOq7Luwry3kxwEMx4Fnewe6hAlz47lbBHW6l/qmzzu4wfhJC20GqPzMJHD3kjHEGFBHpcmRbyijUUIyd7QBrnfS4J0xPVLftGJsrOOUP7Oq8AAru66/00We501` |
+> | canadaeast | ecdsa-sha2-nistp256 | `YPqDobCavdQ/zGV7FuR/gzYqgUIzWePgERDTQjYEE0M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKlfnJ9/rlO5/YGeGle1K6I6Ctan4Z3cKpGE3W9BPe1ZcSfkXq47u/f6F/nR7WgrC6+NwJHaMkhiBGadEWbuA3Q=` |
+> | canadaeast | ecdsa-sha2-nistp384 | `Y6FK9rWscBkyKN7mgPAEj0jKFXrv4mGNzoaZ9ttc4io=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDS8gYaqmJ8eEjmDF2ET7d2d6WAO7SgBQdTvqt6cUEjp7I11AYATKVN4Isz1hx8qBCWGIjA42X1/jNzk3YR7Bv/hgXO7PgAfDZ41AcT4+cJd0WrAWnxv0xgOvgLKL/8GYQ==` |
+> | canadacentral | rsa-sha2-256 | `KOYkeGvx4egH9DTGgxiONDMvSlkEkoU8cXWnynOEQRE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC7jhZvp5GMrYyA2gYjbQXTC/QoSeDeluBUpji6ndy52KuqBNXelmHIsaEBc69MbixqfoopaFyJshdu7X8maxcRSsdDcnhbCgUO/MnJ+am6yb33v/25qtLToqzJRXb5y86o9/WtyA9DXbJMwwzQFqxIsa1gB` |
+> | canadacentral | rsa-sha2-512 | `tdixmLr++BVpFMpiWyVkr5iAXM4TDmj3jp5EC0x8mrw=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNMZwL0AuF2Uyn4NIK+XdaHuon2jEBwUFSNAXo4JP7WDfmewISzMWqqi1btip/7VwZbxiz98C4NUEcsPNweaw3VdpYiXXXc7NN45cC32uM8yFeV6TqizoerHf+8Hm8avWQOfBv17kvGihob2vx8wZo4HkZg9KacQGvyuUyfUKa9LJI9BnpI2Wo3RPue4kbaV3JKmzxl8sF9i6OTT8Adj6+H7SkluITm105NX32uKBMjipEeMwDSQvkWGwlh2oZwJpL+Tvi2G0hQ/Q/FCQS5MAW9MCwnp0SSPWZaLiA9EDnzFrugFoundyBa0vRjNGZoj+X4+8MVG2fYgOzDED1JSPB` |
+> | canadacentral | ecdsa-sha2-nistp256 | `HhbpllbdxrinWvNsk/OvkowI9nWd9ZRVXXkQmwn2cq4=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBuyYEUpBjzEnYljSwksmHMxl5uoErbC30R8wstMIDLexpjSpdUxty1u2nDE3WY7m4W/doyXVSBYiHUUYhdNFjg=` |
+> | canadacentral | ecdsa-sha2-nistp384 | `EjEadkKaEgaNfdwXtzlqanUbDigzsdzcZJeTzJfQXP0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBORAcpaBXKmSUyCLbAOzghHvH8NKzk0khR0QGHdru0kiFiE16uz9j07aV9AiQQ3PRyRZzsf+dnheD7zuEZAewRiWc54Vg8v8QVi9VUkOHCeSNaYxzaDTcMsKP/A7lR2AOQ==` |
+> | switzerlandn | rsa-sha2-256 | `4cXg5pca9HCvAxDMrE7GdwvUZl5RlaivApaqz8gl7vs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCqqSS6hVSmykLqNCqZntOao0QSS1xG89BiwNaR7uQvz7Y2H+gJiXhgot6wtc4/A5743t7svXZqsCBGPvkpK05JMNZDUy0UTwQ1eI9WAcgFAHqzmazKT1B5/aK0P5IMcK00dVap4jTwxaoQbtc973E5XAiUW1ZRt6YComeoZB6cFVX28MaE6auWOPdEaSg8SlcmWyw73Q9X5SsJkDTW5543tzjJI5hnH03LAvPIs8pIvqxntsKPEeWnyIMHWtc5Vpg8LB7CnAr4C86++hxt3mws7+AOtcjfUu2LmLzG1A34B1yEa/wLqJCz7jWV/Wm21KlTp1VdBk+4qFoVfy2IFeX9` |
+> | switzerlandn | rsa-sha2-512 | `E63lmwPWd5a6K3wJLj4ksx0wPab1lqle2a4kwjXuR4c=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCtSlbkDdzwqHy2C/pAteV2mrkZFpJHAlL05iOrJSFk0dhq8iwsmOmQiF9Xwth6T1n3NVVncAodIN2MyHR7pQTUJu1dmHcikG/JU6wGPVN8law0+3f9aClbqWRV5tdOx1vWQP3uPrppYlT90bWbD0IBmmHnxPJXsXm+7tI1n+P1/bKewG7FvU1yF+gqOXyTXrdb3sEZOD6IYW/PusR44mDl/rV5dFilBvmluHY5155hk1O2HBOWlCiDGBdEIOmB73waUQabqBCicAWfyloGZqB1n8Eay6FksLtRSAUcCSyBSnA81phYdLiLBd9UmiVKPC7gvdBWPztWB+2MeLsXtim9` |
+> | switzerlandn | ecdsa-sha2-nistp256 | `DfyPsw04f2rU6PXeLx8iVRu+hrtSLushETT3zs5Dq7U=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJICveabT6GPfbyaCSeU7D553Q4Rr/IgGjTMC8vMCIUJKUzazeCeS3q46mXL2kwnBLIge9wTzzvP7JSWf+I2Fis=` |
+> | switzerlandn | ecdsa-sha2-nistp384 | `Rw0TLDVU4PqsXbOunR2BZcn2/wqFty6rCgWN4cCD/1Y=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLLhGaEyHYvfVU05lmKV4Rnrl9YiuSSOCXjUaJjJJRhe5ZXbDMHeiC67CAWW3mm/+c5i1hoob/8pHg7vmeC+ve+Ztu/ww12JsC4qy/CG8qIIQvlnDDqnfmOgr0Svw3/Izw==` |
+> | uaec | rsa-sha2-256 | `GW5lrSx75BsjFe4y4vwJFdg454fndPjm4ez2mYsG3zs=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAQiEpj9zkZ8F3iDkDDbZV4A3+1RC/0Un6HZVYv5MCVYKqsVzmyn+7rbseUTkZMO/EqgF8+VWlwSU5C2JOesZtKXAgNzXBSOER3NbiucB5v1b1cC+8Qo4C2+iTHXyJSKxV0bTz55crCfhKO1KTQw3uZoYh6jE9xI1RzCI1J4qP+afZQQhn3H+7q+8kTMhmlQrfKuMWennoWZih+uTe9LPHjlvzwYiXkS2sOIlKtx8eLDJJg2ONl7YKSE4XVq7K33807Gz5sCD/ZV+Bn+NyP2yX14QKcyI97pkrFdcJf2DZi7LdTuEVPx3qK/rHzmzotwe6ne6sfV+FJpowUUTbKgT5` |
+> | uaec | rsa-sha2-512 | `zflL4olL2bga9JCxPA/qfvT2jSYmIfr2RY6GagpUjkE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDAtxSG7lHzGFclWVuErRZZo6VG5uaWy1ikhb67rJSXdTLuSGDU+4Boj4wKxK0EyVKXpdQ3VrIwC4rOEy/lKAlnI2PrkrMjluau2aetlwW0hCBKAcgEOpMeMJJxCvv9EVatmEhvCe0ARyVM539058da9LzoZ2geFnFIbh3t8fNCaJZTNSS5PW1SLkspSqYXUYJWzu8Kx9l3LTzlmJT1DukKLIKj5ZDwuzOIN5m1ePYp4MzfIeBN6ys8df8HqXLoEXE+vOZWOzwkPVWoTsYvwB8j9+FHECAVf4Gcm8sPvRZA/RKDn1dGW2THzVw/VI/F87fFC7stLmZJ1v+a9TTFE649` |
+> | uaec | ecdsa-sha2-nistp256 | `P3KxgoZgjHHxid66gbkRETjPsHUsNiPt5/TFU0Kby6I=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOvHAXCWC9HGJnr5SRW8I1zZWsyHIczEdPpzmafrU8drYmhpRxlD6HlKnY7iXqfq8bOIK063tpVOsPbrVevAKPs=` |
+> | uaec | ecdsa-sha2-nistp384 | `E+jKxd6hnfVIXPQYreABXpZB7tppZnWUxAelvEDh874=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMDLyroqceuIpmDQk/gvHHzFup7NZbyzjXMdGrkDvZDE2H+6XTthCGSVNVmwqdyHE4yGw88jgW1TfWTAZxCxTfXD+xF72iYyBAsejgiyYY/0x9NKM/lrtw8mnRtkZzLyrA==` |
+> | germanyn | rsa-sha2-256 | `ppHnlruDLR73KzW/m7yc3dHQ0JvxzuC1QKJWHPom9KU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDNNjCcDxjL3ess1QQEkb9n5bPYpxXpekd32ZX4oTcXXFDOu+tz/jpA8JZL8lOBAcBQ5n+mZF0Pot1o+B1JxQOHHiEZdcdKtLtPWrI2OQyxZnvo7sCBeSk+r/j3mjqpvq3+KpwoTZKpYF/oNRXVHU4VFs+MzvqWd6vgLXsDwtJrriojtkrWy0bTa4NjjN/+olsITxDmR0TGAu+epCJptdpKjTcgcn25QuIKy37/zVW8BJ5QsZmIRwvlCYxj11UOAoDcbapJcnzJYpOmQTNpdzkazjViX17DZW17Jmfhc6Dk3H+TEseilkbq1ZjsUyGBBxklWHid7+BgKVXOoIgG6+0x` |
+> | germanyn | rsa-sha2-512 | `m/OFTRHkc3HxfhCKk1+jY1rPJrT9t4FYtQ/Wmo3MOUE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDkN3CN1VITaHy/CduQaZIkuKSC/+oX19sYntRdgCblJlIzUBmiGlhmnKXyhA29lwWWAYxSbUu0jEJUZfQ6xdQ4uALOb815DLNZtVrxqSm4SjvP5anNa7zRyCFfo4V8M4i6ji6NB+u+PwH5DOhxKLu6/Ml9pF8hWyfLRft8cg4wORLLhwGt2+agizq7N7vF2nmLBojmS0MMmpH5ON/NFshYIDNKPEeK9ehpaARf4fuXm440Zqzy/FfpptSspJIhbY2zsg4qGQgYGZyuRxkLzYgtD/uKW5ieFwXPn+tvVeVzezZTmGMoDlkPX18HSsuNaRkdnwpX8yk1/uoBCsuOFSph` |
+> | germanyn | ecdsa-sha2-nistp256 | `F4o8Z9llB5SRp0faYFwKMQtNw/+JYFKZdNoIuO7XUU0=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMoIo/OXMP7W5a5QRDAVBo+9YQg4YBrl3J7xM91PUGUiALDE1Iw8Uq4e1gLiSNA6A46om5yY/6oGj4iqEk8Ar8Y=` |
+> | germanyn | ecdsa-sha2-nistp384 | `BgW5e9lciYG1oIxolnVUcpdh3JpN/eQxfOyeyuZ6ZjI=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ69kH0urhCdcMMaqpID2m+u8MECowtNlYjYXoSUn6oEhj7VPxvCRZi5R02vHrtrTJslsrbpgYHXz+/jSLplKpccQGJFaZso9WWgEJH1k7tJOuOv0NIjoBTv7fY5IxeAvQ==` |
+> | australiac2 | rsa-sha2-256 | `sqVq1zdzD3OiAbnDjs70/why2c3UZOPMTuk5sXeOu4Y=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDKNZVZ5RVnGa0fYSn+Nx3tnt526fmMf+VufOBOy5/hEnqV6mPKXMiDijx2gFhKY4nyy957jYUwcqp1XasweWX6ISuhfg4QWcygW0HgmVdlSDezobPDueuP0WdhVsG3vXGbEYnrZOUR5kQHagX/wWf6Diy1J5Cn2ojIKGuSuGY/9bu3bnZzKt08fj+gQCEd1GxpPoBUfjF/73MM57IRhdmv919rsGD5nsyZCBmqFoKlLH/gKYZ4B3hylqf/6gER7OeZmG2S/U/fRAN0hVK7RkHNf2CFoCmuxXS6r87BznT5vF3nmd7tsf0akaxLjfWRbKLMWWyZkzU4/jijpbDDuu1x` |
+> | australiac2 | rsa-sha2-512 | `p6vLHCTBcKOkqz7eiVCT6pLuIg7h4Jp41lvL/WOQLWM=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDcqD2zICW1RLKweRXMG9wtOGxA5unQO/nd9yslfOIo54Ef0dlhAXGFPmCd3Yj60Gt/CIpqguzKLGm4D3nf19KjXE8V59cD7/lN6mVrFrm+6CU44JAzKN9ERUelxhSQKi/dsDR773wt4jsAt4SLBRrs19RC2fkYnxZgC/LzNZKXXY3FFb06uwheJjGOHyeQJbGpaV3hlelhOSV1UF2JAB8v6d8+9+S+b666EcpQ70JtxtA8h1s30hqhTKgYdRYMPfz7lqKXvact2NBXlqYRPod5cLW7lYBb2LzqTk1D44d8cwDknX2pYQJpgeFwJhB6SO9mF/Ot+jk+jV/CxUI55DPd` |
+> | australiac2 | ecdsa-sha2-nistp256 | `m7Go9P1bwcPHAcZzRSXdwYroDIdZzt0jhxkJW42YGKY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHp76felOL7GAHcJoW6vcCS83jkeR6RdFCwUk0Jf6v7SFoqYNZfTaryy2n0vwG1W1dAyHvOjB1+gzTZOkHN/cAI=` |
+> | australiac2 | ecdsa-sha2-nistp384 | `9Jc39OueTg3pQcq8KJgzsmPlVXxILG24Euw27on7SkY=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEduOE61sSP2BozvJ6QLtRDZ7j0TenX7PjcpPVtYIQuKQ+h3qakXFwFnj8N3m8+LpTXYO41mgX7N02Rl12QvD7lDpUgHUChaNpUcMcSpm5qvguLyG6XZg2BDNd6pyx+fpw==` |
+> | southafricaw | rsa-sha2-256 | `aMMzaNmXR+V1NrwLmovyvKwfbKQ6aAKYiA5n8ETYQmU=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDGhe98UTnljsYaeJwtP3ABvT/hZP6Mp1r5beyJ2SWpdqZSZaKC+UQlWLu6WhLxLZ+5snB+YAlC56u4qOdDHLoid6vbAR/FPIcJlvQfcFJD88nihv9sq1kUX3JXrh0ZUrl2/Zj71aNlM/RL1OnXK/Pg2E+wu4EfnQTrzlWMhR8bxlQA0jH1zmfFN/6BTwP2if29TNlQkWuW3uq3rccY1GA6n0QtlucanPNRzsBtAzsH5/oFuB5R4sD/Msw0itvWuQP4e0y+Vdov1My/rjK19xLce6AhWmmhwkn5qxHdIy158C4cWnSkQvkYzPnwsi7KT9WRH7vfr8qD9zlA5mO+IDxJ` |
+> | southafricaw | rsa-sha2-512 | `Uc7QB0fT4NGyBp34GCAt8G4j1ZBXh/3Wa2YRlILu818=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCijtmaOHIcXjI07fVugz1M33+amlOEqdqtVgOlLmFRKSehPW2+6iGpAjQVwzsYOx32Hp5O07xj/PhiFsbBBqZXGHmuSIOJYa7tQSFvwclO+JW/kuoELXQLwnHxUfPyq4tYoj83GSZ5k/KRlEtbmjEwcozMQVya/7MzulAeV4nN6PDxoLjXlfGEQU2ZCGz2neeisQEM8+hZNuEH+O9O03g7CW8bwiI1Y70/bnNq95xJ5F7lRpwtJNWlx+kmUiNpfXOUPxZAUsny7z1Ka5XKEB1fDP8E/jAtrSWrRPDJew8lFpQeWukwB5tf3F3bh1SuSKaSQqKBArnSpJizWxp0brZZ` |
+> | southafricaw | ecdsa-sha2-nistp256 | `pr1KB8apI+FNQLKkzvUXx0/waiqBGZPEXMNglKimUwA=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPvbvOfXQjT+/3+jQtW3FBAnPnaypYSUhZMkTTSfd7RQMmSxsLNmDooERhVuUTa7XCTlpDNTSPdnnaa6P1a+F6A=` |
+> | southafricaw | ecdsa-sha2-nistp384 | `A3RfMOd6dGgUlcrkXL1YRKNXIdAB8M1lF9qwmy6PjFg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaJmo4QGmo6pbLHOXh06Rz9inntdxmuOtVxlJBO1i/ZK5les/AuaILMW7oQCxOKvZs/xI+P0MWRfrNgWSSapy5hNuTkbl8IqO4pH/lO//zdaHmVBC1kPnujDM9znJs6Rg==` |
+> | jioinw | rsa-sha2-256 | `hcy1XbIniEZloraGrvecJCvlw6zZhTOrzgMJag5b5DA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOBU9e1Ae68+ScLUA5O1gaZ3eq0EGqBIEqL3+QuN8LYpF3Bi/+m43kgjhgiOx5imPK6peHHaaT/nEBQFJKFtWyn8q2kspcDy1xvJfG8Jaks1GQG33djOItiHlKjRWMcyWFvisFE2vVkp3uO0xG4nMDLM2rFazkax+6XA5cf2iW2SfL6Trs4v1waakU/jQLA7vsrx14S+wGEdVINTSPeh5DHqkLzTa3m2tpXVcUA4CG8uQZM8E/3/y0BuIW0Ahl/P6dx35W1Al7gnaTqmx7+idcc/YVe0auorZWWdyclf1sjnAw6U8uMhWmQ0dZgDehDtshlHyx84vvJ1JOJs0+6S2l` |
+> | jioinw | rsa-sha2-512 | `LPctDLIz/vqg4POMOPqI1yD9EE9sNS1gxY6cJoX+gEY=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDOH+IZFFfJN4lpFFpvp5x1lRzuOxLXs0WfpcCIACMOhCor2tkaa/MHlmPIbAqgZgth5NZIWpYkPAv7GpzPBOwTp3Bg5lUM7MXSayO/5+eJjMhB5PUCJ0We8Kfgf/U+vbaMIg9R8gJKutXrANd3sAWXMwWqKUw+ZX/AC7h58w04gb1s+lNOQbfhpqkw8+mrOj2eKH8zHYUJQBUYEyDHqirj565r7HhBtEZImn/ioJS+nYT5Zl/SNtW/ehhUsARG9p6O4wSy20Ysdk7b9Ur2YL0RyFa6QhWQeKktKPVFQuMMLRkYX7dv35uAKq8YN833lLjGESYNdCzYmGTJXk5KYZ8B` |
+> | jioinw | ecdsa-sha2-nistp256 | `mBx6CZ+6DseVrwsKfkNPh9NdrVLgwhHT4DEq9cYfZrg=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPXqhYQKwmkGb8qRq52ulEkXrNVjzVU4sGEuRFn4xXK8xanasbEea3iMOTihzMDflMwgTDmVGoTKtDXy8tQ+Y8k=` |
+> | jioinw | ecdsa-sha2-nistp384 | `lwQX9Yfn7uDz/8gXpG4sZcWLCAoXIpkpSYlgh8NpK1E=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLKY2+wwHIzFOfiKFKFHyfiqjUrscm0qwYTAirNPE1GI6OwAjconeX072ecY3/1G0dE7kAUaWbCKWSO3DqM4r6O+AewzxJoey85zMexW23g2lXFH7HkYn9rldURoUdk31A==` |
+> | swedens | rsa-sha2-256 | `kS1NUherycqJAYe8KZi8AnqIQ9UdDbpoEpcdHlZ702g=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ+Imy6VuOvZsel9SCoMmej4kFvP8MDgDY9EdgfkgpjEfOSk+vmXBMCFtthI7sHRggkuXQE5v6OkOPwuWuVWjAWmclfFIz+TTNE5dUUY6L+UMipDEcwFxtufnY3AW0v2MW5lOFHWbx3w7605yb2AFQuZjvngkjdelhDpVpX9a0XdPa7zUYBwXdxWeteH+i4ZJ62sjlBGzYRjFhK/y1rUKR3BVR5xtP9ofzqE1n/TRLpViU8iy4bpsQntTWa71xVoTFtE29h3ESw4QG2lRCwk7NIf8efyNdR25+YpVGIysAxXG2smGAi2W/YXUjteCE7k3IU+ehHJdWKB3spUBSoF/V` |
+> | swedens | rsa-sha2-512 | `G+oX014UJXR0t1xHrCi715XuoHBkBxJMdH8hmVMilJc=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDCa5Ny0EUd8yLOgzczm6Zge+D39VY7hpG+et2ln0i/HdYLd1aijEiF/0RDgnJYxZM4RhPZHxrVZXJXLsLa2T+ud+cqifvsjudsUSCzWNY3pHAwKBTSuu8Po+TrJXx8b+ogg+EhTh1BZQzIVQbtLwqRFJ3beLtvhp+V1pPWOoXRiN6Rq+x6ciT37jOdp033rbEM3AtzWdRBvRxUiVxKoRXcDYwAAIb3joaZ26p69Vj7HpD0HAf7w9f70zIwIzqrW4RcHcP+RbDVzNukK8gWP66OgSKrAQgRmibS6SEJx4kgkaghiQfm1k1bXkTnlKlz956DHkTkpMQe21/eW1Prs+q1` |
+> | swedens | ecdsa-sha2-nistp256 | `8C148yiGdrJCGF6HpDzINhGkB5AAyWDqkauJClRqCZs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEREKXJT7obM0RXGFrUPmJOoEpJD8T+QT29UEt3/jZrUzVzqXLV/9+VK0xqv1suljhUoUoClBdKqx5E/Sv1kSV4=` |
+> | swedens | ecdsa-sha2-nistp384 | `ra8+vb8aSkTBsO0KAxDrl2lN9p41BxymtRU6Seby83M=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIMby6y3wzWnzE304DjregQcSqKTsoMx2vPGk7OlBtjFKoubZlBRQH4jQrtPbJv/Hpf8f+D0JmvPe5G75yZFG1BcP5eB4aonAr0NNCw+3sCb50JVpoT4yoT787KKYf+5qg==` |
+> | jioinc | rsa-sha2-256 | `DmNCjG1VJxWWmrXw5USD0pAnJAbEAVonkUtzRFKEEFI=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQC/x6T0nye3elqPzK8IF+Q70bLn2zg4MVJpK3P6YurtsRH8cv5+NEHyP0LWdeQWqKa9ivQRIQb8mHS+9KDMxOnzZraUeaaJLcXI0YV512kqzdevsEbH6BSmy8HhZHcRyXqH0PjxLcWJ5Wn9+caNhiVC40Oks7yrrZpAVbddzD9y/eJfguMVWiu1c8iZpYORss1QYo7JqVvEB6pLY03NXWM+xti1RSs+C6IEblQkPvnT3ELni9T1eZOACi12KGZHVLU9n27Nyg/fPjRheYSkw/lkkKDG0zvIQ7jr/k8SCHGcvtDYwRlFErFdGYBlIE888le2oDNNoKjJuhzN6S7ddpzp` |
+> | jioinc | rsa-sha2-512 | `m2P7vnysl2adTz/0P6ebSR7Xx8AubkYkex6cmD9C0ys=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDQHFDt8zTk+Hqh912v0U8CVTgAPUb8Kmuec+2orydM/InG+/zSuqQHsCZaD2mhEg8kevU8k2veF5z2sbko5TR/cghGg5dXlzz4YaKiNdNyKIGm2MdynXJofAtiktGhcB6ummctHqATfGSgkLJHtLvstzTVbVK1zgxXcB8hA52c2EPB1cN1TkAKEyiYNX7fKFe1EEPCxdx3fC/UyApKdD+D432HCW/g8Syj/n7asdB8EQqcoCT3ajh2wG2Qq0ZxjVbbrFImlr0VoTqLImJ4kZ9d2G7Rq2jqrlfESLAxKVDaqj+SjyWpzb3MHFSnwJZybCKXyTt+7BXpKeeGAcHpTs3F` |
+> | jioinc | ecdsa-sha2-nistp256 | `zAZ0A1pk0Xz8Vr/DEf8ztPaLCivXxfajlKMtWqAEzgU=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDow29ds+BRDNTZNW70CEoxUjLiUF+IHgaDRaO+dAWwxL13d+MqTIYY4I0D7vgVvh0OegmYLXIWpCdR8LvVT7zA=` |
+> | jioinc | ecdsa-sha2-nistp384 | `OTG7jxUSj+XrdL28JpYAhsfr6tfO7vtnfzWCxkC/jmQ=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJ/Bb3/3u/UIcYGRLSl7YvOObb43LO5Ksi0ewWJU+MPsPWZr7OTTPs76TdwXMvD8+QuY8U9JxgQQrNmvbpabmbGENkllEgjGlev5P2mHy/IZZAUQhAeeCinCRvTsiOOoLw==` |
+> | brazilse | rsa-sha2-256 | `D+S7uHDWy0VPrRg9mlaK70PBPghBRCR1ru/KEsKpcjA=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQCz86hzEBpBBVJqClTRb7dNolds/LShOM4jucPRawZrlKGEpeKv70Khk8BdI4697fORKavxjWK0O9tQpAJHtVGSv3Ajwb9MB7ERKncBVx/xfdiedrJNmF0G+p97XlCiwkAqePT/9IFaMy1OFqwl6LF7p7I0iBKZX0UgePwdiHtJnK0foTfsASNY4AEVcXHEuaulLFJKUjrr6ootblEbPBnC6IxTPj9oD+Nm0rtlCeD5JtCRFgKUj3LWybEog/AnnMNQDQ+vMPbsCnfsW/J/HQc+ebx3NtcumL+PIxqJw2Pk6mRpDdL+vs2nw/PtnPkdJ7DjIQYLypBSi3AFIONSlO15` |
+> | brazilse | rsa-sha2-512 | `C+p2eAPf5uec0yG+aeoVAaLOAAf0p8gbBNss3xfawPQ=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDV3WmETlQwzfuYoOsPAqfB9Z2gxsNecbpuwIBYwitLYKmJnT9Q3SNSgqnBiI1TKWyEweerdQaPnEvz9TeynGqSmLyGT0JJXQXFQCjTCgRHP4WD0Q+V7HWHnWYQ5c2e8tKEVA1jWt57dcrFlrGKEsywuMeEX21V13qQxK2acXVRWJPWgQCVwtiNpToc/cILOqL5XXKnSA81Ex7iRqw8QRAGdIozkryisucy+cStdJX6q+YUE5L62ENV8qMwJdwUGywEpKhXRg5VQKN0ciFqvyT/3cZQVF+NkUFGPnOi0bk4JzHxWxmQNTIwE7bmPsuniw5njD3ota/IPUHV2og190Xx` |
+> | brazilse | ecdsa-sha2-nistp256 | `dhADLmzQOE0mPcctS3wV+x2AUlv1GviguDQgSbGn/qs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPYuseeJN3CvtyPSKOz5FSu7PoNul+o6/LB62/MW9CUW+3AmqtVANVox1XQ8eX/YhL0a5+brjmveZPQS6M09YyQ=` |
+> | brazilse | ecdsa-sha2-nistp384 | `mjFHELtgAJkPTWO4dc7pzVVnJ6WLfAXGvXN1Wq2+NPs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIwFI6bRmgPe0tN7Qwc03PMrlpBn+wBChcuofyWlKVd/Ec6t2dxHr/0ev0dqyKS2nbK7CAOQxSrV1NVYnYZKql/eC2sPqI1oxz7DzUtRnNKrXcH714ViN3RIY3DZA6rJOw==` |
+> | norwayw | rsa-sha2-256 | `Ea3Vj3EfZYM25AX1IAty30AD+lhXYZsgtPGEFzNtjOk=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDuxOcTdADdJHI8MFrXV00XKbKVjXpirS3ZPzzIxw0mIFxFTArJEpXJeRfb0OZzQ1IABDwoasp1u+IhnY1Uv2VQ8mYAXtC3He08+7+EXJgFU/xQ8qFfM4eioAuXpxR7M7qV/0golNT4dvvLrY4zHxbSWmVB7cYJAeIjDU8dKISWFvMYjnRuiI7RYtxh/JI5ZfImU65Vfxi26vqWm51QDyF5+FmmXLUHpMFFuW8i/g8wSE1C3Qk+NZ3YJDlHjYqasPm4QidX8rHQ1xyMX9+ouzBZArNrVfrA4/ozoKGnPhe4GFzpuwdppkP4Ciy+H6t1/de/8fo9zkNgUJWHQrxzT4Lt` |
+> | norwayw | rsa-sha2-512 | `uHGfIB97I8y8nSAEciD7InBKzAx9ui5xQHAXIUo6gdE=` | `AAAAB3NzaC1yc2EAAAADAQABAAABAQDPXLVCb1kqh8gERY43bvyPcfxVOUnZyWsHkEK5+QT6D7ttThO2alZbnAPMhMGpAzJieT1IArRbCjmssWQmJrhTGXSJBsi75zmku4vN+UB712EGXm308/TvClN0wlnFwFI9RWXonDBkUN1WjZnUoQuN+JNZ7ybApHEgyaiHkJfhdrtTkfzGLHqyMnESUvnEJkexLDog88xZVNL7qJTSJlq1m32JEAEDgTuO4Wb7IIr92s6GOFXKukwY8dRldXCaJvjwfBz5MEdPknvipwTHYlxYzpcCtb9qnOliDLD2g4gm9d5nq3QBlLj/4cS1M9trkAxQQfUmuVQooXfO2Zw+fOW1` |
+> | norwayw | ecdsa-sha2-nistp256 | `muljUcRHpId06YvSLxboTHWmq0pUXxH6QRZHspsLZvs=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOefohG21zu2JGcUvjk/qlz5sxhJcy5Vpk5Etj3cgmE/BuOTt5GR4HHpbcj/hrLxGRmAWhBV7uVMqO376pwsOBs=` |
+> | norwayw | ecdsa-sha2-nistp384 | `QlzJV54Ggw1AObztQjGt/J2TQ1kTiTtJDcxxIdCtWYE=` | `AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYnNgJKaYCByLPdh21ZYEV/I4FNSZ4RWxK4bMDgNo/53HROhQmezQgoDvJFWsQiFVDXOPLXf26OeVXJ7qXAm6vS+17Z7E1iHkrqo2MqnlMTYzvBOgYNFp9GfW6lkDYfiQ==` |
<sup>1</sup> The SHA 256 fingerprint is used by OpenSSH and WinSCP.

## See also

-- [SSH File Transfer Protocol (SFTP) support in Azure Blob Storage](secure-file-transfer-protocol-support.md)
+- [SSH File Transfer Protocol (SFTP) support in Azure Blob Storage](secure-file-transfer-protocol-support.md)
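As a hedged sketch of how the fingerprint table above can be used: with OpenSSH tools (assuming `ssh-keyscan` and `ssh-keygen` are installed, and `myaccount` is a placeholder account name), you can fetch the host keys an endpoint presents and compare their SHA256 fingerprints against the row for your region and algorithm before trusting the host.

```powershell
# Fetch the host keys offered by the SFTP endpoint (placeholder account name).
ssh-keyscan -t rsa,ecdsa myaccount.blob.core.windows.net > host_keys.txt

# Print the SHA256 fingerprint of each fetched key, then compare each value
# against the matching region/algorithm row in the table above.
ssh-keygen -lf host_keys.txt
```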
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
Previously updated : 02/03/2022 Last updated : 02/16/2022
Azure Storage doesn't support shared access signature (SAS), or Azure Active dir
In this section, you'll learn how to create a local user, choose an authentication method, and assign permissions for that local user.
-To learn more about the SFTP permissions model, see [SFTP Permissions model](secure-file-transfer-protocol-support.md#sftp-permissions-model).
+To learn more about the SFTP permissions model, see [SFTP Permissions model](secure-file-transfer-protocol-support.md#sftp-permission-model).
> [!TIP]
> This section shows you how to configure local users for an existing storage account. To view an Azure Resource Manager template that configures a local user as part of creating an account, see [Create an Azure Storage Account and Blob Container accessible using SFTP protocol on Azure](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.storage/storage-sftp).
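As one possible way to script the local-user setup described above: at the time of writing, the Azure CLI exposed a preview `az storage account local-user` command group. The sketch below assumes that command group is available in your CLI version; all names and the key value are placeholders, and flag spellings should be verified against `az storage account local-user create --help`.

```powershell
# Hypothetical sketch: create a local user with SSH key authentication and
# read/write/list access scoped to one container (all names are placeholders).
az storage account local-user create `
    --resource-group myresourcegroup `
    --account-name myaccount `
    -n myuser `
    --home-directory mycontainer `
    --has-ssh-key true `
    --ssh-authorized-key key="ssh-rsa AAAA..." `
    --permission-scope permissions=rwl service=blob resource-name=mycontainer
```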
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Previously updated : 02/03/2022 Last updated : 02/16/2022
Different protocols extend from the hierarchical namespace. The SFTP is one of t
> [!div class="mx-imgBorder"]
> ![hierarchical namespace](./media/secure-file-transfer-protocol-support/hierarchical-namespace-and-sftp-support.png)
-## SFTP Permissions model
+## SFTP permission model
-Azure Storage does not support shared access signature (SAS), or Azure Active directory (Azure AD) authentication for connecting SFTP clients. Instead, SFTP clients must use either a password or a Secure Shell (SSH) private key credential.
+Azure Blob Storage does not support Azure Active Directory (Azure AD) authentication or authorization via SFTP. Instead, SFTP utilizes a new form of identity management called _local users_.
-To grant access to a connecting client, the storage account must have an identity associated with that credential. That identity is called a local user. Local Users are a new form of identity management provided with SFTP support. You can add up 1000 local users to a storage account.
+Local users must use either a password or a Secure Shell (SSH) private key credential for authentication. You can have a maximum of 1000 local users for a storage account.
To set up access permissions, you will create a local user, and choose authentication methods. Then, for each container in your account, you can specify the level of access you want to give that user.
-> [!NOTE]
-> After your data is ingested into Azure Storage, you can use the full breadth of Azure storage security settings. While authorization mechanisms such as role-based access control (RBAC) and access control lists aren't supported as a means to authorize a connecting SFTP client, they can be used to authorize access via Azure tools (such Azure portal, Azure CLI, Azure PowerShell commands, and AzCopy) as well as Azure SDKS, and Azure REST APIs.
->
-> To learn more, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md)
+> [!CAUTION]
+> Local users do not interoperate with other Azure Storage permission models such as RBAC (role based access control), ABAC (attribute based access control), and ACLs (access control lists).
+>
+> For example, user A has an Azure AD identity with only read permission for file _foo.txt_ and a local user identity with delete permission for container _con1_ in which _foo.txt_ is stored. In this case, user A could log in via SFTP using their local user identity and delete _foo.txt_.
+
+For SFTP-enabled storage accounts, you can use the full breadth of Azure Blob Storage security settings to authenticate and authorize users accessing Blob Storage via the Azure portal, Azure CLI, Azure PowerShell commands, AzCopy, Azure SDKs, and Azure REST APIs. To learn more, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md).
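For example (a sketch with placeholder names), an Azure AD identity that holds an RBAC data role on the account can still manage the same blobs through AzCopy, independent of any SFTP local users:

```powershell
# Sign in with an Azure AD identity; RBAC roles on the account authorize access.
azcopy login

# List blobs in a container using the Azure AD credential (placeholder names).
azcopy list "https://myaccount.blob.core.windows.net/mycontainer"
```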
## Authentication methods
-You can authenticate a connecting SFTP client by using a password or a Secure Shell (SSH) public-private keypair. You can configure both forms of authentication and let connecting clients choose which one to use. However, multifactor authentication, whereby both a valid password and a valid public-private key pair are required for successful authentication is not supported.
+You can authenticate local users connecting via SFTP by using a password or a Secure Shell (SSH) public-private key pair. You can configure both forms of authentication and let connecting local users choose which one to use. However, multifactor authentication, whereby both a valid password and a valid public-private key pair are required for successful authentication, is not supported.
#### Passwords
Passwords are generated for you. If you choose password authentication, then you
#### SSH key pairs
-A public-private key pair is the most common form of authentication for Secure Shell (SSH). The private key is secret and is known only to you. The public key is stored in Azure. When an SSH client connects to the storage account, it sends a message with the private key and signature. Azure validates the message and checks that the user and key are recognized by the storage account. To learn more, see [Overview of SSH and keys](../../virtual-machines/linux/ssh-from-windows.md#).
+A public-private key pair is the most common form of authentication for Secure Shell (SSH). The private key is secret and should be known only to the local user. The public key is stored in Azure. When an SSH client connects to the storage account using a local user identity, it sends a message signed with the private key. Azure validates the message and checks that the user and key are recognized by the storage account. To learn more, see [Overview of SSH and keys](../../virtual-machines/linux/ssh-from-windows.md#).
If you choose to authenticate with a public-private key pair, you can either generate one, use one already stored in Azure, or provide Azure with the public key of an existing public-private key pair.
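
As a minimal sketch, assuming OpenSSH and placeholder account, user, and key file names, you could generate a key pair locally, register the public key with the local user, and then connect with the private key:

```bash
# Generate an RSA key pair; the .pub file is the public key you give to Azure.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/myaccount_sftp

# After registering ~/.ssh/myaccount_sftp.pub with the local user,
# connect by pointing the client at the private key.
sftp -i ~/.ssh/myaccount_sftp myaccount.myusername@myaccount.blob.core.windows.net
```

Once connected, the session uses standard SFTP commands, as in the following examples.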
```powershell
sftp myaccount.myusername@myaccount.blob.core.windows.net
put logfile.txt
```
-If you set the home directory of a user to `mycontainer/mydirectory`, then the client would connect to that directory. Then, the `logfile.txt` file would be uploaded to `mycontainer/mydirectory`. If you did not set the home directory, then the connection attempt would fail. Instead, connecting clients would have to specify a container along with the request and then use SFTP commands to navigate to the target directory before uploading a file. The following example shows this:
+If you set the home directory of a user to `mycontainer/mydirectory`, then they would connect to that directory. Then, the `logfile.txt` file would be uploaded to `mycontainer/mydirectory`. If you did not set the home directory, then the connection attempt would fail. Instead, connecting users would have to specify a container along with the request and then use SFTP commands to navigate to the target directory before uploading a file. The following example shows this:
```powershell
sftp myaccount.mycontainer.myusername@myaccount.blob.core.windows.net
cd mydirectory
put logfile.txt
```
+> [!Note]
+> The home directory is only the initial directory that the connecting local user is placed in. Local users can navigate to any other path in the container they are connected to if they have the appropriate container permissions.
+ ## Supported algorithms
-You can use any SFTP client to securely connect and then transfer files. Connecting clients must use one the algorithms listed below.
+You can use many different SFTP clients to securely connect and then transfer files. Connecting clients must use one of the algorithms specified in the table below.
| Host key | Key exchange | Ciphers/encryption | Integrity/MAC | Public key |
|-|-|-|-|-|
You can use any SFTP client to securely connect and then transfer files. Connect
| ecdsa-sha2-nistp384 | diffie-hellman-group16-sha512 | aes256-cbc | | |
| | | aes192-cbc | | |
-SFTP support in Azure Blob Storage currently limits its cryptographic algorithm support in accordance to the Microsoft Security Development Lifecycle (SDL). We strongly recommend that customers utilize SDL approved algorithms to securely access their data. More details can be found [here](/security/sdl/cryptographic-recommendations)
+SFTP support in Azure Blob Storage currently limits its cryptographic algorithm support based on security considerations. We strongly recommend that customers utilize Microsoft Security Development Lifecycle (SDL) approved algorithms to securely access their data. More details can be found [here](/security/sdl/cryptographic-recommendations).
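
If you're unsure which algorithms your client negotiates, one quick check (the connection string is a placeholder) is to run the client in verbose mode and inspect the negotiation output:

```bash
# -v prints the negotiated key exchange, cipher, and MAC algorithms,
# which you can compare against the table above.
sftp -v myaccount.myusername@myaccount.blob.core.windows.net
```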
## Known issues and limitations
storage Storage Quickstart Blobs Javascript Client Libraries Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-javascript-client-libraries-legacy.md
- Title: "Quickstart: Azure Blob storage for JavaScript v10 in the browser"
-description: Learn to upload, list, and delete blobs using JavaScript v10 SDK in an HTML page.
----- Previously updated : 07/24/2020--
-#Customer intent: As a web application developer I want to interface with Azure Blob storage entirely on the client so that I can build a SPA application that is able to upload and delete files on blob storage.
--
-# Quickstart: Manage blobs with JavaScript v10 SDK in browser
-
-In this quickstart, you learn to manage blobs by using JavaScript code running entirely in the browser. Blobs are objects that can hold large amounts of text or binary data, including images, documents, streaming media, and archive data. You'll use required security measures to ensure protected access to your blob storage account.
-
-> [!NOTE]
-> This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see [Quickstart: Manage blobs with JavaScript v12 SDK in a browser](quickstart-blobs-javascript-browser.md).
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- An Azure Storage account. [Create a storage account](../common/storage-account-create.md).
-- A local web server. This article uses [Node.js](https://nodejs.org) to open a basic server.
-- [Visual Studio Code](https://code.visualstudio.com).
-- A VS Code extension for browser debugging, such as [Debugger for Chrome](https://marketplace.visualstudio.com/items?itemName=msjsdiag.debugger-for-chrome) or [Debugger for Microsoft Edge](https://devblogs.microsoft.com/visualstudio/debug-javascript-in-microsoft-edge-from-visual-studio/).
-## Setting up storage account CORS rules
-
-Before your web application can access a blob storage from the client, you must configure your account to enable [cross-origin resource sharing](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services), or CORS.
-
-Return to the Azure portal and select your storage account. To define a new CORS rule, navigate to the **Settings** section and click on the **CORS** link. Next, click the **Add** button to open the **Add CORS rule** window. For this quickstart, you create an open CORS rule:
-
-![Azure Blob Storage Account CORS settings](media/storage-quickstart-blobs-javascript-client-libraries-v10/azure-blob-storage-cors-settings.png)
-
-The following table describes each CORS setting and explains the values used to define the rule.
-
-|Setting |Value | Description |
-||||
-| Allowed origins | * | Accepts a comma-delimited list of domains set as acceptable origins. Setting the value to `*` allows all domains access to the storage account. |
-| Allowed methods | delete, get, head, merge, post, options, and put | Lists the HTTP verbs allowed to execute against the storage account. For the purposes of this quickstart, select all available options. |
-| Allowed headers | * | Defines a list of request headers (including prefixed headers) allowed by the storage account. Setting the value to `*` allows all headers access. |
-| Exposed headers | * | Lists the allowed response headers by the account. Setting the value to `*` allows the account to send any header. |
-| Max age (seconds) | 86400 | The maximum amount of time the browser caches the preflight OPTIONS request. A value of *86400* allows the cache to remain for a full day. |
-
-> [!IMPORTANT]
-> Ensure any settings you use in production expose the minimum amount of access necessary to your storage account to maintain secure access. The CORS settings described here are appropriate for a quickstart, as they define a lenient security policy. These settings, however, are not recommended for a real-world context.
-
-Next, you use the Azure cloud shell to create a security token.
--
-## Create a shared access signature
-
-The shared access signature (SAS) is used by the code running in the browser to authorize requests to Blob storage. By using the SAS, the client can authorize access to storage resources without the account access key or connection string. For more information on SAS, see [Using shared access signatures (SAS)](../common/storage-sas-overview.md).
-
-You can create a SAS using the Azure CLI through the Azure cloud shell, or with the Azure portal or Azure Storage Explorer. The following table describes the parameters you need to provide values for to generate a SAS with the CLI.
-
-| Parameter |Description | Placeholder |
-|-|-|-|
-| *expiry* | The expiration date of the access token in YYYY-MM-DD format. Enter tomorrow's date for use with this quickstart. | *FUTURE_DATE* |
-| *account-name* | The storage account name. Use the name set aside in an earlier step. | *YOUR_STORAGE_ACCOUNT_NAME* |
-| *account-key* | The storage account key. Use the key set aside in an earlier step. | *YOUR_STORAGE_ACCOUNT_KEY* |
-
-Use the following CLI command, with actual values for each placeholder, to generate a SAS that you can use in your JavaScript code.
-
-```azurecli-interactive
-az storage account generate-sas \
- --permissions racwdl \
- --resource-types sco \
- --services b \
- --expiry FUTURE_DATE \
- --account-name YOUR_STORAGE_ACCOUNT_NAME \
- --account-key YOUR_STORAGE_ACCOUNT_KEY
-```
-
-You may find the series of values after each parameter a bit cryptic. These parameter values are taken from the first letter of their respective permission. The following table explains where the values come from:
-
-| Parameter | Value | Description |
-||||
-| *permissions* | racwdl | This SAS allows *read*, *append*, *create*, *write*, *delete*, and *list* capabilities. |
-| *resource-types* | sco | The resources affected by the SAS are *service*, *container*, and *object*. |
-| *services* | b | The service affected by the SAS is the *blob* service. |
-
-Now that the SAS is generated, copy the return value and save it somewhere for use in an upcoming step. If you generated your SAS using a method other than the Azure CLI, you will need to remove the initial `?` if it is present. This character is a URL separator that is already provided in the URL template later in this topic where the SAS is used.
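
As a small shell sketch (the token value is illustrative), you can trim a leading `?` before saving the SAS:

```bash
# Remove a leading '?' if the SAS was copied from a tool that includes it.
sas='?sv=2020-08-04&ss=b&srt=sco&sp=racwdl&sig=<signature>'
sas="${sas#\?}"
echo "$sas"
```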
-
-> [!IMPORTANT]
-> In production, always pass SAS tokens using TLS. Also, SAS tokens should be generated on the server and sent to the HTML page in order to pass back to Azure Blob Storage. One approach you may consider is to use a serverless function to generate SAS tokens. The Azure portal includes function templates that feature the ability to generate a SAS with a JavaScript function.
-
-## Implement the HTML page
-
-In this section, you'll create a basic web page and configure VS Code to launch and debug the page. Before you can launch, however, you'll need to use Node.js to start a local web server and serve the page when your browser requests it. Next, you'll add JavaScript code to call various blob storage APIs and display the results in the page. You can also see the results of these calls in the [Azure portal](https://portal.azure.com), [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer), and the [Azure Storage extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurestorage) for VS Code.
-
-### Set up the web application
-
-First, create a new folder named *azure-blobs-javascript* and open it in VS Code. Then create a new file in VS Code, add the following HTML, and save it as *index.html* in the *azure-blobs-javascript* folder.
-
-```html
-<!DOCTYPE html>
-<html>
-
-<body>
- <button id="create-container-button">Create container</button>
- <button id="delete-container-button">Delete container</button>
- <button id="select-button">Select and upload files</button>
- <input type="file" id="file-input" multiple style="display: none;" />
- <button id="list-button">List files</button>
- <button id="delete-button">Delete selected files</button>
- <p><b>Status:</b></p>
- <p id="status" style="height:160px; width: 593px; overflow: scroll;" />
- <p><b>Files:</b></p>
- <select id="file-list" multiple style="height:222px; width: 593px; overflow: scroll;" />
-</body>
-
-<!-- You'll add code here later in this quickstart. -->
-
-</html>
-```
-
-### Configure the debugger
-
-To set up the debugger extension in VS Code, select **Debug > Add Configuration...**, then select **Chrome** or **Edge**, depending on which extension you installed in the Prerequisites section earlier. This action creates a *launch.json* file and opens it in the editor.
-
-Next, modify the *launch.json* file so that the `url` value includes `/index.html` as shown:
-
-```json
-{
- // Use IntelliSense to learn about possible attributes.
- // Hover to view descriptions of existing attributes.
- // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
- "version": "0.2.0",
- "configurations": [
- {
- "type": "chrome",
- "request": "launch",
- "name": "Launch Chrome against localhost",
-            "url": "http://localhost:8080/index.html",
- "webRoot": "${workspaceFolder}"
- }
- ]
-}
-```
-
-This configuration tells VS Code which browser to launch and which URL to load.
-
-### Launch the web server
-
-To launch the local Node.js web server, select **View > Terminal** to open a console window inside VS Code, then enter the following command.
-
-```console
-npx http-server
-```
-
-This command will install the *http-server* package and launch the server, making the current folder available through default URLs including the one indicated in the previous step.
-
-### Start debugging
-
-To launch *index.html* in the browser with the VS Code debugger attached, select **Debug > Start Debugging** or press F5 in VS Code.
-
-The UI displayed doesn't do anything yet, but you'll add JavaScript code in the following section to implement each function shown. You can then set breakpoints and interact with the debugger when it's paused on your code.
-
-When you make changes to *index.html*, be sure to reload the page to see the changes in the browser. In VS Code, you can also select **Debug > Restart Debugging** or press CTRL + SHIFT + F5.
-
-### Add the blob storage client library
-
-To enable calls to the blob storage API, first [Download the Azure Storage SDK for JavaScript - Blob client library](https://aka.ms/downloadazurestoragejsblob), extract the contents of the zip, and place the *azure-storage-blob.js* file in the *azure-blobs-javascript* folder.
-
-Next, paste the following HTML into *index.html* after the `</body>` closing tag, replacing the placeholder comment.
-
-```html
-<script src="azure-storage-blob.js" charset="utf-8"></script>
-
-<script>
-// You'll add code here in the following sections.
-</script>
-```
-
-This code adds a reference to the script file and provides a place for your own JavaScript code. For the purposes of this quickstart, we're using the `azure-storage-blob.js` script file so that you can open it in VS Code, read its contents, and set breakpoints. In production, you should use the more compact *azure-storage-blob.min.js* file that is also provided in the zip file.
-
-You can find out more about each blob storage function in the [reference documentation](/javascript/api/%40azure/storage-blob/index). Note that some of the functions in the SDK are only available in Node.js or only available in the browser.
-
-The code in *azure-storage-blob.js* exports a global variable called `azblob`, which you'll use in your JavaScript code to access the blob storage APIs.
-
-### Add the initial JavaScript code
-
-Next, paste the following code into the `<script>` element shown in the previous code block, replacing the placeholder comment.
-
-```javascript
-const createContainerButton = document.getElementById("create-container-button");
-const deleteContainerButton = document.getElementById("delete-container-button");
-const selectButton = document.getElementById("select-button");
-const fileInput = document.getElementById("file-input");
-const listButton = document.getElementById("list-button");
-const deleteButton = document.getElementById("delete-button");
-const status = document.getElementById("status");
-const fileList = document.getElementById("file-list");
-
-const reportStatus = message => {
- status.innerHTML += `${message}<br/>`;
- status.scrollTop = status.scrollHeight;
-}
-```
-
-This code creates fields for each HTML element that the following code will use, and implements a `reportStatus` function to display output.
-
-In the following sections, add each new block of JavaScript code after the previous block.
-
-### Add your storage account info
-
-Next, add code to access your storage account, replacing the placeholders with your account name and the SAS you generated in a previous step.
-
-```javascript
-const accountName = "<Add your storage account name>";
-const sasString = "<Add the SAS you generated earlier>";
-const containerName = "testcontainer";
-const containerURL = new azblob.ContainerURL(
- `https://${accountName}.blob.core.windows.net/${containerName}?${sasString}`,
- azblob.StorageURL.newPipeline(new azblob.AnonymousCredential));
-```
-
-This code uses your account info and SAS to create a [ContainerURL](/javascript/api/@azure/storage-blob/ContainerURL) instance, which is useful for creating and manipulating a storage container.
-
-### Create and delete a storage container
-
-Next, add code to create and delete the storage container when you press the corresponding button.
-
-```javascript
-const createContainer = async () => {
- try {
- reportStatus(`Creating container "${containerName}"...`);
- await containerURL.create(azblob.Aborter.none);
- reportStatus(`Done.`);
- } catch (error) {
- reportStatus(error.body.message);
- }
-};
-
-const deleteContainer = async () => {
- try {
- reportStatus(`Deleting container "${containerName}"...`);
- await containerURL.delete(azblob.Aborter.none);
- reportStatus(`Done.`);
- } catch (error) {
- reportStatus(error.body.message);
- }
-};
-
-createContainerButton.addEventListener("click", createContainer);
-deleteContainerButton.addEventListener("click", deleteContainer);
-```
-
-This code calls the ContainerURL [create](/javascript/api/@azure/storage-blob/containerclient#create-containercreateoptions-) and [delete](/javascript/api/@azure/storage-blob/containerclient#delete-containerdeletemethodoptions-) functions without using an [Aborter](/javascript/api/@azure/storage-blob/aborter) instance. To keep things simple for this quickstart, this code assumes that your storage account has been created and is enabled. In production code, use an Aborter instance to add timeout functionality.
-
-### List blobs
-
-Next, add code to list the contents of the storage container when you press the **List files** button.
-
-```javascript
-const listFiles = async () => {
- fileList.size = 0;
- fileList.innerHTML = "";
- try {
- reportStatus("Retrieving file list...");
- let marker = undefined;
- do {
- const listBlobsResponse = await containerURL.listBlobFlatSegment(
- azblob.Aborter.none, marker);
- marker = listBlobsResponse.nextMarker;
- const items = listBlobsResponse.segment.blobItems;
- for (const blob of items) {
- fileList.size += 1;
- fileList.innerHTML += `<option>${blob.name}</option>`;
- }
- } while (marker);
- if (fileList.size > 0) {
- reportStatus("Done.");
- } else {
- reportStatus("The container does not contain any files.");
- }
- } catch (error) {
- reportStatus(error.body.message);
- }
-};
-
-listButton.addEventListener("click", listFiles);
-```
-
-This code calls the [ContainerURL.listBlobFlatSegment](/javascript/api/@azure/storage-blob/containerclient#listblobsflat-containerlistblobsoptions-) function in a loop to ensure that all segments are retrieved. For each segment, it loops over the list of blob items it contains and updates the **Files** list.
-
-### Upload blobs
-
-Next, add code to upload files to the storage container when you press the **Select and upload files** button.
-
-```javascript
-const uploadFiles = async () => {
- try {
- reportStatus("Uploading files...");
- const promises = [];
- for (const file of fileInput.files) {
- const blockBlobURL = azblob.BlockBlobURL.fromContainerURL(containerURL, file.name);
- promises.push(azblob.uploadBrowserDataToBlockBlob(
- azblob.Aborter.none, file, blockBlobURL));
- }
- await Promise.all(promises);
- reportStatus("Done.");
- listFiles();
- } catch (error) {
- reportStatus(error.body.message);
- }
-}
-
-selectButton.addEventListener("click", () => fileInput.click());
-fileInput.addEventListener("change", uploadFiles);
-```
-
-This code connects the **Select and upload files** button to the hidden `file-input` element. In this way, the button `click` event triggers the file input `click` event and displays the file picker. After you select files and close the dialog box, the `change` event occurs and the `uploadFiles` function is called. This function calls the browser-only [uploadBrowserDataToBlockBlob](/javascript/api/@azure/storage-blob/blockblobclient#uploadbrowserdata-blobarraybufferarraybufferview--blockblobparalleluploadoptions-) function for each file you selected. Each call returns a Promise, which is added to a list so that they can all be awaited at once, causing the files to upload in parallel.
-
-### Delete blobs
-
-Next, add code to delete files from the storage container when you press the **Delete selected files** button.
-
-```javascript
-const deleteFiles = async () => {
- try {
- if (fileList.selectedOptions.length > 0) {
- reportStatus("Deleting files...");
- for (const option of fileList.selectedOptions) {
- const blobURL = azblob.BlobURL.fromContainerURL(containerURL, option.text);
- await blobURL.delete(azblob.Aborter.none);
- }
- reportStatus("Done.");
- listFiles();
- } else {
- reportStatus("No files selected.");
- }
- } catch (error) {
- reportStatus(error.body.message);
- }
-};
-
-deleteButton.addEventListener("click", deleteFiles);
-```
-
-This code calls the [BlobURL.delete](/javascript/api/@azure/storage-blob/BlobURL#delete-aborter--iblobdeleteoptions-) function to remove each file selected in the list. It then calls the `listFiles` function shown earlier to refresh the contents of the **Files** list.
-
-### Run and test the web application
-
-At this point, you can launch the page and experiment to get a feel for how blob storage works. If any errors occur (for example, when you try to list files before you've created the container), the **Status** pane will display the error message received. You can also set breakpoints in the JavaScript code to examine the values returned by the storage APIs.
-
-## Clean up resources
-
-To clean up the resources created during this quickstart, go to the [Azure portal](https://portal.azure.com) and delete the resource group you created in the Prerequisites section.
-
-## Next steps
-
-In this quickstart, you've created a simple website that accesses blob storage from browser-based JavaScript. To learn how you can host a website itself on blob storage, continue to the following tutorial:
-
-> [!div class="nextstepaction"]
-> [Host a static website on Blob Storage](./storage-blob-static-website-host.md)
storage Storage Quickstart Blobs Nodejs Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-nodejs-legacy.md
- Title: "Quickstart: Azure Blob storage client library v10 for JavaScript"
-description: Create, upload, and delete blobs and containers in Node.js with the Azure Storage client library v10 for JavaScript
-- Previously updated : 01/19/2021------
-# Quickstart: Manage blobs with JavaScript v10 SDK in Node.js
-
-In this quickstart, you learn to manage blobs by using Node.js. Blobs are objects that can hold large amounts of text or binary data, including images, documents, streaming media, and archive data. You'll upload, download, list, and delete blobs, and you'll manage containers.
-
-> [!NOTE]
-> This quickstart uses a legacy version of the Azure Blob storage client library. To get started with the latest version, see [Quickstart: Manage blobs with JavaScript v12 SDK in Node.js](storage-quickstart-blobs-nodejs.md).
-
-## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
-- An Azure Storage account. [Create a storage account](../common/storage-account-create.md).
-- [Node.js](https://nodejs.org/en/download/).
-## Download the sample application
-
-The [sample application](https://github.com/Azure-Samples/azure-storage-js-v10-quickstart.git) in this quickstart is a simple Node.js console application. To begin, clone the repository to your machine using the following command:
-
-```bash
-git clone https://github.com/Azure-Samples/azure-storage-js-v10-quickstart.git
-```
-
-Next, change folders for the application:
-
-```bash
-cd azure-storage-js-v10-quickstart
-```
-
-Now, open the folder in your favorite code editing environment.
-
-## Configure your storage credentials
-
-Before running the application, you must provide the security credentials for your storage account. The sample repository includes a file named *.env.example*. Rename this file by removing the *.example* extension, which results in a file named *.env*. Inside the *.env* file, add your account name and access key values after the *AZURE_STORAGE_ACCOUNT_NAME* and *AZURE_STORAGE_ACCOUNT_ACCESS_KEY* keys.
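
The finished *.env* file looks something like this sketch (the values are placeholders; substitute your own account name and key):

```bash
AZURE_STORAGE_ACCOUNT_NAME=mystorageaccount
AZURE_STORAGE_ACCOUNT_ACCESS_KEY=<your-access-key>
```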
-
-## Install required packages
-
-In the application directory, run *npm install* to install the required packages for the application.
-
-```bash
-npm install
-```
-
-## Run the sample
-
-Now that the dependencies are installed, you can run the sample by issuing the following command:
-
-```bash
-npm start
-```
-
-The output from the app will be similar to the following example:
-
-```bash
-Container "demo" is created
-Containers:
-Blob "quickstart.txt" is uploaded
-Local file "./readme.md" is uploaded
-Blobs in "demo" container:
-Blob downloaded blob content: "hello!"
-Blob "quickstart.txt" is deleted
-Container "demo" is deleted
-Done
-```
-
-If you're using a new storage account for this quickstart, then you may only see the *demo* container listed under the label "*Containers:*".
-
-## Understanding the code
-
-The sample begins by importing a number of classes and functions from the Azure Blob storage namespace. Each of the imported items is discussed in context as they're used in the sample.
-
-```javascript
-const {
- Aborter,
- BlobURL,
- BlockBlobURL,
- ContainerURL,
- ServiceURL,
- SharedKeyCredential,
- StorageURL,
- uploadStreamToBlockBlob
-} = require('@azure/storage-blob');
-```
-
-Credentials are read from environment variables based on the appropriate context.
-
-```javascript
-if (process.env.NODE_ENV !== 'production') {
- require('dotenv').config();
-}
-```
-
-The *dotenv* module loads environment variables when running the app locally for debugging. Values are defined in a file named *.env* and loaded into the current execution context. In production, the server configuration provides these values, which is why this code only runs when the script is *not* running under a "production" environment.
-
-The next block of modules is imported to help interface with the file system.
-
-```javascript
-const fs = require('fs');
-const path = require('path');
-```
-
-The purpose of these modules is as follows:
-- *fs* is the native Node.js module used to work with the file system
-- *path* is required to determine the absolute path of the file, which is used when uploading a file to Blob storage
-Next, environment variable values are read and set aside in constants.
-
-```javascript
-const STORAGE_ACCOUNT_NAME = process.env.AZURE_STORAGE_ACCOUNT_NAME;
-const ACCOUNT_ACCESS_KEY = process.env.AZURE_STORAGE_ACCOUNT_ACCESS_KEY;
-```
-
-The next set of constants helps to reveal the intent of file size calculations during upload operations.
-
-```javascript
-const ONE_MEGABYTE = 1024 * 1024;
-const FOUR_MEGABYTES = 4 * ONE_MEGABYTE;
-```
-
-Requests made by the API can be set to time out after a given interval. The [Aborter](/javascript/api/%40azure/storage-blob/aborter?view=azure-node-legacy&preserve-view=true) class is responsible for managing how requests are timed out, and the following constant is used to define the timeouts used in this sample.
-
-```javascript
-const ONE_MINUTE = 60 * 1000;
-```
-
-### Calling code
-
-To support JavaScript's *async/await* syntax, all the calling code is wrapped in a function named *execute*. Then *execute* is called and handled as a promise.
-
-```javascript
-async function execute() {
- // commands...
-}
-
-execute().then(() => console.log("Done")).catch((e) => console.log(e));
-```
-
-All of the following code runs inside the execute function where the `// commands...` comment is placed.
-
-First, the relevant variables are declared to assign names, sample content and to point to the local file to upload to Blob storage.
-
-```javascript
-const containerName = "demo";
-const blobName = "quickstart.txt";
-const content = "hello!";
-const localFilePath = "./readme.md";
-```
-
-Account credentials are used to create a pipeline, which is responsible for managing how requests are sent to the REST API. Pipelines are thread-safe and specify logic for retry policies, logging, HTTP response deserialization rules, and more.
-
-```javascript
-const credentials = new SharedKeyCredential(STORAGE_ACCOUNT_NAME, ACCOUNT_ACCESS_KEY);
-const pipeline = StorageURL.newPipeline(credentials);
-const serviceURL = new ServiceURL(`https://${STORAGE_ACCOUNT_NAME}.blob.core.windows.net`, pipeline);
-```
-
-The following classes are used in this block of code:
-- The [SharedKeyCredential](/javascript/api/%40azure/storage-blob/sharedkeycredential?view=azure-node-legacy&preserve-view=true) class is responsible for wrapping storage account credentials to provide them to a request pipeline.
-- The [StorageURL](/javascript/api/%40azure/storage-blob/storageurl?view=azure-node-legacy&preserve-view=true) class is responsible for creating a new pipeline.
-- The [ServiceURL](/javascript/api/%40azure/storage-blob/serviceurl?view=azure-node-legacy&preserve-view=true) class models a URL used in the REST API. Instances of this class allow you to perform actions like listing containers and provide context information to generate container URLs.
-The instance of *ServiceURL* is used with the [ContainerURL](/javascript/api/%40azure/storage-blob/containerurl?view=azure-node-legacy&preserve-view=true) and [BlockBlobURL](/javascript/api/%40azure/storage-blob/blockbloburl?view=azure-node-legacy&preserve-view=true) instances to manage containers and blobs in your storage account.
-
-```javascript
-const containerURL = ContainerURL.fromServiceURL(serviceURL, containerName);
-const blockBlobURL = BlockBlobURL.fromContainerURL(containerURL, blobName);
-```
-
-The *containerURL* and *blockBlobURL* variables are reused throughout the sample to act on the storage account.
-
-At this point, the container doesn't exist in the storage account. The instance of *ContainerURL* represents a URL that you can act upon. By using this instance, you can create and delete the container. The location of this container equates to a location such as this:
-
-```bash
-https://<ACCOUNT_NAME>.blob.core.windows.net/demo
-```
-
-The *blockBlobURL* is used to manage individual blobs, allowing you to upload, download, and delete blob content. The URL represented here is similar to this location:
-
-```bash
-https://<ACCOUNT_NAME>.blob.core.windows.net/demo/quickstart.txt
-```
-
-As with the container, the block blob doesn't exist yet. The *blockBlobURL* variable is used later to create the blob by uploading content.
-
-### Using the Aborter class
-
-Requests made by the API can be set to time out after a given interval. The *Aborter* class is responsible for managing how requests are timed out. The following code creates a context where a set of requests is given 30 minutes to execute.
-
-```javascript
-const aborter = Aborter.timeout(30 * ONE_MINUTE);
-```
-
-Aborters give you control over requests by allowing you to:
-- designate the amount of time given for a batch of requests
-- designate how long an individual request has to execute in the batch
-- cancel requests
-- use the *Aborter.none* static member to stop your requests from timing out altogether
-### Create a container
-
-To create a container, the *ContainerURL*'s *create* method is used.
-
-```javascript
-await containerURL.create(aborter);
-console.log(`Container: "${containerName}" is created`);
-```
-
-As the name of the container is defined when calling *ContainerURL.fromServiceURL(serviceURL, containerName)*, calling the *create* method is all that's required to create the container.
-
-### Show container names
-
-Accounts can store a vast number of containers. The following code demonstrates how to list containers in a segmented fashion, which allows you to cycle through a large number of containers. The *showContainerNames* function is passed instances of *ServiceURL* and *Aborter*.
-
-```javascript
-console.log("Containers:");
-await showContainerNames(serviceURL, aborter);
-```
-
-The *showContainerNames* function uses the *listContainersSegment* method to request batches of container names from the storage account.
-
-```javascript
-async function showContainerNames(serviceURL, aborter) {
- let marker = undefined;
-
- do {
- const listContainersResponse = await serviceURL.listContainersSegment(aborter, marker);
- marker = listContainersResponse.nextMarker;
- for(let container of listContainersResponse.containerItems) {
- console.log(` - ${ container.name }`);
- }
- } while (marker);
-}
-```
-
-When the response is returned, the *containerItems* are iterated to log the names to the console.
-
-### Upload text
-
-To upload text to the blob, use the *upload* method.
-
-```javascript
-await blockBlobURL.upload(aborter, content, content.length);
-console.log(`Blob "${blobName}" is uploaded`);
-```
-
-Here the text and its length are passed into the method.
-
-### Upload a local file
-
-To upload a local file to the container, you need a container URL and the path to the file.
-
-```javascript
-await uploadLocalFile(aborter, containerURL, localFilePath);
-console.log(`Local file "${localFilePath}" is uploaded`);
-```
-
-The *uploadLocalFile* function calls the *uploadFileToBlockBlob* function, which takes the file path and an instance of the destination of the block blob as arguments.
-
-```javascript
-async function uploadLocalFile(aborter, containerURL, filePath) {
-
- filePath = path.resolve(filePath);
-
- const fileName = path.basename(filePath);
- const blockBlobURL = BlockBlobURL.fromContainerURL(containerURL, fileName);
-
- return await uploadFileToBlockBlob(aborter, filePath, blockBlobURL);
-}
-```
-
-### Upload a stream
-
-Uploading streams is also supported. This sample opens a local file as a stream to pass to the upload method.
-
-```javascript
-await uploadStream(aborter, containerURL, localFilePath);
-console.log(`Local file "${localFilePath}" is uploaded as a stream`);
-```
-
-The *uploadStream* function calls *uploadStreamToBlockBlob* to upload the stream to the storage container.
-
-```javascript
-async function uploadStream(aborter, containerURL, filePath) {
- filePath = path.resolve(filePath);
-
- const fileName = path.basename(filePath).replace('.md', '-stream.md');
- const blockBlobURL = BlockBlobURL.fromContainerURL(containerURL, fileName);
-
- const stream = fs.createReadStream(filePath, {
- highWaterMark: FOUR_MEGABYTES,
- });
-
- const uploadOptions = {
- bufferSize: FOUR_MEGABYTES,
- maxBuffers: 5,
- };
-
- return await uploadStreamToBlockBlob(
- aborter,
- stream,
- blockBlobURL,
- uploadOptions.bufferSize,
- uploadOptions.maxBuffers);
-}
-```
-
-During an upload, *uploadStreamToBlockBlob* allocates buffers to cache data from the stream in case a retry is necessary. The *maxBuffers* value designates the maximum number of buffers used, as each buffer creates a separate upload request. Ideally, more buffers equate to higher speeds, but at the cost of higher memory usage. The upload speed plateaus when the number of buffers is high enough that the bottleneck transitions to the network or disk instead of the client.
-
-### Show blob names
-
-Just as accounts can contain many containers, each container can potentially contain a vast number of blobs. Access to each blob in a container is available via an instance of the *ContainerURL* class.
-
-```javascript
-console.log(`Blobs in "${containerName}" container:`);
-await showBlobNames(aborter, containerURL);
-```
-
-The function *showBlobNames* calls *listBlobFlatSegment* to request batches of blobs from the container.
-
-```javascript
-async function showBlobNames(aborter, containerURL) {
- let marker = undefined;
-
- do {
-        const listBlobsResponse = await containerURL.listBlobFlatSegment(aborter, marker);
- marker = listBlobsResponse.nextMarker;
- for (const blob of listBlobsResponse.segment.blobItems) {
- console.log(` - ${ blob.name }`);
- }
- } while (marker);
-}
-```
-
-### Download a blob
-
-Once a blob is created, you can download the contents by using the *download* method.
-
-```javascript
-const downloadResponse = await blockBlobURL.download(aborter, 0);
-const downloadedContent = await streamToString(downloadResponse.readableStreamBody);
-console.log(`Downloaded blob content: "${downloadedContent}"`);
-```
-
-The response is returned as a stream. In this example, the stream is converted to a string by using the following *streamToString* helper function.
-
-```javascript
-// A helper method used to read a Node.js readable stream into a string
-async function streamToString(readableStream) {
- return new Promise((resolve, reject) => {
- const chunks = [];
- readableStream.on("data", data => {
- chunks.push(data.toString());
- });
- readableStream.on("end", () => {
- resolve(chunks.join(""));
- });
- readableStream.on("error", reject);
- });
-}
-```
-
-### Delete a blob
-
-The *delete* method from a *BlockBlobURL* instance deletes a blob from the container.
-
-```javascript
-await blockBlobURL.delete(aborter);
-console.log(`Block blob "${blobName}" is deleted`);
-```
-
-### Delete a container
-
-The *delete* method from a *ContainerURL* instance deletes a container from the storage account.
-
-```javascript
-await containerURL.delete(aborter);
-console.log(`Container "${containerName}" is deleted`);
-```
-
-## Clean up resources
-
-All data written to the storage account is automatically deleted at the end of the code sample.
-
-## Next steps
-
-This quickstart demonstrates how to manage blobs and containers in Azure Blob storage using Node.js. To learn more about working with this SDK, refer to the GitHub repository.
-
-> [!div class="nextstepaction"]
-> [Azure Storage v10 SDK for JavaScript repository](https://github.com/Azure/azure-storage-js)
-> [Azure Storage JavaScript API Reference](/javascript/api/overview/azure/storage-overview)
storage Storage Secure Access Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-secure-access-application.md
Last updated 06/10/2020
+ms.devlang: csharp
storage Storage Upload Process Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-upload-process-images.md
Last updated 06/24/2020
+ms.devlang: csharp, javascript
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Previously updated : 12/08/2021 Last updated : 02/16/2022
Azure Storage provides a layered security model. This model enables you to secur
Storage accounts have a public endpoint that is accessible through the internet. You can also create [Private Endpoints for your storage account](storage-private-endpoints.md), which assigns a private IP address from your VNet to the storage account, and secures all traffic between your VNet and the storage account over a private link. The Azure storage firewall provides access control for the public endpoint of your storage account. You can also use the firewall to block all access through the public endpoint when using private endpoints. Your storage firewall configuration also enables select trusted Azure platform services to access the storage account securely.
-An application that accesses a storage account when network rules are in effect still requires proper authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with an SAS token.
+An application that accesses a storage account when network rules are in effect still requires proper authorization for the request. Authorization is supported with Azure Active Directory (Azure AD) credentials for blobs and queues, with a valid account access key, or with a SAS token. When a blob container is configured for anonymous public access, requests to read data in that container don't require authorization, but the firewall rules remain in effect and will block anonymous traffic.
> [!IMPORTANT]
> Turning on firewall rules for your storage account blocks incoming requests for data by default, unless the requests originate from a service operating within an Azure Virtual Network (VNet) or from allowed public IP addresses. Requests that are blocked include those from other Azure services, from the Azure portal, from logging and metrics services, and so on.
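
As a minimal Azure CLI sketch (the account, resource group, and IP range are placeholders), you can deny public-endpoint traffic by default and then allow a specific range through the storage firewall:

```azurecli
az storage account update --name mystorageaccount --resource-group myresourcegroup --default-action Deny

az storage account network-rule add --account-name mystorageaccount --resource-group myresourcegroup --ip-address 203.0.113.0/24
```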
storage Monitor Table Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/monitor-table-storage.md
Last updated 11/10/2021
+ms.devlang: csharp
stream-analytics Stream Analytics Edge Csharp Udf Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-edge-csharp-udf-methods.md
Azure Stream Analytics offers a SQL-like query language for performing transformations and computations over streams of event data. There are many built-in functions, but some complex scenarios require additional flexibility. With .NET Standard user-defined functions (UDF), you can invoke your own functions written in any .NET standard language (C#, F#, etc.) to extend the Stream Analytics query language. UDFs allow you to perform complex math computations, import custom ML models using ML.NET, and use custom imputation logic for missing data. The UDF feature for Stream Analytics jobs is currently in preview and shouldn't be used in production workloads.
-.NET user-defined-function for cloud jobs is available in:
+## Regions
+
+The .NET user-defined-function feature is enabled for cloud jobs that run on [Stream Analytics clusters](./cluster-overview.md). Jobs that run on the Standard multi-tenant SKU can use this feature in the following public regions:
* West Central US
* North Europe
* East US
Azure Stream Analytics offers a SQL-like query language for performing transform
* East US 2
* West Europe
-If you are interested in using this feature in any another region, you can [request access](https://aka.ms/ccodereqregion). However, there is no such region restriction when using [Stream Analytics clusters](./cluster-overview.md).
+If you are interested in using this feature in any other region, you can [request access](https://aka.ms/ccodereqregion).
## Package path
The UDF preview currently has the following limitations:
* [Tutorial: Write a C# user-defined function for an Azure Stream Analytics job (Preview)](stream-analytics-edge-csharp-udf.md)
* [Tutorial: Azure Stream Analytics JavaScript user-defined functions](stream-analytics-javascript-user-defined-functions.md)
-* [Create an Azure Stream Analytics job in Visual Studio Code](quick-create-visual-studio-code.md)
+* [Create an Azure Stream Analytics job in Visual Studio Code](quick-create-visual-studio-code.md)
time-series-insights Time Series Insights Authentication And Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/time-series-insights-authentication-and-authorization.md
+ms.devlang: csharp
Last updated 02/23/2021
virtual-desktop Configure Host Pool Personal Desktop Assignment Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-host-pool-personal-desktop-assignment-type.md
Update-AzWvdSessionHost -HostPoolName <hostpoolname> -Name <sessionhostname> -Re
```

>[!IMPORTANT]
+> - Azure Virtual Desktop will not delete any VHD or profile data for unassigned personal desktops.
> - You must include the _-Force_ parameter when running the PowerShell cmdlet to unassign a personal desktop. If you don't include the _-Force_ parameter, you'll receive an error message.
> - There must be no existing user sessions on the session host when you unassign the user from the personal desktop. If there's an existing user session on the session host while you're unassigning it, you won't be able to unassign the personal desktop successfully.
> - If the session host has no user assignment, nothing will happen when you run this cmdlet.
Update-AzWvdSessionHost -HostPoolName <hostpoolname> -Name <sessionhostname> -Re
```

>[!IMPORTANT]
+> - Azure Virtual Desktop will not delete any VHD or profile data for reassigned personal desktops.
> - You must include the _-Force_ parameter when running the PowerShell cmdlet to reassign a personal desktop. If you don't include the _-Force_ parameter, you'll receive an error message.
> - There must be no existing user sessions on the session host when you reassign a personal desktop. If there's an existing user session on the session host while you're reassigning it, you won't be able to reassign the personal desktop successfully.
> - If the user principal name (UPN) you enter for the _-AssignedUser_ parameter is the same as the UPN currently assigned to the personal desktop, the cmdlet won't do anything.
virtual-desktop Connect Web https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/user-documentation/connect-web.md
The web client lets you access your Azure Virtual Desktop resources from a web b
>[!IMPORTANT]
>As of September 30, 2021, the Azure Virtual Desktop web client no longer supports Internet Explorer. We recommend that you use Microsoft Edge to connect to the web client instead. For more information, see our [blog post](https://aka.ms/WVDSupportIE11).
-While any HTML5-capable browser should work, we officially support the following operating systems and browsers.
+While any HTML5-capable browser should work, we officially support the following operating systems and browsers:
| Browser | Supported OS | Notes |
|-|-|-|
-| Microsoft Edge | Windows | |
-| Apple Safari | macOS | |
+| Microsoft Edge | Windows, macOS, Linux, Chrome OS | Version 79 or later |
+| Apple Safari | macOS | Version 11 or later |
| Mozilla Firefox | Windows, macOS, Linux | Version 55 or later |
-| Google Chrome | Windows, macOS, Linux, Chrome OS | |
+| Google Chrome | Windows, macOS, Linux, Chrome OS | Version 57 or later |
## Access remote resources feed
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
VMs on Azure now support the following patch orchestration modes:
**Manual:** - This mode is supported only for Windows VMs.-- This mode disables Automatic Updates on the Windows virtual machine.
+- This mode disables Automatic Updates on the Windows virtual machine. When deploying a VM using the CLI or PowerShell, setting `--enable-auto-updates` to `false` will also set `patchMode` to `manual` and will disable Automatic Updates (see the sketch after this list).
- This mode does not support availability-first patching.
- This mode should be set when using custom patching solutions.
- To use this mode on Windows VMs, set the property `osProfile.windowsConfiguration.enableAutomaticUpdates=false`, and set the property `osProfile.windowsConfiguration.patchSettings.patchMode=Manual` in the VM template.
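
As a sketch with placeholder names, the following CLI call creates a Windows VM with Automatic Updates disabled, which also sets the patch mode to `manual`:

```azurecli
az vm create \
    --resource-group myresourcegroup \
    --name myvm \
    --image Win2019Datacenter \
    --admin-username azureuser \
    --admin-password '<password>' \
    --enable-auto-updates false
```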
virtual-machines Expand Os Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/expand-os-disk.md
To register for the feature, use the following command:
Register-AzProviderFeature -FeatureName "LiveResize" -ProviderNamespace "Microsoft.Compute"
```
-It may take a few minutes for registration to take complete. To confirm that you've registered, use the following command:
+It may take a few minutes for registration to complete. To confirm that you've registered, use the following command:
```azurepowershell
-Register-AzProviderFeature -FeatureName "LiveResize" -ProviderNamespace "Microsoft.Compute"
+Get-AzProviderFeature -FeatureName "LiveResize" -ProviderNamespace "Microsoft.Compute"
```

## Resize a managed disk in the Azure portal
When you have expanded the disk for the VM, you need to go into the OS and expan
## Next steps
-You can also attach disks using the [Azure portal](attach-managed-disk-portal.md).
+You can also attach disks using the [Azure portal](attach-managed-disk-portal.md).
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/migrate-to-premium-storage-using-azure-site-recovery.md
Title: Migrate your Windows VMs to Azure Premium Storage with Azure Site Recovery description: Learn how to migrate your VM disks from a standard storage account to a premium storage account by using Azure Site Recovery. -+ Last updated 08/15/2017
virtual-machines Automation Configure Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-control-plane.md
The table below defines the parameters used for defining the Key Vault informati
> [!div class="mx-tdCol2BreakAll "]
> | Variable | Description | Type |
> | - | - | - |
-> | `firewall_deployment` | Boolean flag controlling if an Azure firewall is to be deployed | Mandatory |
+> | `firewall_deployment` | Boolean flag controlling if an Azure firewall is to be deployed | Mandatory |
+> | `bastion_deployment` | Boolean flag controlling if an Azure Bastion host is to be deployed | Mandatory |
> | `enable_purge_control_for_keyvaults` | Boolean flag controlling if purge control is enabled on the Key Vault. Use only for test deployments | Optional | > | `use_private_endpoint` | Boolean flag controlling if private endpoints are used. | Optional |
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
Save the Pipeline, to see the Save option select the chevron next to the Run but
This pipeline should be used when there's an update in the sap-automation repository that you want to use.
+## Import Ansible task from Visual Studio Marketplace
+
+The pipelines use a custom task to run Ansible. The custom task can be installed from [Ansible](https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.vss-services-ansible). Install it to your Azure DevOps organization before running the _Configuration and SAP installation_ or _SAP software acquisition_ pipelines.
+ ## Import Cleanup task from Visual Studio Marketplace
-The pipelines use a custom task to perform cleanup activities post deployment. The custom task can be installed from [Post Build Cleanup](https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup). Install it to your Azure DevOps organization before running the _Configuration and SAP installation_ or _SAP software acquisition_ pipelines.
+The pipelines use a custom task to perform cleanup activities post deployment. The custom task can be installed from [Post Build Cleanup](https://marketplace.visualstudio.com/items?itemName=mspremier.PostBuildCleanup). Install it to your Azure DevOps organization before running the pipelines.
## Variable definitions
Create a new variable group 'SDAF-General' using the Library page in the Pipelin
| - | - | - |
| `ANSIBLE_HOST_KEY_CHECKING` | false | |
| Deployment_Configuration_Path | WORKSPACES | For testing the sample configuration use 'samples/WORKSPACES' instead of WORKSPACES. |
-| Repository | https://github.com/Azure/sap-automation | |
| Branch | main | |
| S-Username | `<SAP Support user account name>` | |
| S-Password | `<SAP Support user password>` | Change variable type to secret by clicking the lock icon |
Save the variables and assign permissions for all pipelines using _Pipeline perm
### Environment specific variables
-As each environment may have different deployment credentials you'll need to create a variable group per environment, for example 'SDAF-DEV', 'SDAF-QA'.
+As each environment may have different deployment credentials, you'll need to create a variable group per environment, for example 'SDAF-MGMT', 'SDAF-DEV', 'SDAF-QA'.
+
+Create a new variable group 'SDAF-MGMT' for the control plane environment using the Library page in the Pipelines section. Add the following variables:
+
+| Variable | Value | Notes |
+| - | - | - |
+| Agent | Either 'Azure Pipelines' or the name of the agent pool containing the deployer, for instance 'MGMT-WEEU-POOL'. | Note: this pool will be created in a later step. |
+| ARM_CLIENT_ID | Service principal application id | |
+| ARM_CLIENT_SECRET | Service principal password | Change variable type to secret by clicking the lock icon |
+| ARM_SUBSCRIPTION_ID | Target subscription ID | |
+| ARM_TENANT_ID | Tenant ID for service principal | |
+| AZURE_CONNECTION_NAME | Previously created connection name | |
+| sap_fqdn | SAP Fully Qualified Domain Name, for example sap.contoso.net | Only needed if Private DNS isn't used. |
+
-Create a new variable group 'SDAF-DEV' using the Library page in the Pipelines section. Add the following variables:
+Clone the group for each environment 'SDAF-DEV', 'SDAF-QA', ... and update the values to reflect the environment.
| Variable | Value | Notes |
| - | - | - |
-| Agent | Either 'Azure Pipelines' or the name of the agent pool containing the deployer, for instance 'DEV-WEEU-POOL' Note, this pool will be created in a later step. |
| Agent | Either 'Azure Pipelines' or the name of the agent pool containing the deployer, for instance 'MGMT-WEEU-POOL'. | Note: this pool will be created in a later step. |
| ARM_CLIENT_ID | Service principal application id | | | ARM_CLIENT_SECRET | Service principal password | Change variable type to secret by clicking the lock icon | | ARM_SUBSCRIPTION_ID | Target subscription ID | |
virtual-machines Automation Configure Sap Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-sap-parameters.md
+
+ Title: Configure SAP parameters file for Ansible
+description: Define SAP parameters for Ansible
+++ Last updated : 02/14/2022++++
+# Configure sap-parameters file
+
+Ansible uses a file called sap-parameters.yaml that contains the parameters required for the Ansible playbooks.
+
+## Parameters
+
+The table below contains the parameters stored in the sap-parameters.yaml file; most of the values are pre-populated via the Terraform deployment.
+
+### Infrastructure
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Parameter | Description | Type |
+> | - | - | - |
+> | `sap_fqdn` | The FQDN suffix for the virtual machines to be added to the local hosts file | Required |
+
+### Application Tier
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Parameter | Description | Type |
+> | - | - | - |
+> | `bom_base_name` | The name of the SAP Application Bill of Materials file | Required |
+> | `sap_sid` | The SID of the SAP application | Required |
+> | `scs_high_availability` | Defines if the Central Services is deployed highly available | Required |
+> | `scs_instance_number` | Defines the instance number for ASCS | Required |
+> | `scs_lb_ip` | IP address of ASCS instance | Required |
+> | `ers_instance_number` | Defines the instance number for ERS | Required |
+> | `ers_lb_ip` | IP address of ERS instance | Required |
+> | `pas_instance_number` | Defines the instance number for PAS | Required |
+
+### Database Tier
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Parameter | Description | Type |
+> | - | - | - |
+> | `db_sid` | The SID of the SAP database | Required |
+> | `db_high_availability` | Defines if the database is deployed highly available | Required |
+> | `db_lb_ip` | IP address of the database load balancer | Required |
+> | `platform` | The database platform. Valid values are: ASE, DB2, HANA, ORACLE, SQLSERVER | Required |
+
+### NFS
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Parameter | Description | Type |
+> | - | - | - |
+> | `NFS_Provider` | Defines which NFS backend to use. The options are 'AFS' for Azure Files NFS, 'ANF' for Azure NetApp Files, 'NONE' for NFS from the SCS server, or 'NFS' for an external NFS solution. | Optional |
+> | `sap_mnt` | The NFS path for sap_mnt | Required |
+> | `sap_trans` | The NFS path for sap_trans | Required |
+> | `usr_sap_install_mountpoint` | The NFS path for usr/sap/install | Required |
+
+### Miscellaneous
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Parameter | Description | Type |
+> | - | - | - |
+> | `kv_name` | The name of the Azure key vault containing the system credentials | Required |
+> | `secret_prefix` | The prefix for the name of the secrets for the SID stored in key vault | Required |
+> | `upgrade_packages` | Update all installed packages on the virtual machines | Required |
+
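+The following is a minimal sketch of a complete sap-parameters.yaml file assembled from the parameters above (the disks dictionary described in the next section is omitted). All values shown, such as the SIDs, IP addresses, NFS paths, and key vault name, are illustrative placeholders rather than values from a real deployment:
+
+```yaml
+sap_fqdn: sap.contoso.net
+bom_base_name: S41909SPS03_v0010ms
+sap_sid: X01
+scs_high_availability: false
+scs_instance_number: "00"
+scs_lb_ip: 10.110.32.4
+ers_instance_number: "02"
+ers_lb_ip: 10.110.32.5
+pas_instance_number: "01"
+db_sid: XDB
+db_high_availability: false
+db_lb_ip: 10.110.96.4
+platform: HANA
+NFS_Provider: AFS
+sap_mnt: example.file.core.windows.net:/examplesa/sapmnt
+sap_trans: example.file.core.windows.net:/examplesa/saptrans
+usr_sap_install_mountpoint: example.file.core.windows.net:/examplesa/usrsapinstall
+kv_name: DEVWEEUSAP01userkv
+secret_prefix: DEV-WEEU-SAP01
+upgrade_packages: false
+```
+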
+### Disks
+
+`disks` is a dictionary that defines the disks of all the virtual machines in the SID.
+
+> [!div class="mx-tdCol2BreakAll "]
+> | Attribute | Description | Type |
+> | - | - | - |
+> | `host` | The computer name of the virtual machine | Required |
+> | `LUN` | Defines the LUN number that the disk is attached to | Required |
+> | `type` | This attribute is used to group the disks, each disk of the same type will be added to the LVM on the virtual machine | Required |
++
+See the following sample:
+```yaml
+disks:
+ - { host: 'rh8dxdb00l084', LUN: 0, type: 'sap' }
+ - { host: 'rh8dxdb00l084', LUN: 10, type: 'data' }
+ - { host: 'rh8dxdb00l084', LUN: 11, type: 'data' }
+ - { host: 'rh8dxdb00l084', LUN: 12, type: 'data' }
+ - { host: 'rh8dxdb00l084', LUN: 13, type: 'data' }
+ - { host: 'rh8dxdb00l084', LUN: 20, type: 'log' }
+ - { host: 'rh8dxdb00l084', LUN: 21, type: 'log' }
+ - { host: 'rh8dxdb00l084', LUN: 22, type: 'log' }
+ - { host: 'rh8dxdb00l084', LUN: 2, type: 'backup' }
+ - { host: 'rh8dxdb00l184', LUN: 0, type: 'sap' }
+ - { host: 'rh8dxdb00l184', LUN: 10, type: 'data' }
+ - { host: 'rh8dxdb00l184', LUN: 11, type: 'data' }
+ - { host: 'rh8dxdb00l184', LUN: 12, type: 'data' }
+ - { host: 'rh8dxdb00l184', LUN: 13, type: 'data' }
+ - { host: 'rh8dxdb00l184', LUN: 20, type: 'log' }
+ - { host: 'rh8dxdb00l184', LUN: 21, type: 'log' }
+ - { host: 'rh8dxdb00l184', LUN: 22, type: 'log' }
+ - { host: 'rh8dxdb00l184', LUN: 2, type: 'backup' }
+ - { host: 'rh8app00l84f', LUN: 0, type: 'sap' }
+ - { host: 'rh8app01l84f', LUN: 0, type: 'sap' }
+ - { host: 'rh8scs00l84f', LUN: 0, type: 'sap' }
+ - { host: 'rh8scs01l84f', LUN: 0, type: 'sap' }
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy SAP System](automation-deploy-system.md)
+
virtual-machines Automation Configure System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-system.md
Title: Configure SAP system parameters for automation
-description: Define the SAP system properties for the SAP deployment automation framework on Azure using a parameters JSON file.
+description: Define the SAP system properties for the SAP deployment automation framework on Azure using a parameters file.
virtual-machines Automation Run Ansible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-run-ansible.md
The **sap-parameters.yaml** contains information that Ansible uses for configura
# bom_base_name is the name of the SAP Application Bill of Materials file
-bom_base_name: S41909SPS03_v0006ms
+bom_base_name: S41909SPS03_v0010ms
# Set to true to instruct Ansible to update all the packages on the virtual machines
upgrade_packages: false
virtual-machines Automation Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-software.md
Configure the SAP parameters file:
```bash
cat <<EOF > sap-parameters.yaml
- bom_base_name: S41909SPS03_v0006ms
+ bom_base_name: S41909SPS03_v0010ms
kv_name: Name of your Management/Control Plane keyvault
..
EOF
Configure the SAP parameters file:
1. Update the following parameters:
- 1. Change the value of `bom_base_name` to `S41909SPS03_v0006ms`.
+ 1. Change the value of `bom_base_name` to `S41909SPS03_v0010ms`.
1. Change the value of `kv_name` to the name of the deployer key vault.
virtual-machines Automation Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-tutorial.md
A sample extract of a BOM file looks like:
```yaml
-name: 'S41909SPS03_v0007ms'
+name: 'S41909SPS03_v0010ms'
target: 'S/4 HANA 1909 SPS 03'
version: 7
For this example configuration, the resource group is `MGMT-NOEU-DEP00-INFRASTRU
```yaml
- bom_base_name: S41909SPS03_v0007ms
+ bom_base_name: S41909SPS03_v0010ms
```
For this example configuration, the resource group is `MGMT-NOEU-DEP00-INFRASTRU
```yaml
- bom_base_name: S41909SPS03_v0007ms
+ bom_base_name: S41909SPS03_v0010ms
kv_name: <Deployer KeyVault Name>
```
For this example configuration, the resource group is `MGMT-NOEU-DEP00-INFRASTRU
```yaml
- bom_base_name: S41909SPS03_v0007ms
+ bom_base_name: S41909SPS03_v0010ms
kv_name: <Deployer KeyVault Name>
check_storage_account: false
virtual-network Accelerated Networking How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-how-it-works.md
+
+ Title: How Accelerated Networking works in Linux and FreeBSD VMs
+description: Learn how Accelerated Networking works in Linux and FreeBSD VMs.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
+
+ms.devlang: na
+
+ vm-linux
+ Last updated : 02/15/2022+++
+# How Accelerated Networking works in Linux and FreeBSD VMs
+
+When a VM is created in Azure, a synthetic network interface is created for each virtual NIC in its configuration. The synthetic interface is a VMbus device and uses the `netvsc` driver. Network packets that use this synthetic interface flow through the virtual switch in the Azure host and onto the datacenter physical network.
+
+If the VM is configured with Accelerated Networking, a second network interface is created for each virtual NIC that is configured. The second interface is an SR-IOV Virtual Function (VF) offered by the physical network NIC in the Azure host. The VF interface shows up in the Linux guest as a PCI device, and uses the Mellanox `mlx4` or `mlx5` driver in Linux, since Azure hosts use physical NICs from Mellanox. Most network packets go directly between the Linux guest and the physical NIC without traversing the virtual switch or any other software that runs on the host. Because of the direct access to the hardware, network latency is lower and less CPU time is used to process network packets when compared with the synthetic interface.
+
+Different Azure hosts use different models of Mellanox physical NIC, so Linux automatically determines whether to use the `mlx4` or `mlx5` driver. The Azure infrastructure controls the placement of the VM on the Azure host. Because there's no customer option to specify which physical NIC a VM deployment uses, the VMs must include both drivers. If a VM is stopped or deallocated and then restarted, it might be redeployed on hardware with a different model of Mellanox physical NIC, and therefore might use the other Mellanox driver.
+
+FreeBSD provides the same support for Accelerated Networking as Linux when running in Azure. The remainder of this article describes Linux and uses Linux examples, but the same functionality is available in FreeBSD.
+
+## Bonding
+
+The synthetic network interface and VF interface are automatically paired and act as a single interface in most aspects that are seen by applications. The netvsc driver does the bonding. Depending on the Linux distro, udev rules and scripts might help in naming the VF interface and in network configuration. If the VM is configured with multiple virtual NICs, the Azure host provides a unique serial number for each one. Linux uses this serial number to properly pair the synthetic and VF interfaces for each virtual NIC.
+
+The synthetic and VF interfaces both have the same MAC address. Together they constitute a single NIC from the standpoint of other network entities that exchange packets with the virtual NIC in the VM. Other entities don't take any special action because of the existence of both the synthetic interface and the VF interface.
+
+Both interfaces are visible via the `ifconfig` or `ip addr` command in Linux. Here's example `ifconfig` output in Ubuntu 18.04:
+
+```output
+U1804:~$ ifconfig
+enP53091s1np0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST> mtu 1500
+ether 00:0d:3a:f5:76:bd txqueuelen 1000 (Ethernet)
+RX packets 365849 bytes 413711297 (413.7 MB)
+RX errors 0 dropped 0 overruns 0 frame 0
+TX packets 9447684 bytes 2206536829 (2.2 GB)
+TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
+
+eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
+inet 10.1.19.4 netmask 255.255.255.0 broadcast 10.1.19.255
+inet6 fe80::20d:3aff:fef5:76bd prefixlen 64 scopeid 0x20<link>
+ether 00:0d:3a:f5:76:bd txqueuelen 1000 (Ethernet)
+RX packets 8714212 bytes 4954919874 (4.9 GB)
+RX errors 0 dropped 0 overruns 0 frame 0
+TX packets 9103233 bytes 2183731687 (2.1 GB)
+TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
+```
+
+The synthetic interface always has a name of the form `eth\<n\>`. Depending on the Linux distro, the VF interface might have a name of the form `eth\<n\>`, or a name of a different form because of a udev rule that does renaming.
+
+You can determine whether a particular interface is the synthetic interface or the VF interface with the shell command that shows the device driver used by the interface:
+
+```bash
+ethtool -i <interface name> | grep driver
+```
+
+If the driver is `hv_netvsc`, it's the synthetic interface. The VF interface has a driver name that contains `mlx`. The VF interface is also identifiable because its flags field includes `SLAVE`. This flag indicates that it's under the control of the synthetic interface that has the same MAC address. Finally, IP addresses are assigned only to the synthetic interface, and the output of `ifconfig` or `ip addr` shows this distinction as well.
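+
+For example, running the driver check against both interfaces from the earlier `ifconfig` output shows the distinction. The interface names are taken from that example, and the exact VF driver name varies by NIC model (`mlx5_core` here, `mlx4_en` on older hardware):
+
+```output
+$ ethtool -i eth0 | grep driver
+driver: hv_netvsc
+$ ethtool -i enP53091s1np0 | grep driver
+driver: mlx5_core
+```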
+
+## Application Usage
+
+Applications should interact only with the synthetic interface, just like in any other networking environment. Outgoing network packets are passed from the netvsc driver to the VF driver and then transmitted through the VF interface. Incoming packets are received and processed on the VF interface before being passed to the synthetic interface. Exceptions are incoming TCP SYN packets and broadcast/multicast packets that are processed by the synthetic interface only.
+
+You can verify that packets are flowing over the VF interface from the output of `ethtool -S eth\<n\>`. The output lines that contain `vf` show the traffic over the VF interface. For example:
+
+```output
+U1804:~# ethtool -S eth0 | grep ' vf_'
+ vf_rx_packets: 111180
+ vf_rx_bytes: 395460237
+ vf_tx_packets: 9107646
+ vf_tx_bytes: 2184786508
+ vf_tx_dropped: 0
+```
+
+If these counters are incrementing on successive executions of the `ethtool` command, network traffic is flowing over the VF interface.
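+
+A simple way to check is to sample the counters twice, a few seconds apart, and compare; a minimal sketch, using the interface name from the earlier examples:
+
+```bash
+# Sample the VF counters twice, five seconds apart; increasing values
+# indicate that traffic is flowing over the VF interface.
+ethtool -S eth0 | grep ' vf_'
+sleep 5
+ethtool -S eth0 | grep ' vf_'
+```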
+
+The existence of the VF interface as a PCI device can be seen with the `lspci` command. For example, on a Generation 1 VM, you might see output similar to the following (Generation 2 VMs don't have the legacy PCI devices):
+
+```output
+U1804:~# lspci
+0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
+0000:00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 01)
+0000:00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
+0000:00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 02)
+0000:00:08.0 VGA compatible controller: Microsoft Corporation Hyper-V virtual VGA
+cf63:00:02.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx Virtual Function] (rev 80)
+```
+
+In this example, the last line of output identifies a VF from the Mellanox ConnectX-4 physical NIC.
+
+The `ethtool -l` or `ethtool -L` command (to get and set the number of transmit and receive queues) is an exception to the guidance to interact only with the `eth\<n\>` interface. This command can be used directly against the VF interface to control the number of queues for the VF interface. The number of queues for the VF interface is independent of the number of queues for the synthetic interface.
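+
+For example, using the VF interface name from the earlier output (the name on your VM will differ):
+
+```bash
+# Show the current and maximum queue counts for the VF interface
+ethtool -l enP53091s1np0
+
+# Set the number of combined transmit/receive queue pairs on the VF interface
+ethtool -L enP53091s1np0 combined 4
+```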
+
+## Interpreting Boot-up Messages
+
+During booting, Linux shows many messages related to the initialization and configuration of the VF interface. Information about the bonding with the synthetic interface is shown as well. Understanding these messages can be helpful in identifying any problems in the process.
+
+Here's example output from the `dmesg` command, trimmed to just the lines relevant to the VF interface. Depending on the Linux kernel version and distro in your VM, the messages might vary slightly, but the overall flow is the same.
+
+```output
+[ 2.327663] hv_vmbus: registering driver hv_netvsc
+[ 3.918902] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF slot 1 added
+```
+
+The netvsc driver for eth0 has been registered.
+
+```output
+[ 6.944883] hv_vmbus: registering driver hv_pci
+```
+
+The VMbus virtual PCI driver has been registered. This driver provides core PCI services in a Linux VM in Azure and must be registered before the VF interface can be detected and configured.
+
+```output
+[ 6.945132] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI VMBus probing: Using version 0x10002
+[ 6.947953] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI host bridge to bus cf63:00
+[ 6.947955] pci_bus cf63:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
+[ 6.948805] pci cf63:00:02.0: [15b3:1016] type 00 class 0x020000
+[ 6.957487] pci cf63:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
+[ 7.035464] pci cf63:00:02.0: enabling Extended Tags
+[ 7.040811] pci cf63:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cf63:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
+[ 7.041264] pci cf63:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
+```
+
+The PCI device with the listed GUID (assigned by the Azure host) has been detected. It's assigned a PCI domain ID (0xcf63 in this case) based on the GUID. The PCI domain ID must be unique across all PCI devices available in the VM. This uniqueness requirement spans other Mellanox VF interfaces, GPUs, NVMe devices, etc., that may be present in the VM.
+
+```output
+[ 7.128515] mlx5_core cf63:00:02.0: firmware version: 14.25.8362
+[ 7.139925] mlx5_core cf63:00:02.0: handle_hca_cap:524:(pid 12): log_max_qp value in current profile is 18, changing it to HCA capability limit (12)
+[ 7.342391] mlx5_core cf63:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
+```
+
+A Mellanox VF that uses the mlx5 driver has been detected, and the mlx5 driver begins its initialization of the device.
+
+```output
+[ 7.465085] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF registering: eth1
+[ 7.465119] mlx5_core cf63:00:02.0 eth1: joined to eth0
+```
+
+The corresponding synthetic interface that is using the netvsc driver has detected a matching VF. The mlx5 driver recognizes that it has been bonded with the synthetic interface.
+
+```output
+[ 7.466064] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
+[ 7.480575] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
+[ 7.480651] mlx5_core cf63:00:02.0 enP53091s1np0: renamed from eth1
+```
+
+The VF interface was initially named `eth1` by the Linux kernel. A udev rule renamed it to avoid confusion with the names given to the synthetic interfaces.
+
+```output
+[ 8.087962] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
+```
+
+The Mellanox VF interface is now up and active.
+
+```output
+[ 8.090127] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
+[ 9.654979] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
+```
+
+These messages indicate that the data path for the bonded pair has switched to use the VF interface. Then about 1.6 seconds later, it switches back to the synthetic interface. Such switches might occur two or three times during the boot process and are normal behavior as the configuration gets initialized.
+
+```output
+[ 9.909128] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
+[ 9.910595] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
+[ 11.411194] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
+[ 11.532147] mlx5_core cf63:00:02.0 enP53091s1np0: Disabling LRO, not supported in legacy RQ
+[ 11.731892] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
+[ 11.733216] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
+```
+
+The final message indicates that the data path has switched to using the VF interface. It's expected during normal operation of the VM.
++
+## Azure Host Servicing
+
+When Azure host servicing is performed, all VF interfaces might be temporarily removed from the VM during the servicing. When the servicing is complete, the VF interfaces are added back to the VM and normal operation continues. While the VM is operating without the VF interfaces, network traffic continues to flow through the synthetic interface without any disruption to applications. In this context, Azure host servicing might include updating the various components of the Azure network infrastructure or a full upgrade of the Azure host hypervisor software. Such servicing events occur at time intervals depending on the operational needs of the Azure infrastructure. These events typically can be expected several times over the course of a year. If applications interact only with the synthetic interface, the automatic switching between the VF interface and the synthetic interface ensures that workloads aren't disturbed by such servicing events. Latencies and CPU load might be higher during these periods because of the use of the synthetic interface. The duration of such periods is typically about 30 seconds, but sometimes might be as long as a few minutes.
+
+The removal and re-add of the VF interface during a servicing event is visible in the `dmesg` output in the VM. Here's typical output:
+
+```output
+[ 8160.911509] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched from VF: enP53091s1np0
+[ 8160.912120] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF unregistering: enP53091s1np0
+[ 8162.020138] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF slot 1 removed
+```
+
+The data path has been switched away from the VF interface, and the VF interface has been unregistered. At this point, Linux has removed all knowledge of the VF interface and is operating as if Accelerated Networking wasn't enabled.
+
+```output
+[ 8225.557263] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF slot 1 added
+[ 8225.557867] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI VMBus probing: Using version 0x10002
+[ 8225.566794] hv_pci e9ac9b28-cf63-4466-9ae3-4b849c3ee03b: PCI host bridge to bus cf63:00
+[ 8225.566797] pci_bus cf63:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
+[ 8225.571556] pci cf63:00:02.0: [15b3:1016] type 00 class 0x020000
+[ 8225.584903] pci cf63:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
+[ 8225.662860] pci cf63:00:02.0: enabling Extended Tags
+[ 8225.667831] pci cf63:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cf63:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
+[ 8225.667978] pci cf63:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
+```
+
+When the VF interface is re-added after servicing is complete, a new PCI device with the specified GUID is detected. It's assigned the same PCI domain ID (0xcf63) as before. The handling of the re-added VF interface is the same as during the initial boot.
+
+```output
+[ 8225.679672] mlx5_core cf63:00:02.0: firmware version: 14.25.8362
+[ 8225.888476] mlx5_core cf63:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
+[ 8226.021016] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: VF registering: eth1
+[ 8226.021058] mlx5_core cf63:00:02.0 eth1: joined to eth0
+[ 8226.021968] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
+[ 8226.026631] mlx5_core cf63:00:02.0 eth1: Disabling LRO, not supported in legacy RQ
+[ 8226.026699] mlx5_core cf63:00:02.0 enP53091s1np0: renamed from eth1
+[ 8226.265256] mlx5_core cf63:00:02.0 enP53091s1np0: Link up
+```
+
+The mlx5 driver initializes the VF interface, and the interface is now functional. The output is similar to the output during the initial boot.
+
+```output
+[ 8226.267380] hv_netvsc 000d3af5-76bd-000d-3af5-76bd000d3af5 eth0: Data path switched to VF: enP53091s1np0
+```
+
+The data path has been switched back to the VF interface.
+
+## Disable/Enable Accelerated Networking in a Running VM
+
+Accelerated Networking can be toggled on a virtual NIC in a running VM with the Azure CLI. For example:
+
+```azurecli
+az network nic update --name u1804895 --resource-group testrg --accelerated-networking false
+```
+
+Disabling Accelerated Networking on a running VM produces `dmesg` output that is the same as when the VF interface is removed for Azure host servicing. Enabling Accelerated Networking produces the same `dmesg` output as when the VF interface is re-added after Azure host servicing. You can use these Azure CLI commands to simulate Azure host servicing and verify that your applications don't incorrectly depend on direct interaction with the VF interface.
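+
+A minimal sketch of such a simulation, reusing the NIC and resource group names from the example above:
+
+```azurecli
+# Remove the VF interface, as happens during Azure host servicing
+az network nic update --name u1804895 --resource-group testrg --accelerated-networking false
+
+# Exercise the application while traffic flows over the synthetic interface only,
+# then restore the VF interface
+az network nic update --name u1804895 --resource-group testrg --accelerated-networking true
+```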
+
+## Next steps
+* Learn how to [create a VM with Accelerated Networking in PowerShell](../virtual-network/create-vm-accelerated-networking-powershell.md)
+* Learn how to [create a VM with Accelerated Networking using Azure CLI](../virtual-network/create-vm-accelerated-networking-cli.md)
+* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
virtual-network Accelerated Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/accelerated-networking-overview.md
+
+ Title: Accelerated Networking overview
+description: Accelerated Networking improves the networking performance of Azure VMs.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
+
+ms.devlang: na
+
+ vm-windows
+ Last updated : 02/15/2022+++
+# What is Accelerated Networking?
+
+Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, which reduces latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types. The following diagram illustrates how two VMs communicate with and without accelerated networking:
+
+![Communication between Azure virtual machines with and without accelerated networking](./media/create-vm-accelerated-networking/accelerated-networking.png)
+
+Without accelerated networking, all networking traffic in and out of the VM must traverse the host and the virtual switch. The virtual switch provides all policy enforcement, such as network security groups, access control lists, isolation, and other network virtualized services to network traffic.
+
+> [!NOTE]
+> To learn more about virtual switches, see [Hyper-V Virtual Switch](/windows-server/virtualization/hyper-v-virtual-switch/hyper-v-virtual-switch).
+
+With accelerated networking, network traffic arrives at the VM's network interface (NIC) and is then forwarded to the VM. All network policies that the virtual switch applies are now offloaded and applied in hardware. Because policy is applied in hardware, the NIC can forward network traffic directly to the VM. The NIC bypasses the host and the virtual switch, while it maintains all the policy it applied in the host.
+
+The benefits of accelerated networking only apply to the VM that it's enabled on. For the best results, enable this feature on at least two VMs connected to the same Azure virtual network. When communicating across virtual networks or connecting on-premises, this feature has minimal impact on overall latency.
+
+## Benefits
+
+- **Lower Latency / Higher packets per second (pps)**: Eliminating the virtual switch from the data path removes the time packets spend in the host for policy processing. It also increases the number of packets that can be processed inside the VM.
+
+- **Reduced jitter**: Virtual switch processing depends on the amount of policy that needs to be applied. It also depends on the workload of the CPU that's doing the processing. Offloading the policy enforcement to the hardware removes that variability by delivering packets directly to the VM. Offloading also removes the host-to-VM communication, all software interrupts, and all context switches.
+
+- **Decreased CPU utilization**: Bypassing the virtual switch in the host leads to less CPU utilization for processing network traffic.
+
+## Supported operating systems
+
+The following versions of Windows are supported:
+
+- **Windows Server 2019 Standard/Datacenter**
+- **Windows Server 2016 Standard/Datacenter**
+- **Windows Server 2012 R2 Standard/Datacenter**
+
+The following distributions are supported out of the box from the Azure Gallery:
+- **Ubuntu 14.04 with the linux-azure kernel**
+- **Ubuntu 16.04 or later**
+- **SLES12 SP3 or later**
+- **RHEL 7.4 or later**
+- **CentOS 7.4 or later**
+- **CoreOS Linux**
+- **Debian "Stretch" with backports kernel, Debian "Buster" or later**
+- **Oracle Linux 7.4 and later with Red Hat Compatible Kernel (RHCK)**
+- **Oracle Linux 7.5 and later with UEK version 5**
+- **FreeBSD 10.4, 11.1 & 12.0 or later**
+
+## Limitations and constraints
+
+### Supported VM instances
+
+Accelerated Networking is supported on most general purpose and compute-optimized instance sizes with 2 or more vCPUs. On instances that support hyperthreading, Accelerated Networking is supported on VM instances with 4 or more vCPUs.
+
+Support for Accelerated Networking can be found in the individual [virtual machine sizes](../virtual-machines/sizes.md) documentation.
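+
+You can also query this support with the Azure CLI. The following sketch assumes the SKU capability is named `AcceleratedNetworkingEnabled` and uses *westus2* as an example region:
+
+```azurecli
+# List the VM sizes in a region that report support for Accelerated Networking
+az vm list-skus \
+  --location westus2 \
+  --resource-type virtualMachines \
+  --query "[?capabilities[?name=='AcceleratedNetworkingEnabled' && value=='True']].name" \
+  --output table
+```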
+
+### Custom images
++
+If you're using a custom image and your image supports Accelerated Networking, make sure that you have the required drivers to work with Mellanox ConnectX-3, ConnectX-4 Lx, and ConnectX-5 NICs on Azure. Also, Accelerated Networking requires network configurations that exempt the configuration of the virtual functions (mlx4_en and mlx5_core drivers). In images that have cloud-init >=19.4, networking is correctly configured to support Accelerated Networking during provisioning.
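+
+One way to verify that the required drivers are present in a custom Linux image is to query the kernel module information from a VM built from that image; a minimal sketch:
+
+```bash
+# Confirm the Mellanox driver modules are available in the image's kernel
+modinfo mlx4_en | head -3
+modinfo mlx5_core | head -3
+
+# After boot on Accelerated Networking hardware, confirm a Mellanox module is loaded
+lsmod | grep mlx
+```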
+
+### Regions
+
+Accelerated networking is available in all global Azure regions and Azure Government Cloud.
+
+### Enabling accelerated networking on a running VM
+
+A supported VM size without accelerated networking enabled can only have the feature enabled when it's stopped and deallocated.
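+
+For example, with the Azure CLI (the resource group, VM, and NIC names are placeholders):
+
+```azurecli
+az vm deallocate --resource-group myResourceGroup --name myVM
+az network nic update --resource-group myResourceGroup --name myNic --accelerated-networking true
+az vm start --resource-group myResourceGroup --name myVM
+```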
+
+### Deployment through Azure Resource Manager
+
+Virtual machines (classic) can't be deployed with accelerated networking.
+
+## Next steps
+* Learn [how Accelerated Networking works](./accelerated-networking-how-it-works.md)
+* Learn how to [create a VM with Accelerated Networking in PowerShell](./create-vm-accelerated-networking-powershell.md)
+* Learn how to [create a VM with Accelerated Networking using Azure CLI](./create-vm-accelerated-networking-cli.md)
+* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
virtual-network Create Vm Accelerated Networking Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-cli.md
Title: Create an Azure VM with Accelerated Networking using Azure CLI
description: Learn how to create a Linux virtual machine with Accelerated Networking enabled. documentationcenter: na-+ editor: '' tags: azure-resource-manager ms.assetid:
+ms.devlang: na
na Previously updated : 01/10/2019- Last updated : 02/15/2022+ # Create a Linux virtual machine with Accelerated Networking using Azure CLI
-In this tutorial, you learn how to create a Linux virtual machine (VM) with Accelerated Networking. To create a Windows VM with Accelerated Networking, see [Create a Windows VM with Accelerated Networking](create-vm-accelerated-networking-powershell.md). Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types. The following picture shows communication between two VMs with and without accelerated networking:
-
-![Comparison](./media/create-vm-accelerated-networking/accelerated-networking.png)
-
-Without accelerated networking, all networking traffic in and out of the VM must traverse the host and the virtual switch. The virtual switch provides all policy enforcement, such as network security groups, access control lists, isolation, and other network virtualized services to network traffic. To learn more about virtual switches, read the [Hyper-V network virtualization and virtual switch](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj134230(v=ws.11)) article.
-
-With accelerated networking, network traffic arrives at the virtual machine's network interface (NIC), and is then forwarded to the VM. All network policies that the virtual switch applies are now offloaded and applied in hardware. Applying policy in hardware enables the NIC to forward network traffic directly to the VM, bypassing the host and the virtual switch, while maintaining all the policy it applied in the host.
-
-The benefits of accelerated networking only apply to the VM that it is enabled on. For the best results, it is ideal to enable this feature on at least two VMs connected to the same Azure virtual network (VNet). When communicating across VNets or connecting on-premises, this feature has minimal impact to overall latency.
-
-## Benefits
-* **Lower Latency / Higher packets per second (pps):** Removing the virtual switch from the datapath removes the time packets spend in the host for policy processing and increases the number of packets that can be processed inside the VM.
-* **Reduced jitter:** Virtual switch processing depends on the amount of policy that needs to be applied and the workload of the CPU that is doing the processing. Offloading the policy enforcement to the hardware removes that variability by delivering packets directly to the VM, removing the host to VM communication and all software interrupts and context switches.
-* **Decreased CPU utilization:** Bypassing the virtual switch in the host leads to less CPU utilization for processing network traffic.
-
-## Supported operating systems
-The following distributions are supported out of the box from the Azure Gallery:
-* **Ubuntu 14.04 with the linux-azure kernel**
-* **Ubuntu 16.04 or later**
-* **SLES12 SP3 or later**
-* **RHEL 7.4 or later**
-* **CentOS 7.4 or later**
-* **CoreOS Linux**
-* **Debian "Stretch" with backports kernel, Debian "Buster" or later**
-* **Oracle Linux 7.4 and later with Red Hat Compatible Kernel (RHCK)**
-* **Oracle Linux 7.5 and later with UEK version 5**
-* **FreeBSD 10.4, 11.1 & 12.0 or later**
-
-## Limitations and Constraints
-
-### Supported VM instances
-Accelerated Networking is supported on most general purpose and compute-optimized instance sizes with 2 or more vCPUs. On instances that support hyperthreading, Accelerated Networking is supported on VM instances with 4 or more vCPUs.
-
-Support for Accelerated Networking can be found in the individual [virtual machine sizes](../virtual-machines/sizes.md) documentation.
-
-### Custom Images
-If you're using a custom image and your image supports Accelerated Networking, make sure that you have the required drivers to work with Mellanox ConnectX-3, ConnectX-4 Lx, and ConnectX-5 NICs on Azure. Also, Accelerated Networking requires network configurations that exempt the configuration of the virtual functions (mlx4_en and mlx5_core drivers). In images that have cloud-init >=19.4, networking is correctly configured to support Accelerated Networking during provisioning.
-
-### Regions
-Available in all public Azure regions as well as Azure Government Clouds.
-
-<!-- ### Network interface creation
-Accelerated networking can only be enabled for a new NIC. It cannot be enabled for an existing NIC.
-removed per issue https://github.com/MicrosoftDocs/azure-docs/issues/9772 -->
-### Enabling Accelerated Networking on a running VM
-A supported VM size without accelerated networking enabled can only have the feature enabled when it is stopped and deallocated.
-### Deployment through Azure Resource Manager
-Virtual machines (classic) cannot be deployed with Accelerated Networking.
-
-## Create a Linux VM with Azure Accelerated Networking
## Portal creation
-Though this article provides steps to create a virtual machine with accelerated networking using the Azure CLI, you can also [create a virtual machine with accelerated networking using the Azure portal](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json). When creating a virtual machine in the portal, in the **Create a virtual machine** blade, choose the **Networking** tab. In this tab, there is an option for **Accelerated networking**. If you have chosen a [supported operating system](#supported-operating-systems) and [VM size](#supported-vm-instances), this option will automatically populate to "On." If not, it will populate the "Off" option for Accelerated Networking and give the user a reason why it is not be enabled.
+
+Though this article provides steps to create a virtual machine with accelerated networking using the Azure CLI, you can also [create a virtual machine with accelerated networking using the Azure portal](../virtual-machines/linux/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json). When creating a virtual machine in the portal, in the **Create a virtual machine** blade, choose the **Networking** tab. In this tab, there is an option for **Accelerated networking**. If you have chosen a [supported operating system](./accelerated-networking-overview.md#supported-operating-systems) and [VM size](./accelerated-networking-overview.md#supported-vm-instances), this option will automatically populate to "On." If not, it will populate the "Off" option for Accelerated Networking and give the user a reason why it isn't enabled.
You can also enable or disable accelerated networking through the portal after VM creation by navigating to the network interface and clicking the button at the top of the **Overview** blade.
-* *Note:* Only supported operating systems can be enabled through the portal. If you are using a custom image, and your image supports Accelerated Networking, please create your VM using CLI or PowerShell.
+>[!NOTE]
+> Only supported operating systems can be enabled through the portal. If you're using a custom image, and your image supports Accelerated Networking, create your VM using CLI or PowerShell.
-After the virtual machine is created, you can confirm Accelerated Networking is enabled by following the instructions in the [Confirm that accelerated networking is enabled](#confirm-that-accelerated-networking-is-enabled).
+After the VM is created, you can confirm that Accelerated Networking is enabled by following the [confirmation instructions](#confirm-that-accelerated-networking-is-enabled).
## CLI creation

### Create a virtual network
Create a resource group with [az group create](/cli/azure/group). The following
az group create --name myResourceGroup --location centralus
```
-Select a supported Linux region listed in [Linux accelerated networking](https://azure.microsoft.com/updates/accelerated-networking-in-expanded-preview).
+Select a supported Linux region listed in [Linux Accelerated Networking](https://azure.microsoft.com/updates/accelerated-networking-in-expanded-preview).
Create a virtual network with [az network vnet create](/cli/azure/network/vnet). The following example creates a virtual network named *myVnet* with one subnet:
az network nsg rule create \
  --destination-port-range 22
```
-### Create a network interface with accelerated networking
+### Create a network interface with Accelerated Networking
-Create a public IP address with [az network public-ip create](/cli/azure/network/public-ip). A public IP address isn't required if you don't plan to access the virtual machine from the Internet, but to complete the steps in this article, it is required.
+Create a public IP address with [az network public-ip create](/cli/azure/network/public-ip). A public IP address isn't required if you don't plan to access the VM from the Internet. However, it's required to complete the steps in this article.
```azurecli
az network public-ip create \
az network public-ip create \
  --resource-group myResourceGroup
```
-Create a network interface with [az network nic create](/cli/azure/network/nic) with accelerated networking enabled. The following example creates a network interface named *myNic* in the *mySubnet* subnet of the *myVnet* virtual network and associates the *myNetworkSecurityGroup* network security group to the network interface:
+Create a network interface with [az network nic create](/cli/azure/network/nic) with Accelerated Networking enabled. The following example creates a network interface named *myNic* in the *mySubnet* subnet of the *myVnet* virtual network and associates the *myNetworkSecurityGroup* network security group to the network interface:
```azurecli
az network nic create \
Once the VM is created, output similar to the following example output is return
### Confirm that accelerated networking is enabled
-Use the following command to create an SSH session with the VM. Replace `<your-public-ip-address>` with the public IP address assigned to the virtual machine you created, and replace *azureuser* if you used a different value for `--admin-username` when you created the VM.
+Use the following command to create an SSH session with the VM. Replace `<your-public-ip-address>` with the public IP address assigned to the virtual machine that you created, and replace *azureuser* if you used a different value for `--admin-username` when you created the VM.
```bash
ssh azureuser@<your-public-ip-address>
From the Bash shell, enter `uname -r` and confirm that the kernel version is one
* **CentOS**: 3.10.0-693
-Confirm the Mellanox VF device is exposed to the VM with the `lspci` command. The returned output is similar to the following output:
+Confirm that the Mellanox VF device is exposed to the VM with the `lspci` command. The returned output is similar to the following output:
```output
0000:00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (AGP disabled) (rev 03)
Confirm the Mellanox VF device is exposed to the VM with the `lspci` command. Th
0001:00:02.0 Ethernet controller: Mellanox Technologies MT27500/MT27520 Family [ConnectX-3/ConnectX-3 Pro Virtual Function]
```
-Check for activity on the VF (virtual function) with the `ethtool -S eth0 | grep vf_` command. If you receive output similar to the following sample output, accelerated networking is enabled and working.
+Check for activity on the VF (virtual function) with the `ethtool -S eth0 | grep vf_` command. If you receive output similar to the following sample output, accelerated networking is enabled and active.
```output
vf_rx_packets: 992956
Accelerated Networking is now enabled for your VM.
## Handle dynamic binding and revocation of virtual function

Applications must run over the synthetic NIC that is exposed in the VM. If the application runs directly over the VF NIC, it doesn't receive **all** packets that are destined to the VM, since some packets show up over the synthetic interface.
-If you run an application over the synthetic NIC, it guarantees that the application receives **all** packets that are destined to it. It also makes sure that the application keeps running, even if the VF is revoked when the host is being serviced.
+Running an application over the synthetic NIC guarantees that the application receives **all** packets that are destined to it. It also makes sure that the application keeps running, even if the VF is revoked during host servicing.
Applications binding to the synthetic NIC is a **mandatory** requirement for all applications taking advantage of **Accelerated Networking**.

## Enable Accelerated Networking on existing VMs
-If you have created a VM without Accelerated Networking, it is possible to enable this feature on an existing VM. The VM must support Accelerated Networking by meeting the following prerequisites that are also outlined above:
+If you've created a VM without Accelerated Networking, it's possible to enable this feature on an existing VM. The VM must support Accelerated Networking by meeting the following prerequisites:
* The VM must be a supported size for Accelerated Networking * The VM must be a supported Azure Gallery image (and kernel version for Linux)
az vm deallocate \
  --name myVM
```
-Important, please note, if your VM was created individually, without an availability set, you only need to stop/deallocate the individual VM to enable Accelerated Networking. If your VM was created with an availability set, all VMs contained in the availability set will need to be stopped/deallocated before enabling Accelerated Networking on any of the NICs.
+If your VM was created individually without an availability set, you only need to stop or deallocate the individual VM to enable Accelerated Networking. If your VM was created with an availability set, all VMs contained in the set must be stopped or deallocated before enabling Accelerated Networking on any of the NICs.
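+
+A sketch of deallocating every VM in an availability set in one step; the resource group and availability set names are placeholders:
+
+```azurecli
+az vm deallocate --ids $(az vm availability-set show \
+  --resource-group myResourceGroup \
+  --name myAvailabilitySet \
+  --query "virtualMachines[].id" \
+  --output tsv)
+```
+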
Once stopped, enable Accelerated Networking on the NIC of your VM:
az vm start --resource-group myResourceGroup \
```

### VMSS
-VMSS is slightly different but follows the same workflow. First, stop the VMs:
+VMSS is slightly different but follows the same workflow. First, stop the VMs:
```azurecli
az vmss deallocate \
az vmss update --name myvmss \
  --set virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].enableAcceleratedNetworking=true
```
-Please note, a VMSS has VM upgrades that apply updates using three different settings, automatic, rolling and manual. In these instructions the policy is set to automatic so that the VMSS will pick up the changes immediately after restarting. To set it to automatic so that the changes are immediately picked up:
+>[!NOTE]
+> A VMSS has VM upgrades that apply updates using three different settings: automatic, rolling, and manual. In these instructions, the policy is set to automatic so that the VMSS picks up the changes immediately after a reboot. To set the policy to automatic so that the changes are immediately picked up:
```azurecli
az vmss update \
az vmss start \
  --resource-group myrg
```
-Once you restart, wait for the upgrades to finish but once completed, the VF will appear inside the VM. (Please make sure you are using a supported OS and VM size.)
+After you restart, wait for the upgrades to finish. Once they're completed, the VF appears inside the VM. (Make sure you're using a supported OS and VM size.)
### Resizing existing VMs with Accelerated Networking VMs with Accelerated Networking enabled can only be resized to VMs that support Accelerated Networking.
-A VM with Accelerated Networking enabled cannot be resized to a VM instance that does not support Accelerated Networking using the resize operation. Instead, to resize one of these VMs:
+A VM with Accelerated Networking enabled can't be resized to a VM instance that doesn't support Accelerated Networking using the resize operation. Instead, to resize one of these VMs (see the sketch after this list):
* Stop/Deallocate the VM or if in an availability set/VMSS, stop/deallocate all the VMs in the set/VMSS. * Accelerated Networking must be disabled on the NIC of the VM or if in an availability set/VMSS, all VMs in the set/VMSS.
-* Once Accelerated Networking is disabled, the VM/availability set/VMSS can be moved to a new size that does not support Accelerated Networking and restarted.
+* Once Accelerated Networking is disabled, the VM/availability set/VMSS can be moved to a new size that doesn't support Accelerated Networking and restarted.
+
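+A sketch of the full resize flow for an individual VM; the resource names and target size are placeholders:
+
+```azurecli
+az vm deallocate --resource-group myResourceGroup --name myVM
+az network nic update --resource-group myResourceGroup --name myNic --accelerated-networking false
+az vm resize --resource-group myResourceGroup --name myVM --size Standard_B2s
+az vm start --resource-group myResourceGroup --name myVM
+```
+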
+## Next steps
+* Learn [how Accelerated Networking works](./accelerated-networking-how-it-works.md)
+* Learn how to [create a VM with Accelerated Networking in PowerShell](../virtual-network/create-vm-accelerated-networking-powershell.md)
+* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
+
virtual-network Create Vm Accelerated Networking Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-vm-accelerated-networking-powershell.md
Title: Create Windows VM with accelerated networking - Azure PowerShell
-description: Create a Windows virtual machine (VM) with accelerated networking to greatly improve its networking performance.
+description: Create a Windows virtual machine (VM) with Accelerated Networking for improved network performance.
documentationcenter: ''
editor: ''
ms.assetid:
+ms.devlang: na
vm-windows Previously updated : 04/15/2020 Last updated : 02/15/2022 # Create a Windows VM with accelerated networking using Azure PowerShell
-In this tutorial, you learn how to create a Windows virtual machine (VM) with accelerated networking.
-
-> [!NOTE]
-> To use accelerated networking with a Linux virtual machine, see [Create a Linux VM with accelerated networking](create-vm-accelerated-networking-cli.md).
-
-Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, which reduces latency, jitter, and CPU utilization for the most demanding network workloads on supported VM types. The following diagram illustrates how two VMs communicate with and without accelerated networking:
-
-![Communication between Azure virtual machines with and without accelerated networking](./media/create-vm-accelerated-networking/accelerated-networking.png)
-
-Without accelerated networking, all networking traffic in and out of the VM must traverse the host and the virtual switch. The virtual switch provides all policy enforcement, such as network security groups, access control lists, isolation, and other network virtualized services to network traffic.
-
-> [!NOTE]
-> To learn more about virtual switches, see [Hyper-V Virtual Switch](/windows-server/virtualization/hyper-v-virtual-switch/hyper-v-virtual-switch).
-
-With accelerated networking, network traffic arrives at the VM's network interface (NIC) and is then forwarded to the VM. All network policies that the virtual switch applies are now offloaded and applied in hardware. Because policy is applied in hardware, the NIC can forward network traffic directly to the VM. The NIC bypasses the host and the virtual switch, while it maintains all the policy it applied in the host.
-
-The benefits of accelerated networking only apply to the VM that it's enabled on. For the best results, enable this feature on at least two VMs connected to the same Azure virtual network. When communicating across virtual networks or connecting on-premises, this feature has minimal impact to overall latency.
-
-## Benefits
--- **Lower Latency / Higher packets per second (pps)**: Eliminating the virtual switch from the data path removes the time packets spend in the host for policy processing. It also increases the number of packets that can be processed inside the VM.--- **Reduced jitter**: Virtual switch processing depends on the amount of policy that needs to be applied. It also depends on the workload of the CPU that's doing the processing. Offloading the policy enforcement to the hardware removes that variability by delivering packets directly to the VM. Offloading also removes the host-to-VM communication, all software interrupts, and all context switches.--- **Decreased CPU utilization**: Bypassing the virtual switch in the host leads to less CPU utilization for processing network traffic.-
-## Supported operating systems
-
-The following versions of Windows are supported:
--- **Windows Server 2019 Standard/Datacenter**-- **Windows Server 2016 Standard/Datacenter** -- **Windows Server 2012 R2 Standard/Datacenter**-
-## Limitations and constraints
-
-### Supported VM instances
-
-Accelerated Networking is supported on most general purpose and compute-optimized instance sizes with 2 or more vCPUs. On instances that support hyperthreading, Accelerated Networking is supported on VM instances with 4 or more vCPUs.
-
-Support for Accelerated Networking can be found in the individual [virtual machine sizes](../virtual-machines/sizes.md) documentation.
-
-### Custom images
-
-If you're using a custom image and your image supports Accelerated Networking, be sure that you have the required drivers that work with Mellanox ConnectX-3 and ConnectX-4 Lx NICs on Azure.
-
-### Regions
-
-Accelerated networking is available in all global Azure regions and Azure Government Cloud.
-
-### Enabling accelerated networking on a running VM
-
-A supported VM size without accelerated networking enabled can only have the feature enabled when it's stopped and deallocated.
-
-### Deployment through Azure Resource Manager
-
-Virtual machines (classic) can't be deployed with accelerated networking.
- ## VM creation using the portal
-Though this article provides steps to create a VM with accelerated networking using Azure PowerShell, you can also [use the Azure portal to create a virtual machine](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that enables accelerated networking. When you create a VM in the portal, in the **Create a virtual machine** page, choose the **Networking** tab. This tab has an option for **Accelerated networking**. If you have chosen a [supported operating system](#supported-operating-systems) and [VM size](#supported-vm-instances), this option is automatically set to **On**. Otherwise, the option is set to **Off**, and Azure displays the reason why it can't be enabled.
+Though this article provides steps to create a VM with accelerated networking using Azure PowerShell, you can also [use the Azure portal to create a virtual machine](../virtual-machines/windows/quick-create-portal.md?toc=%2fazure%2fvirtual-network%2ftoc.json) that enables accelerated networking. When you create a VM in the portal, in the **Create a virtual machine** page, choose the **Networking** tab. This tab has an option for **Accelerated networking**. If you have chosen a [supported operating system](./accelerated-networking-overview.md#supported-operating-systems) and [VM size](./accelerated-networking-overview.md#supported-vm-instances), this option is automatically set to **On**. Otherwise, the option is set to **Off**, and Azure displays the reason why it can't be enabled.
You can also enable or disable accelerated networking through the portal after VM creation by navigating to the network interface and clicking the button at the top of the **Overview** blade.

> [!NOTE]
Once you create the VM in Azure, connect to the VM and confirm that the Ethernet
7. In the **Device Manager** window, expand the **Network adapters** node.
-8. Confirm that the **[Mellanox ConnectX-3 Virtual Function Ethernet Adapter](https://www.mellanox.com/products/adapter-software/ethernet/windows/winof-2)** appears, as shown in the following image:
+8. Confirm that the **Mellanox ConnectX-3 Virtual Function Ethernet Adapter** appears, as shown in the following image:
![Mellanox ConnectX-3 Virtual Function Ethernet Adapter, new network adapter for accelerated networking, Device Manager](./media/create-vm-accelerated-networking/device-manager.png)
A VM with accelerated networking enabled can't be resized to a VM instance that
2. Disable accelerated networking on the NIC of the VM. For an availability set or scale set, disable accelerated networking on the NICs of all VMs in the availability set or scale set.
3. After you disable accelerated networking, move the VM, availability set, or scale set to a new size that doesn't support accelerated networking, and then restart them.
+## Next steps
+* Learn [how Accelerated Networking works](./accelerated-networking-how-it-works.md)
+* Learn how to [create a VM with Accerelated Networking using Azure CLI](./create-vm-accelerated-networking-cli.md)
+* Improve latency with an [Azure proximity placement group](../virtual-machines/co-location.md)
virtual-network Virtual Network Service Endpoint Policies Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-overview.md
No centralized logging is available for service endpoint policies. For service r
- Validate whether Azure Storage is configured to allow access from the virtual network over endpoints, or whether the default policy for the resource is set to *Allow All*. - Ensure the accounts are not **classic storage accounts** with service endpoint policies on the subnet. - A managed Azure Service stopped working after applying a Service Endpoint Policy over the subnet
- - Managed services are not supported with service endpoint policies at this time. *Watch this space for updates*.
+ - Managed services other than Azure SQL Managed Instance are not currently supported with service endpoints.
- Access to Managed Storage Accounts stopped working after applying a Service Endpoint Policy over the subnet - Managed Storage Accounts are not supported with service endpoint policies. If configured, policies will deny access to all Managed Storage Accounts, by default. If your application needs access to Managed Storage Accounts, endpoint policies should not be used for this traffic.
Virtual networks and Azure Storage accounts can be in the same or different subs
- Virtual networks must be in the same region as the service endpoint policy. - You can only apply service endpoint policy on a subnet if service endpoints are configured for the Azure services listed in the policy. - You can't use service endpoint policies for traffic from your on-premises network to Azure services.-- Azure managed services do not currently support Endpoint policies. This includes managed services deployed into the shared subnets (e.g. *Azure Batch, Azure ADDS, Azure Application Gateway, Azure VPN Gateway, Azure Firewall*) or into the dedicated subnets (e.g. *Azure App Service Environment, Azure Redis Cache, Azure API Management, Azure SQL MI, classic managed services*).
+- Azure managed services other than Azure SQL Managed Instance do not currently support endpoint policies. This includes managed services deployed into shared subnets (such as *Azure Batch, Azure ADDS, Azure Application Gateway, Azure VPN Gateway, Azure Firewall*) or into dedicated subnets (such as *Azure App Service Environment, Azure Redis Cache, Azure API Management, classic managed services*).
> [!WARNING] > Azure services deployed into your virtual network, such as Azure HDInsight, access other Azure services, such as Azure Storage, for infrastructure requirements. Restricting endpoint policy to specific resources could break access to these infrastructure resources for the Azure services deployed in your virtual network.
virtual-wan Global Hub Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/global-hub-profile.md
Azure Virtual WAN offers two types of connectivity for remote users: Global and
## Global profile
-The profile points to a load balancer that includes all active User VPN hubs. The user is directed to the hub that is closest to the user's geographic location. This type of connectivity is useful when users travel to different locations frequently. To download the **global** profile:
+The global profile associated with a User VPN Configuration points to a load balancer that includes all active User VPN hubs using that User VPN Configuration. A user connected to the global profile is directed to the hub that is closest to the user's geographic location. This type of connectivity is useful when users travel to different locations frequently.
+
+For example, you can associate a VPN configuration with two different Virtual WAN hubs, one in West US and one in Southeast Asia. If a user connects to the global profile associated with the User VPN configuration, they connect to the closest Virtual WAN hub based on their location.
+
+To download the **global** profile:
1. Navigate to the virtual WAN.
2. Click **User VPN configuration**.
The profile points to a load balancer that includes all active User VPN hubs. Th
![Global profile](./media/global-hub-profile/global1.png)
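+If you prefer scripting, the WAN-level (global) profile can also be generated with Azure PowerShell, as in the sketch below. The resource names are placeholders; the cmdlet returns a URL from which the profile package can be downloaded.
+
+```azurepowershell-interactive
+# Generate the WAN-level (global) User VPN profile; names are placeholders
+$vpnServerConfig = Get-AzVpnServerConfiguration -ResourceGroupName "myRG" -Name "myUserVpnConfig"
+Get-AzVirtualWanVpnServerConfigurationVpnProfile -Name "myVirtualWAN" -ResourceGroupName "myRG" `
+    -VpnServerConfiguration $vpnServerConfig -AuthenticationMethod EAPTLS
+```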
+### Include or exclude a hub from the global profile
+
+By default, every hub using a specific User VPN configuration is included in the corresponding global VPN profile. You can choose to exclude a hub from the global VPN profile, which means users connecting with the global VPN profile won't be load-balanced to that hub's gateway.
+
+To check whether a hub is included in the global VPN profile:
+
+1. Navigate to the hub.
+1. Navigate to **User VPN (Point to site)** under **Connectivity** in the left-hand panel.
+1. See **Gateway Attachment State** to determine whether this hub is included in the global VPN profile. If the state is **attached**, the hub is included in the global VPN profile. If the state is **detached**, it isn't.
+
+   :::image type="content" source="./media/global-hub-profile/attachment-state.png" alt-text="Screenshot showing attachment state of gateway." lightbox="./media/global-hub-profile/attachment-state.png":::
+
+To include or exclude a specific hub from the global VPN profile:
+
+1. Click **Include/Exclude Gateway from Global Profile**.
+
+   :::image type="content" source="./media/global-hub-profile/include-exclude-1.png" alt-text="Screenshot showing how to include or exclude a hub from the profile." lightbox="./media/global-hub-profile/include-exclude-1.png":::
+
+1. Click **Exclude** if you wish to remove this hub's gateway from the WAN Global User VPN Profile. Users who are using the Hub-level User VPN profile will still be able to connect to this gateway. Users who are using the WAN-level profile will not be able to connect to this gateway.
+
+1. Click **Include** if you wish to include this hub's gateway in the Virtual WAN Global User VPN Profile. Users who are using this WAN-level profile will be able to connect to this gateway.
++
+ ![Hub profile 4](./media/global-hub-profile/include-exclude.png)
+
## Hub-based profile

The profile points to a single hub. The user can only connect to the particular hub using this profile. To download the **hub-based** profile:
![Hub profile 1](./media/global-hub-profile/hub1.png)

3. Click **User VPN (Point to site)**.
4. Click **Download virtual Hub User VPN profile**.
+   :::image type="content" source="./media/global-hub-profile/hub2.png" alt-text="Screenshot showing how to download hub profile." lightbox="./media/global-hub-profile/hub2.png":::
- ![Hub profile 2](./media/global-hub-profile/hub2.png)
5. Check **EAPTLS**.
6. Click **Generate and download profile**.

   ![Hub profile 3](./media/global-hub-profile/download.png)

## Next steps

To learn more about Virtual WAN, see the [Virtual WAN Overview](virtual-wan-about.md) page.
virtual-wan Virtual Wan Point To Site Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-powershell.md
++
+ Title: 'Tutorial: Use Azure Virtual WAN to create a Point-to-Site connection to Azure using PowerShell'
+description: In this tutorial, learn how to use Azure Virtual WAN to create a User VPN (point-to-site) connection to Azure using PowerShell
+++++ Last updated : 02/01/2022+++
+# Tutorial: Create a User VPN connection to Azure Virtual WAN using PowerShell
+
+This tutorial shows you how to use PowerShell and Virtual WAN to connect to your resources in Azure over an OpenVPN or IPsec/IKE (IKEv2) VPN connection, using a User VPN (P2S) configuration. This type of connection requires the native VPN client to be configured on each connecting client computer.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a virtual WAN
+> * Create the User VPN configuration
+> * Create the virtual hub and gateway
+> * Generate client configuration files
+> * Configure VPN clients
+> * Connect to a VNet
+> * Clean up resources
+++
+## Prerequisites
++
+### Azure PowerShell
++
+## <a name="signin"></a>Sign in
++
+## <a name="openvwan"></a>Create a virtual WAN
+
+Before you can create a virtual WAN, you have to create a resource group to host it or use an existing resource group. Create a resource group with [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). This example creates a new resource group named **testRG** in the **West US** location.
+
+Create a resource group:
+
+```azurepowershell-interactive
+New-AzResourceGroup -Location "West US" -Name "testRG"
+```
+
+Create the virtual WAN:
+
+```azurepowershell-interactive
+$virtualWan = New-AzVirtualWan -ResourceGroupName testRG -Name myVirtualWAN -Location "West US"
+```
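+Optionally, verify that the new virtual WAN resource exists; a quick sketch:
+
+```azurepowershell-interactive
+# Confirm the virtual WAN was created and inspect its properties
+Get-AzVirtualWan -ResourceGroupName testRG -Name myVirtualWAN
+```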
+
+## <a name="p2sconfig"></a>Create a User VPN configuration
+
+The User VPN (P2S) configuration defines the parameters that remote clients use to connect. Three authentication methods are available, each with its own requirements.
+
+* **Azure certificates:** For this configuration, certificates are required. You need to either generate or obtain certificates. A client certificate is required for each client. Additionally, the root certificate information (public key) needs to be uploaded. For more information about the required certificates, see [Generate and export certificates](certificates-point-to-site.md).
+
+* **Azure Active Directory authentication:** Use the [Configure a User VPN connection - Azure Active Directory authentication](virtual-wan-point-to-site-azure-ad.md) article, which contains the specific steps necessary for this configuration.
+
+* **RADIUS-based authentication:** Obtain the RADIUS server IP address, RADIUS server secret, and certificate information.
+
+### Configuration steps using Azure Certificate authentication
+
+User VPN (point-to-site) connections can use certificates to authenticate. To create a self-signed root certificate and generate client certificates using PowerShell on Windows 10 or Windows Server 2016, see [Generate and export certificates](certificates-point-to-site.md).
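+As a condensed sketch of that article's approach, the commands below create a self-signed root certificate and a client certificate chained to it. The subject and DNS names are examples only.
+
+```azurepowershell-interactive
+# Self-signed root certificate (Windows 10 or later / Windows Server 2016)
+$rootCert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
+    -Subject "CN=P2SRootCert" -KeyExportPolicy Exportable `
+    -HashAlgorithm sha256 -KeyLength 2048 `
+    -CertStoreLocation "Cert:\CurrentUser\My" `
+    -KeyUsageProperty Sign -KeyUsage CertSign
+
+# Client certificate signed by the root; each connecting client needs one
+New-SelfSignedCertificate -Type Custom -DnsName P2SChildCert -KeySpec Signature `
+    -Subject "CN=P2SChildCert" -KeyExportPolicy Exportable `
+    -HashAlgorithm sha256 -KeyLength 2048 `
+    -CertStoreLocation "Cert:\CurrentUser\My" `
+    -Signer $rootCert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")
+```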
+
+Once you've generated and exported the self-signed root certificate, you need to reference the location of the stored certificate. If you're using Cloud Shell in the Azure portal, you need to upload the certificate first.
+
+```azurepowershell-interactive
+# Reference the uploaded root certificate (replace "name" with your Cloud Shell username)
+$VpnServerConfigCertFilePath = Join-Path -Path "/home/name" -ChildPath "P2SRootCert1.cer"
+$listOfCerts = New-Object "System.Collections.Generic.List[String]"
+$listOfCerts.Add($VpnServerConfigCertFilePath)
+```
+
+Next, create the User VPN server configuration. For the VPN protocol, you can choose IKEv2 VPN, OpenVPN, or both OpenVPN and IKEv2, depending on your requirements.
+
+```azurepowershell-interactive
+New-AzVpnServerConfiguration -Name testconfig -ResourceGroupName testRG -VpnProtocol IkeV2 -VpnAuthenticationType Certificate -VpnClientRootCertificateFilesList $listOfCerts -VpnClientRevokedCertificateFilesList $listOfCerts -Location westus
+```
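+If you want OpenVPN instead of IKEv2, the same cmdlet accepts a different protocol value. A variant sketch, where the configuration name is hypothetical:
+
+```azurepowershell-interactive
+# Same configuration, but using the OpenVPN protocol
+New-AzVpnServerConfiguration -Name testconfigovpn -ResourceGroupName testRG `
+    -VpnProtocol OpenVPN -VpnAuthenticationType Certificate `
+    -VpnClientRootCertificateFilesList $listOfCerts `
+    -VpnClientRevokedCertificateFilesList $listOfCerts -Location westus
+```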
+
+## <a name="hub"></a>Create the hub and Point-to-Site Gateway
+
+First, create the virtual hub:
+
+```azurepowershell-interactive
+New-AzVirtualHub -VirtualWan $virtualWan -ResourceGroupName "testRG" -Name "westushub" -AddressPrefix "10.11.0.0/24" -Location "westus"
+```
+
+Next, declare variables for the existing resources and specify the client address pool from which IP addresses will be automatically assigned to VPN clients.
+
+```azurepowershell-interactive
+$virtualHub = Get-AzVirtualHub -ResourceGroupName testRG -Name westushub
+$vpnServerConfig = Get-AzVpnServerConfiguration -ResourceGroupName testRG -Name testconfig
+$vpnClientAddressSpaces = New-Object string[] 1
+$vpnClientAddressSpaces[0] = "192.168.2.0/24"
+```
+
+For the Point-to-Site Gateway, you need to specify the gateway scale units and reference the User VPN server configuration created earlier. Creating a Point-to-Site Gateway can take 30 minutes or more to complete.
+
+```azurepowershell-interactive
+$P2SVpnGateway = New-AzP2sVpnGateway -ResourceGroupName testRG -Name p2svpngw -VirtualHub $virtualHub -VpnGatewayScaleUnit 1 -VpnClientAddressPool $vpnClientAddressSpaces -VpnServerConfiguration $vpnServerConfig -EnableInternetSecurityFlag -EnableRoutingPreferenceInternetFlag
+```
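+Because provisioning is long-running, you may want to confirm the gateway's state before continuing. A sketch, assuming the standard `ProvisioningState` property on the returned Az.Network resource object:
+
+```azurepowershell-interactive
+# Check that the Point-to-Site Gateway has finished provisioning
+Get-AzP2sVpnGateway -ResourceGroupName testRG -Name p2svpngw |
+    Select-Object Name, ProvisioningState
+```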
+
+## <a name="download"></a>Generate client configuration files
+
+When you connect to a VNet using User VPN (P2S), you use the VPN client that's natively installed on the operating system you're connecting from. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. In this section, you run the command that returns the profile URL used to generate and download the files that configure your VPN clients.
+
+```azurepowershell-interactive
+Get-AzVirtualWanVpnServerConfigurationVpnProfile -Name myVirtualWAN -ResourceGroupName testRG -VpnServerConfiguration $vpnServerConfig -AuthenticationMethod EAPTLS
+```
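+The cmdlet returns a short-lived SAS URL for the profile package. As a sketch, you could capture the response and download the package directly; treat the `ProfileUrl` property name as an assumption about the Az.Network response object:
+
+```azurepowershell-interactive
+$vpnProfile = Get-AzVirtualWanVpnServerConfigurationVpnProfile -Name myVirtualWAN `
+    -ResourceGroupName testRG -VpnServerConfiguration $vpnServerConfig -AuthenticationMethod EAPTLS
+
+# Download the client configuration package from the returned SAS URL
+Invoke-WebRequest -Uri $vpnProfile.ProfileUrl -OutFile "vpnclientconfiguration.zip"
+```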
+
+## <a name="configure-client"></a>Configure VPN clients
+
+Use the downloaded profile package to configure the remote access VPN clients. The procedure for each operating system is different. Follow the instructions that apply to your system.
+Once you have finished configuring your client, you can connect.
++
+## <a name="connect-vnet"></a>Connect VNet to hub
+
+First, declare a variable to get the existing virtual network:
+
+```azurepowershell-interactive
+$remoteVirtualNetwork = Get-AzVirtualNetwork -Name "testRGvnet" -ResourceGroupName "testRG"
+```
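+The command above assumes **testRGvnet** already exists. If it doesn't, a sketch for creating it, with an address space chosen so it doesn't overlap the hub prefix:
+
+```azurepowershell-interactive
+# Create the VNet if needed; address space must not overlap the hub's 10.11.0.0/24
+$remoteVirtualNetwork = New-AzVirtualNetwork -Name "testRGvnet" -ResourceGroupName "testRG" `
+    -Location "West US" -AddressPrefix "10.20.0.0/16"
+```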
+
+Then, create a connection between your virtual hub and your VNet:
+
+```azurepowershell-interactive
+New-AzVirtualHubVnetConnection -ResourceGroupName "testRG" -VirtualHubName "westushub" -Name "testvnetconnection" -RemoteVirtualNetwork $remoteVirtualNetwork
+```
+
+## <a name="cleanup"></a>Clean up resources
+
+When you no longer need the resources that you created, delete them. Some of the Virtual WAN resources must be deleted in a certain order due to dependencies. Deleting can take about 30 minutes to complete.
+
+1. Delete the gateway entities in the following order for the Point-to-Site VPN configuration. This can take up to 30 minutes to complete.
+
+   Delete the Point-to-Site VPN Gateway:
+
+ ```azurepowershell-interactive
+ Remove-AzP2sVpnGateway -Name "p2svpngw" -ResourceGroupName "testRG"
+ ```
+
+   Delete the User VPN server configuration:
+
+ ```azurepowershell-interactive
+ Remove-AzVpnServerConfiguration -Name "testconfig" -ResourceGroupName "testRG"
+ ```
+
+1. You can delete the resource group to delete all the other resources it contains, including the hubs, sites, and the virtual WAN.
+
+ ```azurepowershell-interactive
+ Remove-AzResourceGroup -Name "testRG"
+ ```
+
+1. Or you can choose to delete each of the resources in the resource group individually:
+
+   Delete the virtual hub:
+
+ ```azurepowershell-interactive
+ Remove-AzVirtualHub -ResourceGroupName "testRG" -Name "westushub"
+ ```
+
+   Delete the virtual WAN:
+
+ ```azurepowershell-interactive
+    Remove-AzVirtualWan -Name "myVirtualWAN" -ResourceGroupName "testRG"
+ ```
++
+## Next steps
+
+Next, to learn more about Virtual WAN, see:
+
+> [!div class="nextstepaction"]
+> * [Virtual WAN FAQ](virtual-wan-faq.md)
web-application-firewall Web Application Firewall Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/web-application-firewall-logs.md
The firewall log is generated only if you have enabled it for each application gateway.
|ruleSetVersion | Rule set version used. Available values are 2.2.9 and 3.0. |
|ruleId | Rule ID of the triggering event. |
|message | User-friendly message for the triggering event. More details are provided in the details section. |
-|action | Action taken on the request. Available values are Blocked and Allowed (for custom rules), Matched (when a rule matches a part of the request), and Detected and Blocked (these are both for mandatory rules, depending on if the WAF is in detection or prevention mode). |
+|action | Action taken on the request. Available values are: <br>**Blocked and Allowed** (for custom rules) <br>**Matched** (when a rule matches a part of the request) <br>**Detected and Blocked** (both for mandatory rules, depending on whether the WAF is in detection or prevention mode). |
|site | Site for which the log was generated. Currently, only Global is listed because rules are global.|
|details | Details of the triggering event. |
|details.message | Description of the rule. |
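+Since the firewall log only appears once diagnostics are enabled, here's a sketch of enabling it with Azure PowerShell. The resource and storage account names are placeholders, and newer Az.Monitor versions replace `Set-AzDiagnosticSetting` with `New-AzDiagnosticSetting`:
+
+```azurepowershell-interactive
+# Enable the firewall log category for an existing application gateway
+$appGw = Get-AzApplicationGateway -Name "myAppGateway" -ResourceGroupName "myRG"
+$storage = Get-AzStorageAccount -ResourceGroupName "myRG" -Name "mywaflogs"
+
+Set-AzDiagnosticSetting -ResourceId $appGw.Id -StorageAccountId $storage.Id `
+    -Enabled $true -Category "ApplicationGatewayFirewallLog"
+```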